Scientific agriculture: the 20th century

Agricultural technology developed more rapidly in the 20th century than in all previous history. Though the most important developments during the first half of the century took place in the industrial countries, especially the United States, the picture changed somewhat after the 1950s. With the coming of independence, former colonies in Africa and Asia initiated large-scale efforts to improve their agriculture. In many cases they used considerable ingenuity in adapting Western methods to their own climates, soils, and crops.

Developments in power: the internal-combustion engine

The internal-combustion engine brought major changes to agriculture in most of the world. In advanced regions it soon became the chief power source for the farm.

The tractor

The first applications to agriculture of the four-stroke-cycle gasoline engine were as stationary engines, at first in Germany, later elsewhere. By the 1890s stationary engines were mounted on wheels to make them portable, and soon a drive was added to make them self-propelled. The first successful gasoline tractor was built in the United States in 1892. Within a few years several companies were manufacturing tractors in Germany, the United Kingdom, and the United States. The number of tractors in the more developed countries increased dramatically during the 20th century, especially in the United States: in 1907 some 600 tractors were in use, but the figure had grown to almost 3,400,000 by 1950.

Major changes in tractor design throughout the 20th century produced a much more efficient and useful machine. Principal among these were the power takeoff, introduced in 1918, in which power from the tractor's engine could be transmitted directly to an implement through the use of a special shaft; the all-purpose, or tricycle-type, tractor (1924), which enabled farmers to cultivate planted crops mechanically; rubber tires (1932), which facilitated faster operating speeds; and the switch to four-wheel drives and diesel power in the 1950s and 1960s, which greatly increased the tractor's pulling power. The last innovations have led to the development of enormous tractors—usually having double tires on each wheel and enclosed, air-conditioned cabs—that can pull several gangs of plows.

Unit machinery

After World War II, there was an increase in the use of self-propelled machines in which the motive power and the equipment for performing a particular task formed one unit. Though the grain combine is the most important of these single-unit machines, self-propelled units are also in use for spraying, picking cotton, baling hay, picking corn, and harvesting tomatoes, lettuce, sugar beets, and many other crops. These machines are faster and easier to operate than implements powered by a separate tractor and, above all, have lower labour requirements.

Grain combine

The first successful grain combine, a machine that cuts ripe grain and separates the kernels from the straw, was built in the United States in 1836. Lack of an adequate power unit and the tendency of combined grain to spoil because of excessive moisture limited its development, however. Large combines, powered by as many as 40 horses, were used in California in the latter part of the 19th century. Steam engines replaced horses on some units as a power source, but, about 1912, the gasoline engine began to replace both horses and steam for pulling the combine and operating its mechanism. A one-man combine, powered by a two-plow-sized tractor (i.e., one large enough to pull two plows), was developed in 1935. This was followed by a self-propelled machine in 1938.

Mechanized equipment for corn

Corn (maize), the most important single crop in the United States and extremely important in many other countries, is grown commercially with the aid of equipment operated by tractors or by internal-combustion engines mounted on the machines. Maize pickers came into use in the U.S. Corn Belt after World War I and were even more widely adopted after World War II. These pickers vary in complexity from the snapper-type harvester, which removes the ears from the stalks but does not husk them, to the picker-sheller, which not only removes the husk but shells the grain from the ear. The latter is often used in conjunction with dryers. Modern machines can harvest as many as 12 rows of corn at a time.

Mechanized equipment for cotton

Mechanization has also reduced substantially the labour needed to grow cotton. Equipment includes tractor, two-row stalk-cutter, disk (to shred the stalks), bedder (to shape the soil into ridges or seedbeds), planter, cultivator, sprayer, and harvester. Cotton fibre is harvested by a stripper-type harvester, developed in the 1920s, or by a picker. The stripper strips the entire plant of both open and unopened bolls and collects many leaves and stems. Though a successful cotton picker that removed the seed cotton from the open bolls and left the burrs on the plant was invented in 1927, it did not come into use until after World War II. Strippers are used mostly in dry regions, while pickers are employed in humid, warm areas. The pickers are either single-row machines mounted on tractors or two-row self-propelled machines.

Tomato-harvesting equipment

The self-propelled mechanical tomato harvester, developed in the early 1960s by engineers working in cooperation with plant breeders, handles virtually all packing tomatoes grown in California. Harvesters using electronic sorters can further reduce labour requirements.

Automobiles, trucks, and airplanes

The automobile and truck have also had a profound effect upon agriculture and farm life. Since their appearance on American farms between 1913 and 1920, trucks have changed patterns of production and marketing of farm products. Trucks deliver such items as fertilizer, feed, and fuels; go into the fields as part of the harvest equipment; and haul the crops to markets, warehouses, or packing and processing plants. Most livestock is trucked to market.

The airplane may have been used agriculturally in the United States as early as 1918 to distribute poison dust over cotton fields that were afflicted with the pink bollworm. While records of this experiment are fragmentary, it is known that airplanes were used to locate and map cotton fields in Texas in 1919. In 1921 a widely publicized dusting experiment took place near Dayton, Ohio. Army pilots, working with Ohio entomologists, dusted a six-acre (2.5-hectare) grove of catalpa trees with arsenate of lead to control the sphinx caterpillar. The experiment was successful. This and other experiments encouraged the development of dusting and spraying, mainly to control insects, diseases, weeds, and brush. In recognition of the possible long-term harmful effects of some of the chemicals, aerial dusting and spraying have been subject to various controls since the 1960s.

Airplanes are also used to distribute fertilizer, to reseed forest terrain, and to control forest fires. Many rice growers use planes to seed, fertilize, and spray pesticides, and even to hasten crop ripening by spraying hormones from the air.

During heavy storms, airplanes have dropped baled hay to cattle stranded in snow. Airplanes have also been used to transport valuable breeding stock, particularly in Europe. Valuable and perishable farm products are frequently transported by air. Airplanes are especially valuable in such large agricultural regions as western Canada and Australia, where they provide almost every type of service to isolated farmers.

New crops and techniques

New crops and techniques are, in reality, modifications of the old. Soybeans, sugar beets, and grain sorghums, for example, all regarded as new crops, are new only in the sense that they are now grown in wider areas and have different uses from those of earlier times. Such techniques as terracing, dry farming, and irrigation are nearly as old as the practice of agriculture itself, but their widespread application is still increasing productivity in many parts of the world.

New crops

The soybean

This is an outstanding example of an ages-old crop that, because of the development of new processes to make its oil and meal more useful, is widely produced today. In the East, where the soybean originated long ago, more than half the crop is used directly for food, and less than a third is pressed for oil. Its high protein and fat content make it a staple in the diet, replacing or supplementing meat for millions of people.

Though first reported grown in America in 1804, the soybean remained a rare garden plant for nearly 100 years. Around the beginning of the 20th century, when three new varieties were introduced from Japan, U.S. farmers began growing it for hay, pasture, and green manure. In the early 1930s a soybean oil processing method that eliminated a disagreeable odour from the finished product was developed. World War II brought an increased demand for edible oil. The food industry began using soybean oil for margarine, shortening, salad oil, mayonnaise, and other food products and continues to be its chief user. Manufacturers of paints, varnishes, and other drying oil products are the most important nonfood users.

Development of the solvent process of extracting soybean oil has greatly increased the yield. A 60-pound bushel of soybeans processed by this method yields 10 1/2 pounds of oil and 45 pounds of meal. Soybean meal and cake are used chiefly for livestock feed in the United States. The high protein content of the meal has made it an attractive source of industrial protein, and, with proper processing, it is an excellent source of protein for humans. In 2014 the United States and Brazil were the world's largest soybean producers.
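The arithmetic of the solvent process is easy to restate. The following sketch (Python, purely illustrative; the constants are simply the figures quoted above) computes the oil and meal recovered from a given number of bushels:

    # Illustrative restatement of the figures above: a 60-lb bushel yields
    # about 10.5 lb of oil and 45 lb of meal under the solvent process.
    OIL_LB_PER_BUSHEL = 10.5
    MEAL_LB_PER_BUSHEL = 45.0
    BUSHEL_LB = 60.0

    def crush_yield(bushels):
        """Oil, meal, and residue (hulls, moisture, loss) in pounds."""
        return {
            "oil_lb": bushels * OIL_LB_PER_BUSHEL,
            "meal_lb": bushels * MEAL_LB_PER_BUSHEL,
            "other_lb": bushels * (BUSHEL_LB - OIL_LB_PER_BUSHEL - MEAL_LB_PER_BUSHEL),
        }

    print(crush_yield(100))  # 100 bushels -> 1,050 lb oil, 4,500 lb meal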

Development of new soybean varieties suited for different parts of the world is made possible by hybridization and genetic modification. Hybridization permits isolating types that are superior in yielding ability, resistance to lodging (breakage of the plant by wind and rain) and shattering (of the bean), adaptation to suit various requirements for maturity, and resistance to disease. Genetically modified soybeans are engineered to be resistant to glyphosate, a herbicide, and are among the most widely cultivated genetically modified organisms (GMOs).

Sorghum

Just as the soybean was used for many centuries in Asia before its introduction into the Western world, so sorghum was a major crop in Africa. Sorghum is fifth in importance among the world's cereals, coming after wheat, rice, corn, and barley. It is called by a variety of names, including Guinea corn in West Africa, kafir corn in South Africa, durra in Sudan and South Sudan, and mtama in East Africa. In India it is known as jowar, cholam, and great millet, and it is called gaoliang in China. In the United States it is often called milo, while the sweet-stemmed varieties are referred to as sweet sorghum or sorgo.

Sorghum probably was domesticated in Ethiopia about 3,000 years ago. From there it spread to West and East Africa and then southward. Traders from Africa to the East carried sorghum as provisions on their dhows. It is likely that sorghum thus reached India, where cultivation began between 1,500 and 1,000 years ago. Other traders carried sorghum to China and the other countries of East Asia. The amber sorghums, or sorgos, useful for forage and syrup, may have moved by sea while the grain sorghums probably moved overland. The movement to the Mediterranean and Southwest Asia also began through traders.

Sorghum reached the Americas through the slave trade. Guinea corn and chicken corn came from West Africa to America as provisions for the slaves. Other types were introduced into the United States by seedsmen and scientists from about 1870 to 1910. Seed was sometimes sold to farmers as a highly productive new variety of corn. It was not until the 1930s, after the value of the plant as grain, forage, and silage for livestock feeding had been recognized, that acreage began to increase. Yields rose markedly in the late 1950s, after successful hybridization of the crop. Better yields led in turn to increased acreage.

Chinese ambercane was brought from France to the United States in 1854 and was distributed to farmers. While the cane provided good forage for livestock, promoters of the new crop were most interested in refining sugar from the sorghum molasses, a goal that persisted for many years. While refining technology has been perfected, the present cost of sorghum sugar does not permit it to compete with sugar from other sources.

Large amounts of sorghum grain are eaten every year by people of many countries. If the world population continues to grow as projected, food is likely to be sorghum's most important use. Most of the sorghum is ground into flour, often at home. Some is consumed as a whole-kernel food. Some of the grain is used for brewing beer, particularly in Africa.

The sugar beet

The sugar beet as a crop is much newer than either soybeans or sorghum. Although beets had been a source of sweets among ancient Egyptians, Indians, Chinese, Greeks, and Romans, it was not until 1747 that a German apothecary, Andreas Marggraf, obtained sugar crystals from the beet. Some 50 years later Franz Karl Achard, son of a French refugee in Prussia and student of Marggraf, improved the Silesian stock beet—probably a mangel-wurzel—as a source of sugar. He erected the first pilot beet-sugar factory at Cunern, Silesia (now in Poland), in 1802. Thus began the new use for sugar of a crop traditionally used as animal feed.

When during the Napoleonic Wars continental Europe was cut off from West Indies cane sugar, further experimentation with beet sugar was stimulated. In 1808 a French scientist, Benjamin Delessert, used charcoal in clarification, which ensured the technical success of beet sugar. On March 25, 1811, Napoleon issued a decree that set aside 80,000 acres (about 32,375 hectares) of land for the production of beets, established six special sugar-beet schools to which 100 select students were given scholarships, directed construction of 10 new factories, and appropriated substantial bounties to encourage the peasants to grow beets. By 1814, 40 small factories were in operation in France, Belgium, Germany, and Austria. Although the industry declined sharply after Napoleon's defeat, it was soon revived. For the last third of the 19th century, beets replaced cane as the leading source of sugar.

Since World War II, major changes have taken place in sugar-beet production in the United States and, to a lesser extent, in Germany and other countries with a substantial production. These changes may be illustrated by developments in the United States.

In 1931 the California Agricultural Experiment Station and the U.S. Department of Agriculture undertook a cooperative study of the mechanization of sugar-beet growing and harvesting. The goal in harvesting was a combine that would perform all the harvesting operations—lifting from the soil, cutting the tops, and loading—in one trip down the row. By the end of World War II, four different types of harvesters were being manufactured.

The spring and summer operations—planting, blocking (cutting out all plants except for clumps standing 10 or 12 inches [25 or 30 centimetres] apart), thinning, and weeding—did not yield so easily to mechanization, largely because the beet seed, a multigerm seedball, produced several seedlings, resulting in dense, clumpy, and somewhat irregular stands. In 1941 a machine for segmenting the seedball was developed. The problem was solved in 1948, when a plant with a true single-germ seed was discovered in Oregon. Now precision seed drills could be used, and plants could be first blocked and then cultivated mechanically using a cross-cultivating technique—i.e., cultivating the rows up and down and then across the field. During World War I, 11.2 hours of labour were required to produce a ton of sugar beets; in 1964, 2.7 hours were needed.

New techniques

As the development of the sugar beet shows, new techniques may bring particular crops into prominence. This discussion, however, is confined to three that, in some forms, are old yet today are transforming agriculture in many parts of the world.

Terracing

Terracing, which is basically grading steep land, such as hillsides, into a series of level benches, was known in antiquity and was practiced thousands of years ago in such divergent areas as the Philippines, Peru, and Central Africa. Today, terracing is of major importance in Japan, Mexico, and parts of the United States, while many other countries, including Israel, Australia, South Africa, Colombia, and Brazil, are increasing productivity through the inauguration of this and other soil-conserving practices.

Colombia provides an example of the modern need for terracing. For many years, the steep slopes used for producing the world-renowned Colombian coffee have been slowly eroding. During the 1960s, experimental work showed that contour planting and terracing would help preserve the land. Farther south, the Brazilian state of São Paulo created a terracing service in 1938. Since then, the program has become a full conservation service.

Irrigation

The usefulness of a full-scale conservation project is seen in the Snowy Mountains Scheme of Australia (1949–74), where three river systems were diverted to convert hundreds of miles of arid but fertile plains to productive land. Intensive soil conservation methods were undertaken wherever the natural vegetation and soil surface had been disturbed. Drainage is controlled by stone and steel drains, grassed waterways, absorption and contour terraces, and settling ponds. Steep slopes are stabilized by woven wickerwork fences, brush matting, and bitumen sprays, followed by revegetation with white clover and willow and poplar trees. Grazing is strictly controlled to prevent silting of the reservoirs and damage to slopes. The two main products of the plan are power for new industries and irrigation water for agriculture, with recreation and a tourist industry as important by-products.

Australia's Snowy Mountains Scheme is a modern successor, so far as irrigation is concerned, to practices that have provided water for crops almost from the beginnings of agriculture. The simplest method of irrigation was to dip water from a well or spring and pour it on the land. Many types of buckets, ropes, and, later, pulleys were employed. The ancient shadoof, a long pole pivoted on a beam, with a counterweight at one end to lift a full bucket of water at the other, is still in use. Conduction of water through ditches from streams was practiced widely in Southwest Asia, in Africa, and in the Americas, where ancient canal systems can be seen. A conduit the Romans built 2,000 years ago to provide a water supply to Tunis is still in use.

Sufficient water at the proper time makes possible the full use of technology in farming—including the proper application of fertilizers, suitable crop rotations, and the use of more productive varieties of crops. Expanding irrigation is an absolute necessity to extend crop acreage in significant amounts; it may be the most productive of possible improvements on present cropland. First, there is the possibility of making wider use of irrigation in districts that already have a high rate of output. Second, there is the possibility of irrigating nonproductive land, especially in arid zones. The greatest immediate economic returns might well come from irrigating productive districts, but irrigation of arid zones has a larger long-range appeal. Most of the arid zones, occupying more than one-third of the landmass of the globe, are in the tropics. Generally, they are rich in solar energy, and their soils are rich in nutrients, but they lack water.

Supplemental irrigation in the United States, used primarily to make up for poor distribution of rainfall during the growing season, has increased substantially since the late 1930s. This irrigation is carried on in the humid areas of the United States almost exclusively with sprinkler systems. The water is conveyed in pipes, usually laid on the surface of the field, and the soil acts as a storage reservoir. The water itself is pumped from a stream, lake, well, or reservoir. American farmers first used sprinkler irrigation about 1900, but the development of lightweight aluminum pipe with quick couplers meant that the pipe could be moved easily and quickly from one location to another, resulting in a notable increase in the use of sprinklers after World War II.

India, where irrigation has been practiced since ancient times, illustrates some of the problems. During the late 20th century, more than 20 percent of the country's cultivated area was under irrigation. Water has been supplied both by large dams with canals to distribute the water and by small tube, or driven, wells (made by driving a pipe into water or water-bearing sand) controlled by individual farmers. Some areas have been affected by salinity, however, as water containing dissolved salts has been allowed to evaporate in the field. Tube wells have helped in these instances by lowering the water table and by providing sufficient water to flush away the salts. The other major problem has been to persuade Indian farmers to level their lands and build the small canals needed to carry the water over the farms. In Egypt, impounding of the Nile River with the Aswān High Dam has been a great boon to agriculture, but it has also reduced the flow of silt into the Nile Valley and adversely affected fishing in the Mediterranean Sea. In arid areas such as the U.S. Southwest, tapping subterranean water supplies has resulted in a lowered water table and, in some instances, land subsidence.

Dry farming

The problem of educating farmers to make effective use of irrigation water is found in many areas. An even greater educational effort is required for dry farming, that is, crop production without irrigation in regions where annual precipitation is less than 20 inches (50 cm).

Dry farming as a system of agriculture was developed in the Great Plains of the United States early in the 20th century. It depended on the efficient storage of the limited moisture in the soil and the selection of crops and growing methods that made best use of this moisture. The system included deep fall plowing, subsurface packing of the soil, thorough cultivation both before and after seeding, light seeding, and alternating-summer fallow, with the land tilled during the season of fallow as well as in crop years. In certain latitudes stubble was left in the fields after harvest to trap snow. Though none of the steps were novel, their systematic combination was new. Systematic dry farming has continued, with substantial modifications, in the Great Plains of Canada and the United States, in Brazil, in South Africa, in Australia, and elsewhere. It is under continuing research by the Food and Agriculture Organization of the United Nations.

The direction of change

While no truly new crop has been developed in modern times, new uses and new methods of cultivation of known plants may be regarded as new crops. For example, subsistence and special-use plants, such as the members of the genus Atriplex that are salt-tolerant, have the potential for being developed into new crops. New techniques, too, are the elaboration and systematization of practices from the past.

New strains: genetics

The use of genetics to develop new strains of plants and animals has brought major changes in agriculture since the 1920s. Genetics as the science dealing with the principles of heredity and variation in plants and animals was established only at the beginning of the 20th century. Its application to practical problems came later.

Early work in genetics

The modern science of genetics and its application to agriculture has a complicated background, built up from the work of many individuals. Nevertheless, Gregor Mendel is generally credited with its founding. Mendel, a monk in Brünn, Moravia (now Brno, Czech Republic), purposefully crossed garden peas in his monastery garden. He carefully sorted the progeny of his parent plants according to their characteristics and counted the number that had inherited each quality. He discovered that when the qualities he was studying, including flower colour and shape of seeds, were handed on by the parent plants, they were distributed among the offspring in definite mathematical ratios, from which there was never a significant variation. Definite laws of inheritance were thus established for the first time. Though Mendel reported his discoveries in an obscure Austrian journal in 1866, his work was not followed up for a third of a century. Then in 1900, investigators in the Netherlands, Germany, and Austria, all working on inheritance, independently rediscovered Mendel's paper.

By the time Mendel's work was again brought to light, the science of genetics was in its first stages of development. Central to the new science were genes, the name given to the minute units of living matter that transmit characteristics from parent to offspring. By 1903 scientists in the United States and Germany had concluded that genes are carried in the chromosomes, nuclear structures visible under the microscope. In 1911 a theory was announced that the genes are arranged in a linear file on the chromosomes and that changes in this conformation are reflected in changes in heredity.

Genes are highly stable. During the processes of sexual reproduction, however, means are present for assortment, segregation, and recombination of genetic factors. Thus, tremendous genetic variability is provided within a species. This variability makes possible the changes that can be brought about within a species to adapt it to specific uses. Occasional mutations (spontaneous changes) of genes also contribute to variability.

Development of new strains of plants and animals did not, of course, await the science of genetics, and some advances were made by empirical methods even after the application of genetic science to agriculture. The U.S. plant breeder Luther Burbank, without any formal knowledge of genetic principles, developed the Burbank potato as early as 1873 and continued his plant-breeding research, which produced numerous new varieties of fruits and vegetables. In some instances, both practical experience and scientific knowledge contributed to major technological achievements. An example is the development of hybrid corn.

Maize, or corn

Maize originated in the Americas, having been first developed by Indians in the highlands of Mexico. It was quickly adopted by the European settlers, Spanish, English, and French. The first English settlers found the northern Indians growing a hard-kerneled, early-maturing flint variety that kept well, though its yield was low. Indians in the south-central area of English settlement grew a soft-kerneled, high-yielding, late-maturing dent corn. There were doubtless many haphazard crosses of the two varieties. In 1812, however, John Lorain, a farmer living near Philipsburg, Pa., consciously mixed the two and demonstrated that certain mixtures would result in a yield much greater than that of the flint, yet with many of the flint's desirable qualities. Other farmers and breeders followed Lorain's example, some aware of his pioneer work, some not. The most widely grown variety of the Corn Belt for many years was Reid's Yellow Dent, which originated from a fortuitous mixture of a dent and a flint variety.

At the same time, other scientists besides Mendel were conducting experiments and developing theories that were to lead directly to hybrid maize. In 1876 Charles Darwin published the results of experiments on cross- and self-fertilization in plants. Carrying out his work in a small greenhouse in his native England, the man who is best known for his theory of evolution found that inbreeding usually reduced plant vigour and that crossbreeding restored it.

Darwin's work was studied by a young American botanist, William James Beal, who probably made the first controlled crosses between varieties of maize for the sole purpose of increasing yields through hybrid vigour. Beal worked successfully without knowledge of the genetic principle involved. In 1908 George Harrison Shull concluded that self-fertilization tended to separate and purify strains while weakening the plants but that vigour could be restored by crossbreeding the inbred strains. Another scientist found that inbreeding could increase the protein content of maize, but with a marked decline in yield. With knowledge of inbreeding and hybridization at hand, scientists had yet to develop a technique whereby hybrid maize with the desired characteristics of the inbred lines and hybrid vigour could be combined in a practical manner. In 1917 Donald F. Jones of the Connecticut Agricultural Experiment Station discovered the answer, the double cross.

The double cross was the basic technique used in developing modern hybrid maize and has been used by commercial firms since. Jones's invention was to use four inbred lines instead of two in crossing. Simply, inbred lines A and B made one cross, lines C and D another. Then AB and CD were crossed, and a double-cross hybrid, ABCD, was the result. This hybrid became the seed that changed much of American agriculture. Each inbred line was constant both for certain desirable and for certain undesirable traits, but the practical breeder could balance his four or more inbred lines in such a way that the desirable traits outweighed the undesirable. Foundation inbred lines were developed to meet the needs of varying climates, growing seasons, soils, and other factors. The large hybrid seed-corn companies undertook complex applied-research programs, while state experiment stations and the U.S. Department of Agriculture tended to concentrate on basic research.
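The logic of the double cross can be shown schematically. In the following sketch (Python, purely illustrative; the line names and trait labels are hypothetical, and a "cross" here simply pools the parents' labels and fixed traits), four inbred lines are combined pairwise and the two single crosses are then crossed again:

    # Schematic sketch of the double cross; names and traits are hypothetical.
    def cross(line1, line2):
        """Combine two parents: the offspring pools both names and traits."""
        name = line1[0] + line2[0]
        traits = line1[1] | line2[1]
        return (name, traits)

    a = ("A", {"strong stalk"})
    b = ("B", {"deep roots"})
    c = ("C", {"large ear"})
    d = ("D", {"early maturity"})

    ab = cross(a, b)      # first single cross
    cd = cross(c, d)      # second single cross
    abcd = cross(ab, cd)  # the double cross: seed of this type went to farmers

    print(abcd[0])  # ABCD

In practice, of course, each line carried undesirable as well as desirable traits, and the breeder's task was to choose four lines whose combination tipped the balance toward the desirable ones.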

The first hybrid maize involving inbred lines to be produced commercially was sold by the Connecticut Agricultural Experiment Station in 1921. The second was developed by Henry A. Wallace, a future secretary of agriculture and vice president of the United States. He sold a small quantity in 1924 and, in 1926, organized the first seed company devoted to the commercial production of hybrid maize.

Many Midwestern farmers began growing hybrid maize in the late 1920s and 1930s, but it did not dominate corn production until World War II. In 1933, 1 percent of the total maize acreage was planted with hybrid seed. By 1939 the figure was 15 percent, by 1946 it was 69 percent, and by 1960 it was 96 percent. The average per-acre yield of maize rose from 23 bushels (2,000 litres per hectare) in 1933 to 83 bushels (7,220 litres per hectare) by 1980.
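The litre-per-hectare equivalents quoted here follow from standard unit conversions (assuming 1 U.S. bushel = 35.239 litres and 1 acre = 0.40469 hectare), as this short check shows:

    # Check of the quoted litre-per-hectare equivalents; the two constants
    # below are standard conversion factors, not figures from the text.
    BUSHEL_L = 35.239   # litres per US bushel
    ACRE_HA = 0.40469   # hectares per acre

    def bu_per_acre_to_l_per_ha(bushels_per_acre):
        return bushels_per_acre * BUSHEL_L / ACRE_HA

    print(round(bu_per_acre_to_l_per_ha(23)))  # ~2003 (quoted as 2,000)
    print(round(bu_per_acre_to_l_per_ha(83)))  # ~7227 (quoted as 7,220)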

The techniques used in breeding hybrid maize have been successfully applied to grain sorghum and several other crops. However, because hybridization in the sense used with maize and grain sorghum has not been successful with many other crops, new strains of most major crops are developed through plant introductions, crossbreeding, and selection.

Wheat

Advances in wheat production during the 20th century included improvements through the introduction of new varieties and strains; careful selection by farmers and seedsmen, as well as by scientists; and crossbreeding to combine desirable characteristics. The adaptability of wheat enables it to be grown in almost every country of the world. In most of the developed countries producing wheat, endeavours of both government and wheat growers have been directed toward scientific wheat breeding.

The development of the world-famous Marquis wheat in Canada, released to farmers in 1909, came about through sustained scientific effort. Sir Charles Saunders, its developer, followed five principles of plant breeding: (1) the use of plant introductions; (2) a planned crossbreeding program; (3) the rigid selection of material; (4) evaluation of all characteristics in replicated trials; and (5) testing varieties for local use. Marquis was the result of crossing a wheat long grown in Canada with a variety introduced from India. For 50 years, Marquis and varieties crossbred from Marquis dominated hard red spring wheat growing in the high plains of Canada and the United States and were used in other parts of the world.

In the late 1940s a short-stemmed wheat was introduced from Japan into a more favourable wheat-growing region of the U.S. Pacific Northwest. The potential advantage of the short, heavy-stemmed plant was that it could carry a heavy head of grain, generated by the use of fertilizer, without falling over or lodging (being knocked down). Early work with the variety was unsuccessful; it was not adaptable directly into U.S. fields. Finally, crossing the Japanese wheat with acceptable varieties grown in the Palouse Valley of Washington produced the first true semidwarf wheat in the United States to be grown commercially under irrigation and heavy applications of fertilizer. This first variety, Gaines, was introduced in 1962, followed by Nugaines in 1966. The varieties now grown in the United States commonly produce 100 bushels per acre (8,700 litres per hectare), and world records of more than 200 bushels per acre have been established.

The Rockefeller Foundation in 1943 entered into a cooperative agricultural research program with the government of Mexico, where wheat yields were well below the world average. By 1956 per acre yield had doubled, mainly because of newly developed varieties sown in the fall instead of spring and the use of fertilizers and irrigation. The short-stemmed varieties developed in the Pacific Northwest from the Japanese strains were then crossed with various Mexican and Colombian wheats. By 1965 the new Mexican wheats were established, and they gained an international reputation.

Rice

The success of the wheat program led the Rockefeller and Ford foundations in 1962 to establish the International Rice Research Institute at Los Baños in the Philippines. A research team assembled some 10,000 strains of rice from all parts of the world and began crossbreeding them. Success came early with the combination of a tall, vigorous variety from Indonesia and a dwarf rice from Taiwan. The strain IR-8 has proved capable of doubling the yield obtained from most local rices in Asia.

Genetic engineering

The application of genetics to agriculture since World War II has resulted in substantial increases in the production of many crops. This has been most notable in hybrid strains of maize and grain sorghum. At the same time, crossbreeding has resulted in much more productive strains of wheat and rice. Called artificial selection or selective breeding, these techniques have become aspects of a larger and somewhat controversial field called genetic engineering. Of particular interest to plant breeders has been the development of techniques for deliberately altering the functions of genes by manipulating the recombination of DNA. This has made it possible for researchers to concentrate on creating plants that possess attributes—such as the ability to use free nitrogen or to resist diseases—that they did not have naturally.

Pest and disease control in crops

Beginnings of pest control

Wherever agriculture has been practiced, pests have attacked, destroying part or even all of the crop. In modern usage, the term pest includes animals (mostly insects), fungi, plants, bacteria, and viruses. Human efforts to control pests have a long history. Even in Neolithic times (about 7000 bp), farmers practiced a crude form of biological pest control involving the more or less unconscious selection of seed from resistant plants. Severe locust attacks in the Nile Valley during the 13th century bce are dramatically described in the Bible, and, in his Natural History, the Roman author Pliny the Elder describes picking insects from plants by hand and spraying. The scientific study of pests was not undertaken until the 17th and 18th centuries. The first successful large-scale conquest of a pest by chemical means was the control of the vine powdery mildew (Uncinula necator) in Europe in the 1840s. The disease, brought from the Americas, was controlled first by spraying with lime sulfur and, subsequently, by sulfur dusting.

Another serious epidemic was the potato blight that caused famine in Ireland in 1845 and some subsequent years and severe losses in many other parts of Europe and the United States. Insects and fungi from Europe became serious pests in the United States, too. Among these were the European corn borer, the gypsy moth, and the chestnut blight, which practically annihilated that tree.

The first book to deal with pests in a scientific way was John Curtis's Farm Insects, published in 1860. Though farmers were well aware that insects caused losses, Curtis was the first writer to call attention to their significant economic impact. The successful battle for control of the Colorado potato beetle (Leptinotarsa decemlineata) of the western United States also occurred in the 19th century. When miners and pioneers brought the potato into the Colorado region, the beetle fell upon this crop and became a severe pest, spreading steadily eastward and devastating crops until it reached the Atlantic. It crossed the ocean and eventually established itself in Europe. In 1877, however, an American entomologist found a practical control method: spraying with water-insoluble chemicals such as London purple, Paris green, and calcium and lead arsenates.

Other pesticides that were developed soon thereafter included nicotine, pyrethrum, derris, quassia, and tar oils, first used, albeit unsuccessfully, in 1870 against the winter eggs of the Phylloxera plant louse. The Bordeaux mixture fungicide (copper sulfate and lime), discovered accidentally in 1882, was used successfully against vine downy mildew; this compound is still employed to combat it and potato blight. Since many insecticides available in the 19th century were comparatively weak, other pest-control methods were used as well. A species of ladybird beetle, Rodolia cardinalis, was imported from Australia to California, where it controlled the cottony-cushion scale then threatening to destroy the citrus industry. A moth introduced into Australia destroyed the prickly pear, which had made millions of acres of pasture useless for grazing. In the 1880s the European grapevine was saved from destruction by grape phylloxera through the simple expedient of grafting it onto certain resistant American rootstocks.

This period of the late 19th and early 20th centuries was thus characterized by increasing awareness of the possibilities of avoiding losses from pests, by the rise of firms specializing in pesticide manufacture, and by development of better application machinery.

Pesticides as a panacea: 1942–62

In 1939 the Swiss chemist Paul Hermann Müller discovered the insecticidal properties of a synthetic chlorinated organic chemical, dichlorodiphenyltrichloroethane, which had first been synthesized in 1874 and subsequently became known as DDT. Müller received the Nobel Prize for Physiology or Medicine in 1948 for his discovery. DDT was far more persistent and effective than any previously known insecticide. Originally a mothproofing agent for clothes, it soon found use among the armies of World War II for killing body lice and fleas, and it stopped a typhus epidemic threatening Naples. Müller's work led to the discovery of other chlorinated insecticides, including aldrin, introduced in 1948; chlordane (1945); dieldrin (1948); endrin (1951); heptachlor (1948); methoxychlor (1945); and toxaphene (1948).

Research on poison gas in Germany during World War II led to the discovery of another group of yet more powerful insecticides and acaricides (killers of ticks and mites)—the organophosphorus compounds, some of which had systemic properties; that is, the plant absorbed them without harm and became itself toxic to insects. The first systemic was octamethylpyrophosphoramide, trade named Schradan. Other organophosphorus insecticides of enormous power were also made, the most common being diethyl-p-nitrophenyl monothiophosphate, named parathion. Though low in cost, these compounds were toxic to humans and other warm-blooded animals. Because the products could poison by absorption through the skin, as well as through the mouth or lungs, spray operators had to wear respirators and special clothing. Systemic insecticides need not be carefully sprayed, however; the compound may be absorbed by watering the plant.

Though the advances made in the fungicide field in the first half of the 20th century were not as spectacular as those made with insecticides and herbicides, certain dithiocarbamates, thiuram disulfides, and phthalimides were found to have special uses. It began to seem that almost any pest, disease, or weed problem could be mastered by suitable chemical treatment. Farmers foresaw a pest-free millennium. Crop losses were cut sharply; locust attack was reduced to a manageable problem; and the new chemicals, by killing carriers of human disease, saved the lives of millions of people.

Problems appeared in the early 1950s. In cotton crops standard doses of DDT, parathion, and similar pesticides were found ineffective and had to be doubled or trebled; resistant races of insects had developed. In addition, the powerful insecticides often destroyed natural predators and helpful parasites along with harmful insects. Because insects and mites reproduce at such a rapid rate, a few pest survivors of a treatment that had destroyed their natural predators could breed unchecked, soon producing worse outbreaks than there had been before the treatment; sometimes the result was a population explosion to pest status of previously harmless insects.

At about the same time, concern also began to be expressed about the presence of pesticide residues in food, humans, and wildlife. It was found that many birds and wild mammals retained considerable quantities of DDT in their bodies, accumulated along their natural food chains. The disquiet caused by this discovery was epitomized in 1962 by the publication in the United States of a book entitled Silent Spring, whose author, Rachel Carson, attacked the indiscriminate use of pesticides, drew attention to various abuses, and stimulated a reappraisal of pest control. Thus began a new integrated approach, which was in effect a return to the use of all methods of control in place of a reliance on chemicals alone.

Integrated control

Some research into biological methods was undertaken by governments, and in many countries plant breeders began to develop and patent new pest-resistant plant varieties.

One method of biological control involved the breeding and release of males sterilized by means of gamma rays. Though sexually potent, such insects have inactive sperm. Released among the wild population, they mate with the females, who either lay sterile eggs or none at all. The method was used with considerable success against the screwworm, a pest of cattle, in Texas. A second method of biological control employed lethal genes. It is sometimes possible to introduce a lethal or weakening gene into a pest population, leading to the breeding of intersex (effectively neuter) moths or a predominance of males. Various studies have also been made on the chemical identification of substances attracting pests to the opposite sex or to food. With such substances traps can be devised that attract only a specific pest species. Finally, certain chemicals have been fed to insects to sterilize them. Used in connection with a food lure, these can lead to the elimination of a pest from an area. Chemicals tested so far, however, have been considered too dangerous to humans and other mammals for any general use.
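The arithmetic behind the sterile-male method explains its power. The following simplified model (in the spirit of Knipling's classic calculations; the population figures and the assumption of an otherwise stable population are illustrative, not data from any campaign) shows how repeated releases drive the fertile population toward zero:

    # Simplified sterile-male-release model; parameters are hypothetical.
    # Each generation, the chance that a female mates with a fertile male
    # is fertile / (fertile + sterile); rate = 1.0 means the population
    # would merely hold steady if no sterile males were released.
    def simulate(wild=1_000_000, sterile_release=2_000_000, rate=1.0, generations=8):
        for gen in range(1, generations + 1):
            fertile_fraction = wild / (wild + sterile_release)
            wild = wild * rate * fertile_fraction  # offspring of fertile matings only
            print(f"generation {gen}: ~{wild:,.0f} fertile insects")
            if wild < 1:
                break

    simulate()  # the wild population falls below 1 within four generations

Because each release meets a smaller wild population, the sterile-to-fertile ratio rises every generation, which is why the decline accelerates rather than levels off.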

Some countries (notably the United States, Sweden, and the United Kingdom) have partly or wholly banned the use of DDT because of its persistence and accumulation in human body fat and its effect on wildlife. New pesticides of lesser human toxicity have been found, one of the most used being mercaptosuccinate, trade named Malathion. A more recent important discovery was the systemic fungicide, absorbed by the plant and transmitted throughout it, making it resistant to certain diseases.

The majority of pesticides are sprayed on crops as solutions or suspensions in water. Spraying machinery has developed from the small hand syringes and garden engines of the 18th century to the very powerful autoblast machines of the 1950s, which were capable of applying up to some 400 gallons per acre (4,000 litres per hectare). Though spraying suspended or dissolved pesticide was effective, it involved moving a great quantity of inert material for only a relatively small amount of active ingredient. Low-volume spraying, in which 10 or 20 gallons of water per acre, transformed into fine drops, carry the pesticide, was introduced about 1950, particularly for the application of herbicides. Ultralow-volume spraying has also been introduced; four ounces (about 110 grams) of the active ingredient itself (usually Malathion) are applied to an acre from aircraft. The spray as applied is invisible to the naked eye.