Thursday, February 09, 2006

Human Population: Fundamentals of Growth

Population Growth and Distribution


In 2000, the world had 6.1 billion human inhabitants. This number could rise to more than 9 billion in the next 50 years. During the past 50 years, world population grew more rapidly than ever before, and more rapidly than it is projected to grow in the future.

Anthropologists believe the human lineage dates back at least 3 million years. For most of our history, these distant ancestors lived a precarious existence as hunters and gatherers. This way of life kept their total numbers small, probably less than 10 million. However, as agriculture was introduced, communities evolved that could support more people.

World population expanded to about 300 million by A.D. 1 and continued to grow at a moderate rate. But after the start of the Industrial Revolution in the 18th century, living standards rose and widespread famines and epidemics diminished in some regions. Population growth accelerated. The population climbed to about 760 million in 1750 and reached 1 billion around 1800 (see chart, "World population growth, 1750–2150").

World Population Distribution by Region, 1800–2050

Source: United Nations Population Division, Briefing Packet, 1998 Revision of World Population Prospects.

In 1800, the vast majority of the world's population (86 percent) resided in Asia and Europe, with 65 percent in Asia alone (see chart, "World population distribution by region, 1800–2050"). By 1900, Europe's share of world population had risen to 25 percent, fueled by the population increase that accompanied the Industrial Revolution. Some of this growth spilled over to the Americas, increasing their share of the world total.

World population growth accelerated after World War II, when the population of less developed countries began to increase dramatically. After millions of years of extremely slow growth, the human population indeed grew explosively, doubling again and again; a billion people were added between 1960 and 1975; another billion were added between 1975 and 1987. Throughout the 20th century each additional billion has been achieved in a shorter period of time. Human population entered the 20th century with 1.6 billion people and left the century with 6.1 billion.

The growth of the last 200 years appears explosive on the historical timeline. The overall effects of this growth on living standards, resource use, and the environment will continue to change the world landscape long after growth itself has slowed.
Exponential Growth

As long ago as 1798, Thomas Malthus studied the nature of population growth in Europe. He claimed that population was increasing faster than food production, and he feared eventual global starvation. Of course he could not foresee how modern technology would expand food production, but his observations about how populations increase were important. Population grows geometrically (1, 2, 4, 8 …), rather than arithmetically (1, 2, 3, 4 …), which is why the numbers can increase so quickly.

A story said to have originated in Persia offers a classic example of exponential growth. It tells of a clever courtier who presented a beautiful chess set to his king and in return asked only that the king give him one grain of rice for the first square, two grains, or double the amount, for the second square, four grains (or double again) for the third, and so forth. The king, not being mathematically inclined, agreed and ordered the rice to be brought from storage. The eighth square required 128 grains, the 12th took more than one pound. Long before reaching the 64th square, every grain of rice in the kingdom had been used. Even today, the total world rice production would not be enough to meet the amount required for the final square of the chessboard. The secret to understanding the arithmetic is that the rate of growth (doubling for each square) applies to an ever-expanding amount of rice, so the number of grains added with each doubling goes up, even though the rate of growth is constant.
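To see the arithmetic for yourself, here is a minimal Python sketch (my own illustration, not part of the original article) that tallies the grains square by square:

    # Grains on each square double: square n holds 2**(n-1) grains.
    total = 0
    for square in range(1, 65):
        grains = 2 ** (square - 1)
        total += grains
        if square in (8, 12, 64):
            print(f"square {square}: {grains:,} grains")

    print(f"whole board: {total:,} grains")  # 2**64 - 1, about 1.8 * 10**19

The rate of doubling never changes, yet the additions grow from 128 grains on the eighth square to more than nine quintillion on the last.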

Similarly, if a country's population begins with 1 million and grows at a steady 3 percent annually, it will add 30,000 persons the first year, almost 31,000 the second year, and 40,000 by the 10th year. At a 3 percent growth rate, its doubling time — or the number of years to double in size — is 23 years. (The doubling time for a population can be roughly determined by dividing the current growth rate into the number "69." Therefore, 69/3=23 years. Of course, if a population's growth rate does not remain at this rate, the projected doubling time would need to be recalculated.)
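The rule of thumb is easy to verify. A short Python sketch (again mine, not the source's) reproduces the numbers in this paragraph, plus the world figure quoted in the next one:

    import math

    population, rate = 1_000_000, 0.03  # 1 million people growing 3% a year

    for year in (1, 2, 10):
        base = population * (1 + rate) ** (year - 1)  # size at start of that year
        print(f"year {year}: adds {base * rate:,.0f} people")
    # year 1: adds 30,000; year 2: adds 30,900; year 10: adds 39,143

    print("rule of 69:", 69 / 3, "years")                       # 23.0
    print("exact:", math.log(2) / math.log(1 + rate), "years")  # about 23.4

    # The world in 2000: 6.1 billion people growing at 1.4 percent.
    print(f"annual increase: {6.1e9 * 0.014:,.0f} people")      # about 85 million

The "69" comes from the exact doubling-time formula ln(2)/ln(1+r), which is approximately 0.69/r when the growth rate r is small.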

The 2000 growth rate of 1.4 percent, when applied to the world's 6.1 billion population, yields an annual increase of about 85 million people. Because of the large and increasing population size, the number of people added to the global population will remain high for several decades, even as growth rates continue to decline.

Between 2000 and 2030, nearly 100 percent of this annual growth will occur in the less developed countries in Africa, Asia, and Latin America, whose population growth rates are much higher than those in more developed countries. Growth rates of 1.9 percent and higher mean that populations would double in about 36 years, if these rates continue. Demographers do not believe they will. Projections of growth rates are lower than 1.9 percent because birth rates are declining and are expected to continue to do so. The populations in the less developed regions will most likely continue to command a larger proportion of the world total. While Asia's share of world population may continue to hover around 55 percent through the next century, Europe's portion has declined sharply and could drop even more during the 21st century. Africa and Latin America each would gain part of Europe's portion. By 2100, Africa is expected to capture the greatest share (see chart, "World population distribution by region, 1800–2050", above).

The more developed countries in Europe and North America, as well as Japan, Australia, and New Zealand, are growing by less than 1 percent annually. Population growth rates are negative in many European countries, including Russia (-0.6%), Estonia (-0.5%), Hungary (-0.4%), and Ukraine (-0.4%). If growth rates in these countries remain below zero, population size will slowly decline. As the chart "World population growth, 1750–2150" shows, population increase in more developed countries is already low and is expected to stabilize.
Terms

Birth rate (or crude birth rate): The number of live births per 1,000 population in a given year. Not to be confused with the growth rate.

Doubling time: The number of years required for the population of an area to double its present size, given the current rate of population growth. Population doubling time is useful for demonstrating the long-term effect of a growth rate, but should not be used to project population size. Many more developed countries have very low growth rates, and as a result the equation yields doubling times of hundreds or thousands of years. But these countries are not expected to ever double again; most, in fact, are likely to see population declines in the future. Many less developed countries have high growth rates associated with short doubling times, but they are expected to grow more slowly as birth rates continue to decline.

Growth rate: The number of persons added to (or subtracted from) a population in a year due to natural increase and net migration; expressed as a percentage of the population at the beginning of the time period.

Less developed countries: Less developed countries include all countries in Africa, Asia (excluding Japan), Latin America and the Caribbean, and the regions of Melanesia, Micronesia, and Polynesia.

More developed countries: More developed countries include all countries in Europe, North America, Australia, New Zealand, and Japan.

Population Explosion Among Older Americans

http://www.infoplease.com/ipa/A0780132.html

Population Explosion Among Older Americans

The United States saw a rapid growth in its elderly population during the 20th century. The number of Americans aged 65 and older climbed to 34.9 million in 2000, compared with 3.1 million in 1900. Over the same period, the ratio of elderly Americans to the total population jumped from 1 in 25 to 1 in 8. The trend is certain to continue in the coming century as the baby-boom generation grows older. Between 1990 and 2020, the population aged 65 to 74 is projected to grow 74%.

The elderly population explosion is a result of impressive increases in life expectancy. When the nation was founded, the average American could expect to live to the age of 35. Life expectancy at birth had increased to 47.3 by 1900 and in 2000 stood at 76.9.

Along with the growth of the general elderly population has come a remarkable increase in the number of Americans reaching age 100. In 2000 there were 50,454 centenarians (people aged 100 or over), representing 1 out of every 5,578 people. In 1990 centenarians numbered 37,306 people, or 1 out of every 6,667 people.

Source: Based on U.S. Census Bureau data.

World Population Ageing: 1950-2050

http://www.un.org/esa/population/publications/worldageing19502050/

World Population Ageing: 1950-2050

This report was prepared by the Population Division as a contribution to the 2002 World Assembly on Ageing and its follow-up. The report provides a description of global trends in population ageing and includes a series of indicators of the ageing process by development regions, major areas, regions and countries. The report shows that:

Population ageing is unprecedented, without parallel in human history—and the twenty-first century will witness even more rapid ageing than did the century just past.

Population ageing is pervasive, a global phenomenon affecting every man, woman and child—but countries are at very different stages of the process, and the pace of change differs greatly. Countries that started the process later will have less time to adjust.

Population ageing is enduring: we will not return to the young populations that our ancestors knew.

Population ageing has profound implications for many facets of human life.

MOLECULAR NANOTECHNOLOGY FULLY LOADED WITH BENEFITS AND RISKS

http://www.smalltimes.com/document_display.cfm?document_id=7161

MOLECULAR NANOTECHNOLOGY FULLY LOADED WITH BENEFITS AND RISKS
By Mike Treder
The Futurist

Mike Treder, CRN executive director, serves on the boards of directors of the Human Futures Institute and the World Transhumanist Association.


Jan. 5, 2004 -- The future shock of rapid change and technology run amok described by Alvin Toffler in his 1970 best seller has perhaps been less debilitating for most people than predicted, but even Toffler could not have envisioned the tidal wave of change that will hit us when nanofactories make the scene.

Imagine a world with billions of desktop-size, portable, nonpolluting, cheap machines that can manufacture almost anything – from clothing to furniture to electronics, and much more – in just a few hours. Today, such devices do not exist. But in the years ahead, this advanced form of nanotechnology could create the next Industrial Revolution – or the world's worst nightmare.

The technology described in this article is molecular nanotechnology (MNT). This is a big step beyond most of today's nanotech research, which deals with exploring and exploiting the properties of materials at the nanoscale. Industry has begun using the term nanotechnology to cover almost any technology significantly smaller than microtechnology, such as those involving nanoparticles or nanomaterials. This broad field will produce important and useful results, but their societal effects – both positive and negative – will be modest compared with later stages of the technology.

MNT, by contrast, is about constructing shapes, machines, and products at the atomic level – putting them together molecule by molecule. With parts only a few nanometers wide, it may become possible to build a supercomputer smaller than a grain of sand, a weapon smaller than a mosquito, or a self-contained nanofactory that sits on your kitchen counter.

"Picture an automated factory, full of conveyor belts, computers, and swinging robot arms," writes scientist and engineer K. Eric Drexler, who first brought nanotechnology to public attention with his 1986 book "Engines of Creation." "Now imagine something like that factory, but a million times smaller and working a million times faster, with parts and workpieces of molecular size."

Unlike any machine ever built, the nanofactory will be assembled from the bottom up, constructed of specifically designed and placed molecules. Drexler says, "Nanotechnology isn't primarily about miniaturizing machines, but about extending precise control of molecular structures to larger and larger scales. Nanotechnology is about making precise things big."

Virtually every previous technological improvement has been accomplished by making things smaller and more precise. But as the scales at which we work get smaller and smaller, we approach limits imposed by physics. The smallest unit of matter we can build with is the atom, or combinations of atoms known as molecules. The earthshaking insight of molecular nanotechnology is that, when we reach this scale, we can reverse direction and begin building up, making products by placing individual atoms and molecules exactly where we want them.

Ever since Richard Feynman enunciated MNT's basic concepts in 1959, and especially since Drexler began detailing its amazing possibilities in the 1980s, proposals for building products in various ways have been put forth. Some of these have been fanciful and many have been impractical. At this point, it appears that the idea of a nanofactory is the safest and most useful method of building general-purpose products by molecular manufacturing.

Inside a Nanofactory

The inner architecture of a nanofactory will be a stunning achievement, outside the realm of anything previously accomplished. Nanofactories will make use of a vast number of moving parts, each designed and precisely constructed to do a specific job. Some of these parts will be visible to the human eye. Most will be microscopic or even nanoscale, smaller than a human cell. An important feature of a nanofactory is that all of its parts will be fixed in place. This is significant because it greatly simplifies development of the device. Engineers won't have to figure out how to tell each little nanobot in a swarm where to go and how to get there, and none of the parts can get lost or go wild.

Perhaps the easiest way to envision the inner workings of a nanofactory is to picture a large city, with all the streets laid out on a grid. Imagine that in this city everyone works together to build gigantic products – ocean liners, for instance. To build something that big, you have to start with small parts and put them together.

In this imaginary city, all the workers stand along the streets and pass the parts along to each other. The smallest parts are assembled on the narrowest side streets, and then handed up to the end of the block. Other small parts from other side streets are joined together to make medium-sized parts, which are joined together to make large parts. At the end, the largest parts converge in one place, where they are joined together to make the finished product.

A nanofactory performs in this way, with multiple assembly lines operating simultaneously and steadily feeding into each other.

The first and hardest step in building a nanofactory is building an assembler, a tiny device that can combine individual molecules into useful shapes. An early plan for molecular manufacturing imagined lots of free-floating assemblers working together to build a single massive product, molecule by molecule. A more efficient approach is to fasten down the assemblers in orderly arrays of chemical fabricators, instruct each fabricator to create a tiny piece of the product, and then fasten the pieces together, passing them along to the next level within the nanofactory.

A human-scale nanofactory will consist of trillions of fabricators, and it could only be built by another nanofactory. But at the beginning, an assembler could build a very small nanofactory, with just a few fabricators. A smaller nanofactory could build a bigger one, and so on. According to the best estimates we have today, a fabricator could make its own mass in just a few hours. So a small nanofactory could make another one twice as big in just a few days – maybe less than a day. Do that about 60 times, and you have a tabletop model.
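That "about 60 times" is plain doubling arithmetic. A sketch, where the starting mass is my own assumption for illustration (the article gives no figure):

    START_MASS_KG = 1e-18  # assumed mass of the first tiny nanofactory (not from the article)

    mass = START_MASS_KG
    for _ in range(60):    # each generation doubles the mass
        mass *= 2

    print(f"after 60 doublings: {mass:.2f} kg")  # about 1.15 kg, tabletop scale

Since 2**60 is about 1.15 * 10**18, sixty doublings carry even a vanishingly small starting mass up to kilogram scale.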

By the time the first working assembler is ready, the blueprint for a basic nanofactory may already be prepared. But until we have an assembler, we can't make a nanofactory.

Building an assembler is one of the ambitious research projects of Zyvex, a Texas firm that bills itself as "the first molecular nanotechnology company." Zyvex has gathered many leading minds in physics, chemistry, mechanical engineering, and computer programming to focus on the long-range goal of molecular assembler manufacturing technology. Along the way, the company has developed some of the world's most precise tools for manipulating and testing materials and structures at the nanoscale. Numerous other projects at research universities and in corporations around the world are contributing valuable knowledge to the field.

How far are we from having a working assembler? A 1999 media report on nanotech said, "Estimates vary. From five to 10 years, according to Zyvex, or from eight to 15 years, according to the research community."

And how long will it take from building a single assembler to having a fully functional nanofactory? The report continues, "After that, it could be decades before we'll be able to manufacture finished consumer goods." This reflects the common wisdom, but it's wrong. Very wrong.

The Center for Responsible Nanotechnology (CRN), a nonprofit think tank co-founded by this author, published a detailed study in summer 2003 of the work required to progress from a single assembler to a full-fledged nanofactory that can create a wide variety of low-cost products. The startling conclusion of this report is that the span of time could be measured in weeks – probably less than two months. And what will the first nanofactory build? Another one, and another one.

Each nanofactory will be able to duplicate itself in as little as a few hours, or perhaps half a week at most. Even using the most conservative estimate, in a couple of months you could have a million nanofactories, and a few months after that, a billion. Less than a year after the first basic assembler is completed, every household in the world conceivably could have its own nanofactory.
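The timeline follows from the same doubling arithmetic. A sketch (mine, not the article's) using the most conservative figure quoted above, one duplication per half week:

    import math

    DOUBLING_DAYS = 3.5  # "half a week", the article's conservative case

    for target in (10**6, 10**9):
        doublings = math.ceil(math.log2(target))  # the fleet doubles each period
        days = doublings * DOUBLING_DAYS
        print(f"{target:,} nanofactories: {doublings} doublings, about {days:.0f} days")

    # 1,000,000 nanofactories: 20 doublings, about 70 days
    # 1,000,000,000 nanofactories: 30 doublings, about 105 days

Twenty doublings (roughly ten weeks) yield a million factories; ten more doublings, about five additional weeks, yield a billion.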

Creativity Unleashed

Before a tidal wave strikes, another dramatic event – usually an earthquake or major landslide – must occur to trigger it. The first generation of products to come out of nanofactories – inexpensive but high-quality clothing, furniture, electronics, household appliances, bicycles, tools, building supplies, and more – may be like that: a powerful landslide of change, but only a portent of the gigantic wave that is to follow.

Most of these early products will probably be similar to whatever products are current at the time nanofactories begin production. Because they are built by MNT, with every atom precisely placed, they will be better in every way – stronger, lighter, cheaper – but they still will be built on existing models.

The world-changing shock wave will hit when we realize that we no longer need be restricted to existing models – not when a supercomputer smaller than a grain of sand can be integrated into any product, and not when people everywhere – young, old, male, female, technical, nontechnical, practical, artistic, and whimsical – will have the opportunity to be designers.

MNT product design will be eased by CAD (computer-aided design) programs so simple that a child can use them – and that's no exaggeration. New product prototypes can be created, tested, and refined in a matter of hours instead of months, and without the expense of traditional production facilities. No special expertise is needed beyond skill with CAD programs – only imagination, curiosity, and the desire to create.

Within months, conceivably, even the most up-to-date appliances, machines, communication media, and other electronics will be outmoded. Imagine embedding "smart" gadgetry into everything you own or might want to have. Demand for these new products will be intense. The cost of manufacturing them may be almost negligible.

To maximize the latent innovation potential in nanofactory proliferation, and to help prevent illicit, unwise, or malicious product design and manufacture, CRN recommends that designers work (and play) with modular nanoblocks of various compositions and purposes to create a wide variety of products, from consumer goods and educational tools to building supplies and even new modes of transportation. When combined with automated verification of design safety and protection of intellectual property, this should open up huge new areas for originality and improvement while maintaining safety and commercial viability.

Working with nanoblocks, designers can create to their hearts' content. The combination of user-friendly CAD and rapid prototyping will result in a spectacular synergy, enabling unprecedented levels of innovation and development. Among the many remarkable benefits accruing to humanity from nanofactory proliferation will be this unleashing of millions of eager new minds, allowed for the first time to freely explore and express their brilliant creative energy.

It becomes impossible to predict what might be devised then. The smart components and easy design systems of the nanotech revolution will rewrite the rules.

Benefits and Dangers

This all adds up to change that is sudden and shocking and could be extremely disruptive.

On the plus side, MNT could solve many of the world's problems. Simple products like plumbing, water filters, and mosquito nets – made cheaply on the spot – would greatly reduce the spread of infectious diseases. The efficient, cheap construction of strong and lightweight structures, electrical equipment, and power storage devices will allow the use of solar thermal power as a primary and abundant energy source.

Many areas of the world could not support a twentieth-century manufacturing infrastructure, with its attendant costs, difficulties, and environmental impacts, but MNT should be self-contained and clean. A single packing crate or suitcase could contain all the equipment required for a village-scale industrial revolution.

Computers and display devices will become stunningly inexpensive and could be made widely available. Much social unrest can be traced directly to material poverty, ill health, and ignorance. Nanofactories could greatly reduce these problems.

On the other hand, all this sudden change – the equivalent of a century's development packed into a few years – has the potential to disrupt many aspects of society and politics.

When a consumer purchases a manufactured product today, he is paying for its design, raw materials, the labor and capital of manufacturing, transportation, storage, marketing, and sales. Additional money – usually a fairly low percentage – goes to the owners of each of these businesses, and eventually to the employed workers. If nanofactories can produce a wide variety of products when and where they are wanted, most of this additional effort will become superfluous. This raises many questions about the nature of a post-MNT economy: Who will own the technology for molecular manufacturing? Will it be heavily restricted, or widely available? Will products become cheaper? Will major corporations disappear? Will new monopolies arise? Will most people retire – or be unemployed? What will it do to the gap between rich and poor?

It seems clear that molecular manufacturing could severely disrupt the present economic structure, greatly reducing the value of many material and human resources, including much of our current infrastructure. Despite utopian postcapitalist hopes, it is unclear whether a workable replacement system could appear in time to prevent the human consequences of massive job displacement.

MNT manufacturing will allow the cheap creation of incredibly powerful devices and products. Stronger materials will allow the creation of much larger machines, capable of excavating or otherwise destroying large areas of the planet at a greatly accelerated pace. It is too early to tell whether there will be economic incentive to do this. However, given the large number of activities and purposes that would damage the environment if taken to extremes, and the ease of taking them to extremes with molecular manufacturing, it seems likely that this problem is worth worrying about.

Some forms of damage can result from an aggregate of individual actions, each almost harmless by itself. For example, the extreme compactness of nanomanufactured machinery may lead to the use of very small products, which can easily turn into nanolitter that will be hard to clean up and may cause health problems. Collection of solar energy on a sufficiently large scale – by corporations, municipalities, and individuals – could modify the planet's albedo and directly affect the environment. In addition, if we are not careful, the flexibility and compactness of molecular manufacturing may allow the creation of free-floating, foraging self-replicators – a "gray goo" that could do serious damage to the biosphere by replicating out of control.

Molecular manufacturing raises the possibility of horrifically effective weapons. As an example, the smallest insect is about 200 microns long; this suggests a plausible size estimate for a nanotech-built antipersonnel weapon capable of seeking and injecting toxin into unprotected humans. The human lethal dose of botulinum toxin is about 100 nanograms, or about 1/100 the volume of such a weapon. As many as 50 billion toxin-carrying devices – theoretically enough to kill every human on earth – could be packed into a single suitcase. Guns of all sizes would be far more powerful, and their bullets could be self-guided. Aerospace hardware would be far lighter and offer higher performance; built with minimal or no metal, such craft would be much harder to spot on radar.

The awesome power of MNT may cause two or more competing nations to enter into an unstable arms race. Increased uncertainty about the capabilities of an adversary, less time to respond to an attack, and better-targeted destruction of an enemy's resources during an attack all make nanotech arms races less stable than the nuclear arms race. Also, unless nanotech is tightly controlled on an international level, the number of nanotech nations in the world could be much higher than the number of nuclear nations, increasing the chance of a regional conflict expanding globally.

Criminals and terrorists with stronger, more powerful, and more compact devices could do serious damage to society. Chemical and biological weapons could become much deadlier and easier to conceal. Many other types of terrifying devices are possible, including several varieties of remote assassination weapons that would be difficult to detect or avoid. If such devices were available from a black market or a home factory, it would be nearly impossible to detect them before they were used; a random search capable of spotting them would be a clear violation of current human rights standards in most civilized countries.

Surveillance devices could be made microscopically small, low-priced, and very numerous – leading to questions of pervasive invasions of privacy, from illicit selling of sexual or other images to ubiquitous covert government or industrial spying. Attempts to control all these risks may lead to abusive restrictions, or create a black market that would be very risky and almost impossible to stop, because small nanofactories will be very easy to smuggle and every bit as dangerous.

Searching for Solutions

If you knew that in one year's time you would be forced to walk a tightrope without a net hundreds of feet above a rocky canyon, how soon would you begin practicing? The analogy applies to nanofactory technology. Because we know it is possible – maybe even probable – that everything we've reviewed here could happen within a decade, how soon should we start to prepare?

A report issued by the University of Toronto Joint Centre for Bioethics in February 2003 calls for serious consideration of the ethical, environmental, economic, legal, and social implications of nanotechnology. Report co-author Peter Singer says, "Open public discussion of the benefits and risks of this new technology is urgently needed."

There's no doubt that such discussion is warranted and urgent. But beyond talking about ethics, immediate research into the need, design, and building of an effective global administration structure is crucial. Unwise regulation is a serious hazard. Simple solutions won't work.

"A patchwork of extremist solutions to the wide-ranging risks of advanced nanotechnology is a grave danger," says Chris Phoenix, research director for the Center for Responsible Nanotechnology. "All areas of society stand to be affected by molecular manufacturing, and unless comprehensive international plans are developed, the multiplicity of cures could be worse than the disease. The threat of harm would almost certainly be increased, while many extraordinary benefits could go unrealized."

We have much to gain, and much to lose. The advantages promised by MNT are real, and they could be ours soon. Living conditions worldwide could be dramatically improved, and human suffering greatly diminished. But everything comes at a cost. The price for safe introduction of the miracles of nanofactory technology is thorough, conscientious preparation.

Several organizations are stepping up to this challenge. For example:

* The Foresight Institute has drafted a set of molecular nanotechnology guidelines for researchers and developers. These are mostly aimed at restricting the development of MNT to responsible parties and preventing the production of free-ranging self-replicating nanobots.

* The Millennium Project of the American Council for the United Nations University is exploring various scenarios for safe and socially conscious implementation of molecular manufacturing and other emerging technologies. These scenarios depict the world in 2050, based on various policy choices we might make between now and then.

* The Center for Responsible Nanotechnology is studying all the issues involved – political, economic, military, humanitarian, technological, and environmental – and developing well-grounded, complete, and workable proposals for effective administration and safe use of advanced nanotechnology. Current results of CRN's research lead to the conclusion that establishing a single international program to develop molecular manufacturing technology may be the safest course. The leading nations of the world would have to agree to join – or at least not to oppose – this effort, and a mechanism to detect and deter competing programs would have to be devised.

It will take all this and more. The brightest minds and clearest thinkers, the most energetic activists and committed organizers, the smartest scientists, most dedicated ethicists, and most creative social planners will be desperately needed.

Will it be easy to realize the benefits of nanofactory technology while averting the dangers? Of course it will not. Is it even possible? It had better be. Our future is very uncertain, and it's very near. Much nearer than we might have thought. Let's get started.

Tuesday, February 07, 2006

More than Human




http://www.morethanhuman.org/contents/chapter1.htm

Excerpt from Chapter 1 - Choosing Our Bodies

In 1989, Raj and Van DeSilva were desperate. Their daughter Ashanti, just four, was dying. She was born with a crippled immune system, a consequence of a problem in her genes.

Every human being has around thirty thousand genes. In fact, we have two copies of each of those genes: one inherited from our mother, the other from our father. Our genes tell our cells what proteins to make, and when.


Each protein is a tiny molecular machine. Every cell in your body is built out of millions of these little machines, working together in precise ways. Proteins break down food, ferry energy to the right places, and form scaffoldings that maintain cell health and structure. Some proteins synthesize messenger molecules to pass signals in the brain, and other proteins form receptors to receive those signals. Even the machines inside each of your cells that build new proteins—called ribosomes—are themselves made up of other proteins.

Ashanti DeSilva inherited two broken copies of the gene that contains the instructions for manufacturing a protein called adenosine deaminase (ADA). If she had had just one broken copy, she would have been fine. The other copy of the gene would have made up the difference. With two broken copies, her body didn’t have the right instructions to manufacture ADA at all.

ADA plays a crucial role in our resistance to disease. Without it, special white blood cells called T cells die off. Without T cells, ADA-deficient children are wide open to the attacks of viruses and bacteria. These children have what’s called severe combined immune deficiency (SCID) disorder, more commonly known as bubble boy disease.

To a person with a weak immune system, the outside world is threatening. Everyone you touch, share a glass with, or share the same air with is a potential source of dangerous pathogens. Lacking the ability to defend herself, Ashanti was largely confined to her home.

The standard treatment for ADA deficiency is frequent injections of PEG-ADA, a synthetic form of the ADA enzyme. PEG-ADA can mean the difference between life and death for an ADA-deficient child. Unfortunately, although it usually produces a rapid improvement when first used, children tend to respond less and less to the drug each time they receive a dose. Ashanti DeSilva started receiving PEG-ADA injections at the age of two, and initially she responded well. Her T-cell count rose sharply and she developed some resistance to disease. But by the age of four, she was slipping away, no longer responding strongly to her injections. If she was to live, she’d need something more than PEG-ADA. The only other option at the time, a bone-marrow transplant, was ruled out by the lack of matching donors.

In early 1990, while Ashanti’s parents were searching frantically for help, French Anderson, a geneticist at the National Institutes of Health, was seeking permission to perform the first gene-therapy trials on humans. Anderson, an intense fifth-degree black belt in tae kwon do and a respected researcher in the field of genetics, wanted to show that he could treat genetic diseases caused by faulty copies of genes by inserting new, working copies of the same gene.

Scientists had already shown that it was possible to insert new genes into plants and animals. Genetic engineering got its start in 1972, when geneticists Stanley Cohen and Herbert Boyer first met at a scientific conference in Hawaii on plasmids, small circular loops of extrachromosomal DNA in which bacteria carry their genes. Cohen, then a professor at Stanford, had been working on ways to insert new plasmids into bacteria. Researchers in Boyer’s lab at the University of California in San Francisco had recently discovered restriction enzymes, molecular tools that could be used to slice and dice DNA at specific points.

Over hot pastrami and corned-beef sandwiches, the two Californian researchers concluded that their technologies complemented one another. Boyer’s restriction enzymes could isolate specific genes, and Cohen’s techniques could then deliver them to bacteria. Using both techniques researchers could alter the genes of bacteria. In 1973, just four months after meeting each other, Cohen and Boyer inserted a new gene into the Escherichia coli bacterium (a regular resident of the human intestine).

For the first time, humans were tinkering directly with the genes of another species. The field of genetic engineering was born. Boyer would go on to found Genentech, the world’s first biotechnology company. Cohen would go on to receive the National Medal of Science for his work on recombinant DNA.

Building on Cohen and Boyer’s work with bacteria, hundreds of scientists went on to find ways to insert new genes into plants and animals. The hard work of genetically engineering these higher organisms lies in getting the new gene into the cells. To do this, one needs a gene vector—a way to get the gene to the right place. Most researchers use gene vectors provided by nature: viruses. In some ways, viruses are an ideal tool for ferrying genes into a cell, because penetrating cell walls is already one of their main abilities. Viruses are cellular parasites. Unlike plant or animal cells, or even bacteria, viruses can’t reproduce themselves. Instead, they penetrate cells and implant their viral genes; these genes then instruct the cell to make more of the virus, one protein at a time.

Early genetic engineers realized that they could use viruses to deliver whatever genes they wanted. Instead of delivering the genes to create more virus, a virus could be modified to deliver a different gene chosen by a scientist. Modified viruses were pressed into service as genetic “trucks,” carrying a payload of genes loaded onto them by researchers; these viruses don’t spread from cell to cell, because they don’t carry the genes necessary for the cell to make new copies of the virus.

By the late 1980s, researchers had used this technique to alter the genes of dozens of species of plants and animals—tobacco plants that glow, tomatoes that could survive freezing, corn resistant to pesticides. French Anderson and his colleagues reasoned that one could do the same in a human being. Given a patient who lacked a gene crucial to health, one ought to be able to give that person copies of the missing gene. This is what Anderson proposed to do for Ashanti.

Starting in June of 1988, Anderson’s proposed clinical protocols, or treatment plans, went through intense scrutiny and generated more than a little hostility. His first protocol was reviewed by both the National Institutes of Health (NIH) and the Food and Drug Administration (FDA). Over a period of seven months, seven regulatory committees conducted fifteen meetings and twenty hours of public hearings to assess the proposal.

In early 1990, Anderson and his collaborators received the final approval from the NIH’s Recombinant DNA Advisory Committee and had cleared all legal hurdles. By spring, they had identified Ashanti as a potential patient. Would her parents consent to an experimental treatment? Of course there were risks to the therapy, yet without it Ashanti would face a life of seclusion and probably death in the next few years. Given these odds, her parents opted to try the therapy. As Raj DeSilva told the Houston Chronicle, “What choice did we have?”

Ashanti and her parents flew to the NIH Clinical Center at Bethesda, Maryland. There, over the course of twelve days, Anderson and his colleagues Michael Blaese and Kenneth Culver slowly extracted some of Ashanti’s blood cells. Safely outside the body, the cells had new, working copies of the ADA gene inserted into them by a hollowed-out virus. Finally, starting on the afternoon of September 14, Culver injected the cells back into Ashanti’s body.

The gene therapy had roughly the same goal as a bone-marrow transplant—to give Ashanti a supply of her own cells that could produce ADA. Unlike a bone-marrow transplant, gene therapy carries no risk of rejection. The cells Culver injected back into Ashanti’s bloodstream were her own, so her body recognized them as such.

The impact of the gene therapy on Ashanti was striking. Within six months, her T-cell count rose to normal levels. Over the next two years, her health continued to improve, allowing her to enroll in school, venture out of the house, and lead a fairly normal childhood.

Ashanti is not completely cured—she still takes a low dose of PEG-ADA. Normally the dose size would increase with the patient’s age, but her doses have remained fixed at her four-year-old level. It’s possible that she could be taken off the PEG-ADA therapy entirely, but her doctors don’t think it’s yet worth the risk. The fact that she’s alive today—let alone healthy and active—is due to her gene therapy, and also helps prove a crucial point: genes can be inserted into humans to cure genetic diseases.

From Healing to Enhancing

After Ashanti’s treatment, the field of gene therapy blossomed. Since 1990, hundreds of labs have begun experimenting with gene therapy as a technique to cure disease, and more than five hundred human trials involving over four thousand patients have been launched. Researchers have shown that it may be possible to use gene therapy to cure diabetes, sickle-cell anemia, several kinds of cancer, Huntington’s disease, and even to open blocked arteries.

While the goal of gene therapy researchers is to cure disease, gene therapy could also be used to boost human athletic performance. In many cases, the same research that is focused on saving lives has also shown that it can enhance the abilities of animals, with the suggestion that it could enhance men and women as well.

Consider the use of gene therapy to combat anemia. Circulating through your veins are trillions of red blood cells. Pumped by your heart, they serve to deliver oxygen from the lungs to the rest of your tissues, and carry carbon dioxide from the tissues back out to the lungs and out of the body. Without enough red blood cells, you can’t function. Your muscles can’t get enough oxygen to produce force, and your brain can’t get enough oxygen to think clearly. Anemia is the name of the condition of insufficient red blood cells. Hundreds of thousands of people worldwide live with anemia, and with the lethargy and weakness that are its symptoms. In the United States, at least eighty-five thousand patients are severely anemic as a result of kidney failure. Another fifty thousand AIDS patients are anemic due to side effects of the HIV drug AZT.

In 1985, researchers at Amgen, a biotech company based in Thousand Oaks, California, looking for a way to treat anemia, isolated the gene responsible for producing the hormone erythropoietin (EPO). Your kidneys produce EPO in response to low levels of oxygen in the blood. EPO in turn causes your body to produce more red blood cells. For a patient whose kidneys have failed, injections of Amgen’s synthetic EPO can take up some of the slack. The drug is a lifesaver, so popular that the worldwide market for it is as high as $5 billion per year, and therein lies the problem: the cost of therapy is prohibitive. Three injections of EPO a week is a standard treatment, and patients who need this kind of therapy end up paying $7,000 to $9,000 a year. In poor countries struggling even to pay for HIV drugs like AZT, the added burden of paying for EPO to offset the side effects just isn’t feasible.

What if there was another way? What if the body could be instructed to produce more EPO on its own, to make up for that lost to kidney failure or AZT? That’s the question University of Chicago professor Jeffrey Leiden asked himself in the mid-1990s. In 1997, Leiden and his colleagues performed the first animal study of EPO gene therapy, injecting lab monkeys and mice with a virus carrying an extra copy of the EPO gene. The virus penetrated a tiny proportion of the cells in the mice and monkeys and unloaded the gene copies in them. The cells began to produce extra EPO, causing the animals’ bodies to create more red blood cells. In principle, this was no different from injecting extra copies of the ADA gene into Ashanti, except in this case the animals already had two working copies of the EPO gene. The one being inserted into some of their cells was a third copy; if the experiment worked, the animals’ levels of EPO production would be boosted beyond the norm for their species.

That’s just what happened. After just a single injection, the animals began producing more EPO, and their red-blood-cell counts soared. The mice went from a hematocrit of 49 percent (meaning that 49 percent of their blood volume was red blood cells) to 81 percent. The monkeys went from 40 percent to 70 percent. At least two other biotech companies, Chiron and Ariad Gene Therapies, have produced similar results in baboons and monkeys, respectively.

The increase in red-blood-cell count is impressive, but the real advantage of gene therapy is in the long-lasting effects. Doctors can produce an increase in red-blood-cell production in patients with injections of EPO itself—but the EPO injections have to be repeated three times a week. EPO gene therapy, on the other hand, could be administered just every few months, or even just once for the patient’s entire lifetime.

The research bears this out. In Leiden’s original experiment, the mice each received just one shot, but showed higher red-blood-cell counts for a year. In the monkeys, the effects lasted for twelve weeks. The monkeys in the Ariad trial, which went through gene therapy more than four years ago, still show higher red-blood-cell counts today.

This is a key difference between drug therapy and gene therapy. Drugs sent into the body have an effect for a while, but eventually are broken up or passed out. Gene therapy, on the other hand, gives the body the ability to manufacture the needed protein or enzyme or other chemical itself. The new genes can last for a few weeks or can become a permanent part of the patient’s genome.

The duration of the effect depends on the kind of gene vector used and where it delivers its payload of DNA. Almost all of the DNA you carry is located on twenty-three pairs of chromosomes that are inside the nuclei of your cells. The nucleus forms a protective barrier that shields your chromosomes from damage. It also contains sophisticated DNA repair mechanisms that patch up most of the damage that does occur.

Insertional gene vectors penetrate all the way into the nucleus of the cell and splice the genes they carry into the chromosomes. From that point on, the new genes get all the benefits your other genes enjoy. The new genes are shielded from most of the damage that can happen inside your cells. If the cell divides, the new genes get copied to the daughter cells, just like the rest of your DNA. Insertional vectors make more or less permanent changes to your genome.

Noninsertional vectors, on the other hand, don’t make it into the nucleus of your cells. They don’t splice the new genes they carry into your chromosomes. Instead, they deliver their payload of DNA and leave it floating around inside your cells. The new DNA still gets read by the cell. It still instructs the cell to make new proteins. But it doesn’t get copied when the cell divides. Over time, it suffers from wear and tear, until eventually it breaks up, and its effects end.

The difference in durations among drugs, noninsertional vectors and insertional vectors gives us choices. We can choose to make a temporary change with a drug, which will wear off in a few hours or days; a semipermanent change with noninsertional gene therapy, whose effects will last for weeks or months depending on the genes and type of cell infected; or a permanent change by inserting new genes directly into your genome. Each of these three options is appropriate in certain situations. In the context of EPO, the idea of semipermanent or permanent change by means of gene therapy has definite advantages. It cuts down on the need for frequent injections, which means that the gene therapy approach can end up being much cheaper than the drug therapy approach.

....

Excerpted from More Than Human by Ramez Naam Copyright © 2005 by Ramez Naam . Excerpted by permission of Broadway, a division of Random House, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Friday, February 03, 2006

Websites

http://biomech.media.mit.edu/

Note: do not log on as xteamblackx to post... sign up for your own account... do what the e-mail said... thanks

Global Aging report

http://www.aarp.org/research/international/gra/gra_fall_2005/index.html

Looking Abroad to Meet the Demands for Caregivers

By Ron Hoppe, COO and Co-founder of WorldWide HealthStaff Associates Ltd.

As the American population ages, the need for qualified professional caregivers is increasing as well. But what are organizations that provide long-term care to do when their nursing workforce is also aging? Not only is the nursing workforce aging, but it is doing so at twice the rate of the general working-age population.

Starting in the late 1990s, some long-term care providers in the US began to experience chronic nursing vacancies as traditional recruitment methods no longer attracted a sufficient number of qualified nurses. Recruitment strategies were soon bolstered by expanded retention programs designed to keep existing nurses in the workforce longer and to attract those who had left the profession back to work, if only on a part-time basis.

At the beginning of the 21st century, long-term care providers continued to grapple with an increasing shortage of nurses even as they planned expanded facilities and programs to meet the burgeoning demand for services. These factors, combined with workforce data now projecting a sharp increase in the number of nurses retiring, motivated some to seriously explore the viability of international recruitment.

Long-term care employers discovered that there was an abundant supply of highly educated and skilled nurses in a number of countries, especially those with emerging economies such as the Philippines and India. Employers also found that some nurses in other developed countries, such as Canada, the United Kingdom, and Australia, were interested in working and living in the US, albeit in relatively small numbers.

However, for international recruitment to be included in an employer's overall staffing plan, strategies were needed to meet the stringent US Registered Nurse licensure requirements and the equally stringent US immigration requirements.
"Long-term care employers discovered that there was an abundant supply of highly educated and skilled nurses in a number of countries, especially those with emerging economies such as the Philippines and India."

International recruitment efforts were focused primarily on countries where:

* education standards for nursing were recognized as being equivalent to US standards;
* there was a general level of English language proficiency;
* there was some history of immigration to the US; and
* there was a sufficient supply of nurses that could be recruited without devastating the workforce of the nurse’s home country.

The application of these criteria resulted in the Philippines and India becoming the primary countries in which international recruitment activities were undertaken.

While international recruitment was providing additional nurses to the long-term care workforce, employers realized that this strategy was not without risks. The aftermath of the terrorist attacks on the US in 2001, changes to licensure rules and immigration regulations, and sometimes lengthy immigration processing times all contributed to a process that was more complex than some employers had anticipated.

Employers who achieve the greatest success in recruiting internationally include a number of key elements in their strategies. These include:

* a strategic vision and commitment to the process;
* communication with existing staff throughout the international recruitment process to ensure organizational buy-in and support;
* contracting competent professional help to manage all aspects of the process;
* mentorship programs and structured orientations for nurses on their arrival in the US that specifically address the practice differences the nurse will encounter; and
* a clear understanding of the acculturation issues that the international nurse faces in integrating into the US long-term care workforce and into US society in general.

Concerns have been raised regarding the impact of international recruitment on the nurse's home country. While these concerns are valid and need to be taken seriously when recruiting in some developing countries, countries such as the Philippines and India have a long history of purposely training nurses in numbers well beyond their domestic requirements, specifically for employment abroad. Nursing curricula in these countries have been developed with careful consideration of US standards.

The countries from which nurses have been recruited also receive significant benefits. Nurses employed in the US tend to send a portion of their earnings back to their home country to support their extended families, and some nurses eventually return home, transferring the knowledge and skills they gained in the US by assuming leadership positions in academia and in hospitals.

Although the nursing shortage eased slightly in 2004 as a result of domestic and international initiatives, US production of new graduates remains below current demand, and the number of older nurses continues to increase three times faster than the number of younger workers.

Overall, the international recruitment of nurses is having a beneficial impact on the delivery of long-term care services in the US. International nurses help ensure that an appropriate number of nurses are available to meet the demands of an aging population. With current US workforce and demographic studies predicting that the greatest shortage of nurses still lies ahead, international recruitment will remain an important element of the staffing plans of many long-term care providers.

Aging in Canada

Quick Facts

Total Population (in millions): 32.5
Rank by Population: 35th
Total Fertility Rate: 1.61 children born/woman

                                              Men    Women
Life expectancy at birth (in years)           76.4   83.4
Median Age                                    36.9   38.8
Percentage of Population Aged 60+             16     19
Percentage of 60+ Population in Labour Force  19     8
Statutory Retirement Age*                     65     65

* Statutory retirement age is the age at which a person becomes eligible to receive the state pension.

Sources: CIA World Factbook (2004); United Nations Population Division, DESA (2004)

Contributors

Finally I learned how to add members to the blog... a bit late though... lol

This is what you do...

I sent an e-mail to invite you... create a username and accept the invitation... then use that account to post stuff on the site... that way we'll know who posted what... we have to post all the stuff we found very, very soon...

The project is due this week.... :(