Poverty

Scholarly as well as ideological debate has long centered on the most elementary questions concerning poverty. What is poverty? How can it be measured? What causes it? Is it a natural phenomenon or a symptom of a poorly ordered society? Though answers to all these questions abound, there is no definitive answer to any one of them, nor can there ever be, for the questions are not purely demographic but moral, ethical, and political as well. Poverty is a concept, not a fact, and must be understood as such. Even though no definitive answers are possible, this does not mean that all answers are thereby equal; many are based on ignorant assumptions and ill-formed judgments. Sociologists involved in poverty research seek to ensure that the meaning and consequences of the various points of view are understood, and that both theoretical and policy research rests soundly on clear definitions and reliable data.

Even the definition of ‘‘poverty’’ is problematic. The word is derived from the French pauvre, meaning ‘‘poor.’’ Poverty is simply the state of lacking material possessions, of having little or no means to support oneself. All would agree that anyone lacking the means necessary to remain alive is in poverty, but beyond that there is little agreement. Some scholars and policy makers would draw the poverty line at the bare subsistence level, such as Rowntree’s ‘‘the minimum necessaries for the maintenance of merely physical efficiency’’ (1901, p. viii). Others argue for definitions that include persons whose level of living is above subsistence but who have inadequate means; among those holding the latter view, further arguments concern the definition of adequacy. Social science cannot resolve these most basic arguments. For example, the level of living implied by the poverty threshold in the United States would be seen as desirable but unattainable in many other countries, yet few would suggest that poverty in the United States be defined by such outside standards. Sociologists can evaluate the demographic and economic assumptions underlying standards of poverty, but not the standards themselves.

Conceptions of Poverty

Poverty can be defined in absolute or relative terms. The subsistence line is a good example of an absolute definition (i.e., below this line one does not have sufficient resources to survive). A criterion pegged to the income distribution, such as some fraction of the median income, is a good example of a relative definition (e.g., ‘‘All persons earning less than 25 percent of the median income are poor’’). In all industrial societies an absolute definition will have far fewer persons officially in poverty than will a relative definition, creating natural political pressure for absolute definitions. For example, a study in 1976 revealed that if poverty were defined as income below 50 percent of the median, data on income distributions would show that an unchanging 19 percent of the population had been poor for almost two decades (U.S. DHEW 1976). Absolute definitions, by contrast, show declines in poverty over time in industrial nations.

There are valid arguments for both types of definitions. Some argue that relative definitions of poverty render the term meaningless in affluent societies and make cross-national comparisons difficult: in an advanced industrial society, 50 percent of national median income could leave one adequately provided for, while the same percentage in many less industrialized societies would not provide basic necessities to sustain life. On the other hand, within societies there is evidence that most people see poverty in relative terms rather than as an absolute standard (Rainwater 1974; Kilpatrick 1973). That is, popular conceptions of what level of living constitutes poverty have been found to change as general affluence rises and falls. Advocates of relative measures point out that any absolute measure is arbitrary and thus meaningless. A reasonable definition of the poor, they argue, should be one that demarcates the lower tail of the income distribution as the poor, whatever the absolute level of living that tail represents, for those persons will be poor by the standards of that time and place. As the average level of income rises and falls, they argue, what is seen as poverty will, and should, change. Advocates of absolute measures do not deny that the perception of poverty is intimately tied to distributional inequality, but argue that relative definitions are too vague for policy purposes. An absolute standard, defined at some concrete level of living, is a goal that can possibly be attained; once it is attained, they say, a new goal can be set. Eliminating poverty as defined by relative standards is a far more difficult goal, both practically and politically. As T. H. Marshall noted, ‘‘the question of what range of inequality is acceptable above the ‘poverty line’ can only marginally, if at all, be affected by or affect the decision of where that line should be drawn’’ (1981, p. 52).
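The distributional contrast can be made concrete with a minimal sketch (Python, with hypothetical incomes and thresholds; none of these figures are official):

    from statistics import median

    # Hypothetical incomes, in dollars per year.
    incomes = [4_000, 7_500, 12_000, 18_000, 25_000, 32_000, 48_000, 75_000]

    ABSOLUTE_LINE = 6_000        # assumed subsistence-style threshold
    RELATIVE_FRACTION = 0.50     # assumed: half of median income

    relative_line = RELATIVE_FRACTION * median(incomes)   # 10,750 here

    absolute_poor = sum(1 for y in incomes if y < ABSOLUTE_LINE)   # 1 of 8
    relative_poor = sum(1 for y in incomes if y < relative_line)   # 2 of 8

    # If every income doubles, the absolute count drops to zero, while the
    # relative count, which depends only on the shape of the distribution,
    # remains 2. This is the key property distinguishing the two definitions.
    print(absolute_poor, relative_poor)

The final comment captures the political point in the text: under an absolute line, general economic growth alone can ‘‘eliminate’’ poverty; under a relative line, only a change in the distribution can.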

Relative versus absolute poverty is a distributional distinction, but there are other important distinctions as well. A social distinction, and one with considerable political import, is usually made between the ‘‘deserving poor’’ and the ‘‘undeserving poor.’’ In their brief summary of the historical origins of this distinction, Morris and Williamson (1986, pp. 6–12) maintain that it became significant in the fourteenth century, when, for a variety of reasons (the decline of feudalism, the rise of a market economy with concomitant periodic labor dislocations, bubonic plague–induced regional labor shortages), the poor became geographically mobile for the first time. Before that, the local Catholic parish, with its ‘‘Blessed are the poor’’ theology, was the primary caretaker of the indigent. Mobility caused an increase in the number of able-bodied individuals needing temporary assistance, and troubles arising from their presence contributed to a growing antipathy toward the able-bodied poor.

Katz (1989) also traces the origins of the ‘‘undeserving poor’’ in part to demographic factors. He points out that, prior to the twentieth century, poverty was a seemingly unalterable fact of life, and most people would spend their lives in it. Thus no moral taint was attached to poverty. The only policy question usually involved the locus of responsibility for aid, and the answer was a simple one: Responsibility was local, and those needy persons not belonging to the community could be ‘‘resettled.’’ Increased population mobility made the settlement provisions unworkable, and the original distinction between the genuinely needy and ‘‘rogues, vagabonds, and sturdy beggars’’ hardened into a moral distinction between the poor, who needed no public relief, and ‘‘paupers,’’ those needing assistance because of personal failings (Katz 1989, pp. 12–14).

Feagin (1975, chap. 2) locates the origins of negative attitudes toward the poor in the Protestant Reformation. Under Protestantism, he notes, the ‘‘work ethic’’—the ideology of individualism—became a central tenet of the Western belief system. Poverty, in the extreme Calvinist version of this viewpoint, is largely a consequence of laziness and vice, and can even be regarded as just punishment from a righteous God. The rise of Puritan thought contributed to the increasing disfavor with which the unemployed and destitute were regarded. It became a matter of faith that poverty was individually caused and must therefore be individually cured. These ideas became secularized, and programs to aid the poor thereafter focused on curing the individual faults that led to poverty: Potential problems in the structure of society that caused unemployment and underemployment were not to be scrutinized in search of a solution. The notion of poverty continues to be in flux. As Marshall (1981) noted, the concept has been with us since antiquity, but its meaning has not been constant through the ages.

Most sociologists today distinguish among three major types of explanations of poverty: individual, situational, and structural theories. Individual theories attribute the primary cause of poverty to individual failings or, more neutrally, to individual differences—the central argument being that the poor are different from the nonpoor in some significant way. Situational theories agree that the poor are different from the nonpoor, but argue that the differences are a result of poverty, not a cause of it. Structural theories see differences in individual attributes as irrelevant, and argue that poverty has only societal-level origins: The characteristics of economic systems create poverty, not the characteristics of individuals.

Theory and Policy

The epitome of the individual viewpoint in the social sciences was the once-dominant ‘‘culture of poverty’’ explanation for destitution. Oscar Lewis (1961, 1966) is usually credited with this idea, which sees poverty not only as economic deprivation, or the absence of something, but also as a way of life, the presence of specific subcultural values and attitudes passed down from generation to generation. Lewis saw the structure of life among the poor as functional, a set of coping mechanisms without which the poor could not survive their harsh circumstances. But there were negative consequences of the value system as well, he noted. Family life was disorganized; there was an absence of childhood as a prolonged life-cycle stage, a proliferation of consensual marriages, and a very high incidence of spouse and child abandonment—all of which left individuals unprepared and unable to take advantage of opportunities. Exacerbating the problem, the poor were divorced from participation in and integration into the major institutions of society, leading to constant hostility, suspicion, and apathy. Many have maintained that the culture-of-poverty viewpoint dovetailed perfectly with a politically liberal view of the world. It blamed the poor as a group for their poverty, but held no single person individually responsible, nor did it blame the structure of the economy or the society. This view of the poor led to antipoverty policies directed at changing the attitudes and values of those in poverty, so that they could ‘‘break out’’ of the dysfunctional cultural traits they had inherited. It led political liberals and radicals to attempts to ‘‘organize the poor.’’ Political conservatives transformed the explanation into one that held the poor more culpable individually, and the problem into one that was intractable—‘‘benign neglect’’ being then the only sensible solution (Banfield 1958, 1970; Katz 1989).

There were many problems with the culture-of-poverty explanation. Most serious was the fact that the cultural scenario simply does not fit. Only a minority of the poor are poor throughout their lives; most move in and out of poverty. Also, a substantial proportion of those in poverty are either women with children who fell into poverty when abandoned by a spouse, or the elderly who became poor when their working lives ended: Neither event can be explained by the culture of the class of destination. Many studies falsified specific aspects of the culture-of-poverty thesis (for a review, see Katz 1989, pp. 41 ff.), and Hyman Rodman’s influential notion of the ‘‘lower-class value stretch’’ (1971) offered an alternative explanation: the poor actually share mainstream values but must ‘‘stretch’’ them to fit their circumstances; remove the poverty and they fit neatly into the dominant culture, so attempts to alter their ‘‘culture’’ are unnecessary and meaningless, since ‘‘culture’’ is not the problem. Nonetheless, the culture-of-poverty thesis was (and to some extent still is) a very popular explanation for poverty, probably in part because it fits so well the individualistic biases of most Americans. Surveys of attitudes toward poverty have shown that most persons prefer ‘‘individualistic’’ explanations, which place the responsibility for poverty primarily on the poor themselves. A minority of Americans subscribe to ‘‘structural’’ explanations that blame external social and economic factors, and this minority consists largely of the young, the less educated, lower income groups, nonwhites, and Jews (Feagin 1975, p. 98).

A more sophisticated recent treatment, one that retains some of the explanatory power of the culture-of-poverty argument without that theory’s untenable assumptions, is William Wilson’s depiction of the ‘‘underclass’’ (1987). Wilson’s work on the underclass has been criticized by some as a return to classical culture-of-poverty theory in a new guise. It is not, of course, and represents a very different type of explanation. Wilson defines the underclass as an economically disadvantaged group whose marginal economic position and weak attachment to the labor force is ‘‘uniquely reinforced by the neighborhood or social milieu’’ (Wilson 1993, p. 23). Changes in the geography of employment, industrial specialization, and other factors have resulted in a rise in joblessness among urban minorities, which has in turn led to an increase in other social dislocations. Chief among these has been the steady out-migration of working-class and middle-class families from the inner cities, removing the groups that would normally provide a social buffer. Wilson notes: ‘‘in a neighborhood with a paucity of regularly employed families and with the overwhelming majority of families having spells of long-term joblessness, people experience a social isolation that excludes them from the job network system that permeates other neighborhoods and that is so important . . . other alternatives such as welfare . . . are not only increasingly relied on, they come to be seen as a way of life’’ (1987, p. 57). The 1990 U.S. Census showed that about 15 percent of the poor lived in neighborhoods where the poverty rate was at least 40 percent (O’Hare 1996). In these neighborhoods, where few are likely to have resources or job networks, reside Wilson’s ‘‘underclass.’’ Unlike culture-of-poverty theory, Wilson’s theory analyzes the structural and cultural resources of poor places, rather than the socialized attitudes and values of poor people. Wilson contends that ‘‘ghetto-specific cultural traits’’ are relevant to understanding the behavior of inner-city poor people, but these traits, contra culture-of-poverty theory, do not have a life of their own. That is, they are an effect of deprivation and social isolation, not a cause of it, and they command very little commitment—they are not self-perpetuating. ‘‘Social isolation,’’ Wilson’s key concept, implies not differential socialization of the inner-city poor, but rather their lack of cultural resources supporting the desirability and possibility of achieving culturally normative aspirations. The individual characteristics normally associated with a culture-of-poverty argument represent expected and even rational responses to adverse environments, not socialized belief systems.

While Wilson’s work has been very influential in the social sciences, one of the most politically influential recent works on poverty policy has been that of Murray (1984). Murray argued that the viewpoint that individuals ultimately cause their own poverty changed in the 1960s to the viewpoint that the structure of society was ultimately responsible. This alteration in the intellectual consensus, which freed the poor from responsibility for their poverty, was fatally misguided, he argues, and caused great damage to the poor. Despite enormously increased expenditures on social welfare from 1965 on, he maintains, progress against poverty ceased at that point. In the face of steadily improving economic conditions, the period 1965–1980 was marked by increases in poverty, family breakdown, crime, and voluntary unemployment. Murray argues that this occurred precisely because of the increased expenditures on social welfare, not despite them. Work incentive declined during these years because of government policies that rewarded lack of employment and nonintact family structure. It is a standard economic principle that any activity that is subsidized will tend to increase. Murray’s arguments have had policy impact, but have been subject to extensive criticism by students of the field.

As evidence of the disincentive to work brought about by social welfare payments, Murray cites the Negative Income Tax (NIT) experiments. These were large social experiments designed to assess the effects of a guaranteed income. The first NIT experiment was a four-year study in New Jersey from the late 1960s to the early 1970s. In this study, 1,375 intact ‘‘permanently poor’’ families were selected, and 725 of them were assigned to one of eight NIT plans. It was found that the reduction in labor-market activity for males caused by a guaranteed income was not significant, but that there were some reductions for females (5–10 percent of activity), most of which could be explained by the substitution of increased child care (home employment) for labor-market activity. In a larger NIT study conducted throughout the 1970s, the Seattle-Denver Income Maintenance Experiment (usually referred to in the literature as the SIME-DIME study), much larger work disincentives were found: about 10 percent for men, 20 percent for their spouses, and up to 30 percent for women heading single-parent households (see Haveman 1987, chap. 9, for an excellent summary of the many NIT experiments). Murray offered these findings as evidence that existing welfare programs contributed to poverty by creating work disincentives. Cain (1985) pointed out that the experiments provided much higher benefits than existing welfare programs, and also noted that, given the low pay for women at that level, the 20 percent reduction for wives would have a trivial effect on family income. If it resulted in a proportionate substitution of work at home, the reduction could actually lead to an improvement in the lives of the poor. Commentators have presented arguments against almost every point made by Murray, insisting that either his measures or his interpretations are wrong. For example, Murray’s measure of economic growth, the gross national product (GNP), did increase throughout the 1970s, but real wages declined and inflation and unemployment increased, so poverty was not, as he argued, increasing during good times. His other assertions have been similarly challenged (for summaries, see McLanahan et al. 1985; Katz 1989, chap. 4), but though his arguments and empirical findings do not stand up to close scrutiny, the broad viewpoint his work represents remains important in policy deliberations, probably because it offers pseudoscientific support for the biases of many.

Measures of Poverty

In the United States, official poverty estimates are based on the Orshansky Index, named for Mollie Orshansky of the Social Security Administration, who first proposed it (Orshansky 1965). It is an absolute poverty measure, based on the calculated cost of food sufficient for an adequate nutritional level and on the assumption that persons must spend one-third of their after-tax income on food. Thus the poverty level is, in principle, three times the annual cost of a nutritionally adequate ‘‘market basket.’’ The measure was refined by stratifying families by size, composition, and farm/nonfarm residence, and by creating different income cutoffs for poverty for families of differing types. Originally there were 124 income cutoff points, but by 1980 the separate thresholds for farm families and female-headed households had been eliminated, and the number of thresholds was reduced to 48. Since 1969 the poverty line has been regularly updated using the Consumer Price Index (CPI) to adjust for increased costs. The original index was based on the least costly of four nutritionally adequate food plans developed by the Department of Agriculture. Since a 1955 Department of Agriculture survey of food consumption patterns had determined that families of three or more spent approximately one-third of their income on food, the original poverty index was simply triple the average cost of the economy food plan. This index was adjusted for smaller families to compensate for their higher fixed costs, and for farm families to compensate for their lower costs (the farm threshold began as 70 percent of the nonfarm threshold for an equivalent household, and was raised to 85 percent in 1969). Originally the poverty index was adjusted yearly according to the cost of the food items in the Department of Agriculture economy budget, but in 1969 this changed to a simple CPI adjustment (U.S. Bureau of the Census 1982).
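A minimal sketch of this construction, with hypothetical food-plan costs and CPI values chosen only to show the mechanics described above:

    FOOD_MULTIPLIER = 3   # food assumed to take one-third of income

    def orshansky_threshold(annual_food_plan_cost):
        """Poverty threshold as three times the economy food plan's cost."""
        return FOOD_MULTIPLIER * annual_food_plan_cost

    def cpi_updated_threshold(base_threshold, base_cpi, current_cpi):
        """Post-1969 practice: scale a base-year threshold by CPI growth."""
        return base_threshold * (current_cpi / base_cpi)

    base = orshansky_threshold(1_000.0)               # hypothetical plan cost
    print(cpi_updated_threshold(base, 100.0, 340.0))  # 10200.0

Note the design consequence: after 1969 the food plan itself drops out of the yearly update, and the threshold moves with the general price level rather than with food prices, a point taken up in the criticisms below.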

Over the years there have been many criticisms of the official poverty measure and its assumptions (for summaries and extended discussion, see U.S. DHEW 1976; Haveman 1987). The first set of problems, some argue, comes from the fact that the very basis of the measure is flawed. The economy food budget at the measure’s core is derived from an outdated survey that may not reflect changes in tastes and options. Further, the multiplication of food costs by three is appropriate only for some types of families; other types must spend greater or lesser proportions on food. Some estimates indicate that the poor spend half or more of their income on food, while the more well-to-do spend one-third or less. Even if the multiplier were correct, the original Department of Agriculture survey derived it from posttax income; in the poverty measure it is applied to pretax income, though the poor pay little in taxes. Other problems often cited include the fact that the ‘‘economy budget’’ assumes sufficient knowledge for wise shopping—a dubious assumption for the poor—and the fact that the poor are often locked into paying much higher prices than average because of a lack of transportation. An additional problem is that the poverty thresholds are updated not by changes in the actual price of food, but by changes in the CPI, which reflects changes in the prices of many other items, such as clothing, shelter, transportation, fuel, medical fees, recreation, furniture, appliances, and personal services, many of them probably irrelevant to the expenses of the poor. Findings are mixed, but it is generally agreed that the losses in purchasing power suffered by the poor in inflationary periods are understated by the CPI (see Oster et al. 1978, p. 25). A second set of problems derives from the fact that the definition is based on income only: both in-kind transfers and assets are excluded. Excluding in-kind transfers means that government-provided food, shelter, or medical care is not counted. Excluding assets means that a wealthy family with little current income could be counted as poor.
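The force of the multiplier criticism can be stated compactly (writing s for a family’s food share and F for its required food budget, symbols introduced here only for illustration): the implied threshold is

    T = F / s

A share of one-third yields the official multiplier of 3; a share of one-half, as some estimates suggest for the poor, would yield a multiplier of 2; a share of one-fifth would yield 5. The choice of s thus drives the threshold directly, and a share measured for one kind of family misstates the threshold for others.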

Demography of Poverty

In 1987 the average poverty threshold for a family of four was $11,611 per year (all figures in this paragraph are from U.S. Bureau of the Census 1989a, p. 163; 1989b, p. 166; 1990). This means the assumed annual cost of an adequate diet for four persons was $3,870.33, or about 88 cents per meal per person. For a single person the poverty threshold was $5,778, and the food allowance $1.76 per meal. In 1986 the family-of-four poverty threshold was $11,203, allowing 85 cents per meal, and by 1988 it had risen to $12,091, or 92 cents per meal.

In the United States in 1987 there were 32,341,000 persons below the poverty threshold, or 13.4 percent of the population. In 1988 there were 31,878,000, or 13.1 percent—almost a half-million fewer persons below official poverty than the year before. These figures underestimate official poverty somewhat, since they are based on the Current Population Survey, which is primarily a household survey and thus does not count the homeless outside shelters. The decline from 1987 to 1988 in the number in poverty is part of a long-term trend. In 1960 there were 8 million more—39,851,000 persons—who by today’s guidelines would have been counted as officially in poverty, representing 22.2 percent of the population. By the official, absolute standard, poverty has greatly decreased over the past three decades, both in the actual number of persons below the threshold and, even more dramatically, in the percentage of the population in poverty (U.S. Bureau of the Census 1989). This decrease actually occurred during the 1960s, since by 1970 the number of people in poverty had already declined to 25,420,000, or 12.6 percent of the population; the number and percentage have risen since then, but never back to the 1960 levels.

Poverty is not evenly spread over the population. Of those below the official poverty level in 1988, 57.1 percent were female, 29.6 percent were black, and 16.9 percent were Hispanic. Female-headed families with children were disproportionately poor: in poverty in 1988 were 38.2 percent of all such white families and 56.3 percent of all such black families (this is a gender phenomenon, not a single-parent one, since in 1988 only 18 percent of male-headed single-parent families were below the poverty threshold).

The age composition of the poor population has changed. In 1968, 38.6 percent of those in poverty were of working age (18–64); twenty years later, 49.6 percent were. From 1968 to 1988 the percentage of the poor who were over 65 declined from 18.2 percent to 10.9 percent, and the percentage who were children under 18 declined from 43.1 percent to 39.5 percent. A higher percentage of working-age poor is seen by some as a sign of worse times. It almost certainly reflects not only economic downturns but also, in part, ideological biases against helping the presumably able-bodied poor: most antipoverty programs have been specifically aimed at the old or the young. O’Hare (1996) points out that both the poverty rate and the number of poor in the 1990s exceeded the corresponding figures for the 1970s. He notes that all of the dramatic postwar decline in poverty rates occurred before 1973; after that, poverty rates in the United States rose through the early 1980s, then declined, but never back to the 1973 level.
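The per-meal allowances cited at the beginning of this section follow directly from the Orshansky multiplier, assuming three meals per person per day over 365 days (the three-meals convention is an assumption made explicit here). For the 1987 family-of-four threshold:

    food budget = $11,611 / 3 = $3,870.33
    cost per meal = $3,870.33 / (4 persons × 3 meals × 365 days)
                  = $3,870.33 / 4,380 ≈ $0.88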

Despite extensive debate about the policy implications of various definitions of poverty, and despite the inherent difficulty of locating this population, one can have confidence that the poor are being counted with reasonable precision. More than one generation of social scientists has contributed to the refinement of the measures of poverty, and existing statistical series are based on data collected by the U.S. Bureau of the Census—an organization with very high technical competence. Nonetheless, there is one group, the extremely poor, whose numbers are in doubt. All current measurement relies on the household unit and assumes some standard type of domicile. As Rossi puts it, ‘‘our national unemployment and poverty statistics pertain only to that portion of the domiciled population that lives in conventional housing’’ (1989, p. 73). An extremely poor person living, perhaps temporarily, in a household where other adults had sufficient income would not be counted as being in poverty. Even more important, the literally homeless who live on the street, and those whose homes consist of hotels, motels, rooming houses, or shelters, are not counted at all in the yearly Current Population Survey (the decennial census does attempt to count those in temporary quarters, but the 1990 census was the first to attempt to count those housed in unconventional ways or not at all). The studies of Rossi and his colleagues indicate that the number of extremely poor people in the United States (those whose income is less than two-thirds of the poverty level) is somewhere between four million and seven million. The number of literally homeless poor people, those who do not figure into the official poverty counts, must be estimated; the best available estimate is that they number between 250,000 and 350,000, about 5–8 percent of the extremely poor population (Rossi 1989). The number of extremely poor people has more than doubled since 1970, while the population increased by only 20 percent (Rossi 1989, p. 78). The extremely poor are at considerable risk of becoming literally homeless, and when they do so, they disappear from official statistics (just as the unemployed cease being officially unemployed soon after they give up the search for work). Ensuring that official statistics remain reliable in the face of increasing extreme poverty is the field’s most recent methodological challenge.

Poverty in Low-Income Countries

Most of the discussion thus far has concerned poverty in the United States. Comparing poverty across countries is a difficult enterprise, but it is important if one is to put poverty in any individual country in perspective. Poverty in the United States and in other highly industrialized countries simply does not fall into the same category as poverty in less industrialized nations. Many of those classified as in poverty in the United States would be seen as reasonably well off by international standards. This means that, although many countries report a ‘‘percentage-in-poverty’’ figure for their populations, these figures cannot be sensibly compared, since the concept of what constitutes poverty varies so widely. Statistics from the United Nations Development Program’s 1998 Human Development Report illustrate the stark contrast. In 1998, in the forty-four countries the U.N. classifies as ‘‘least developed,’’ about 29 percent of the population was not expected to survive to age 40, compared with the 5 percent not expected to survive to that age in the industrial countries. In the least developed countries, 43 percent of the population has no access to safe water, 64 percent no access to sanitation, and 51 percent no access to health services (considering all developing countries rather than just the poorest, those figures would be 29 percent, 20 percent, and 58 percent, respectively). This level of living is characteristic of very few people in industrial nations, making poverty comparisons almost meaningless. The World Bank has attempted to provide international comparisons of poverty by creating a measure of the percentage of a country’s population living on less than $1 a day, calculated in 1985 international prices and ‘‘adjusted to local currency using purchasing power parities’’ (World Bank 1999, p. 69). The $1-a-day figure was chosen because it is the typical poverty line in low-income countries. World Bank figures indicate that about 1.3 billion people live on less than $1 a day. The Bank counted twenty-seven countries in which over 25 percent of the population lives on less than this amount, as well as fourteen countries (Guatemala, Guinea-Bissau, Honduras, India, Kenya, Lesotho, Madagascar, Nepal, Niger, Peru, Rwanda, Senegal, Uganda, Zambia) in which approximately half or more of the population does.
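The mechanics of the $1-a-day comparison can be sketched as follows; the conversion factor here is hypothetical, since real purchasing-power-parity (PPP) factors come from international price surveys:

    POVERTY_LINE = 1.00   # dollars per day, 1985 international prices

    def daily_income_in_ppp_dollars(local_income_per_day,
                                    local_units_per_ppp_dollar):
        """Convert a local-currency daily income into PPP dollars."""
        return local_income_per_day / local_units_per_ppp_dollar

    # 18 local units a day, where one PPP dollar buys what 12 local units buy:
    daily = daily_income_in_ppp_dollars(18.0, 12.0)   # 1.5 PPP dollars
    print(daily < POVERTY_LINE)                       # False: above the line

The point of the PPP adjustment is that market exchange rates understate what incomes buy locally; dividing by a price-based conversion factor rather than an exchange rate is what makes the $1-a-day line comparable across countries.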

It is clear that while relative poverty is a serious moral and political issue in industrial countries, absolute poverty—at levels unheard of in those countries—is a far more serious problem in much of the rest of the world.

The study of poverty is a difficult field, and not a purely sociological endeavor. As even this brief overview shows, a thorough understanding requires the combined talents of sociologists, economists, demographers, political scientists, historians, and philosophers. All these fields have contributed to our understanding of the phenomenon.

WAYNE J. VILLEMEZ

This article was published in the Encyclopedia of Sociology, Second Edition; Edgar F. Borgatta, Editor-in-Chief (University of Washington, Seattle), and Rhonda J. V. Montgomery, Managing Editor (University of Kansas, Lawrence).
