Additional Sociology Flashcards
Auguste Comte
Often called the “father of sociology,” Comte believed that all societies develop and progress through the following stages: religious, metaphysical, and scientific. Comte argued that society needs scientific knowledge based on facts and evidence to solve its problems—not speculation and superstition, which characterize the religious and metaphysical stages of social development. Comte viewed the science of sociology as consisting of two branches: dynamics, or the study of the processes by which societies change; and statics, or the study of the processes by which societies endure. He also envisioned sociologists as eventually developing a base of scientific social knowledge that would guide society in positive directions.

Herbert Spencer
The 19th-century Englishman Herbert Spencer (1820–1903) compared society to a living organism with interdependent parts. Change in one part of society causes change in the other parts, so that every part contributes to the stability and survival of society as a whole. If one part of society malfunctions, the other parts must adjust to the crisis and contribute even more to preserve society. Family, education, government, industry, and religion comprise just a few of the parts of the “organism” of society.

Spencer suggested that society will correct its own defects through the natural process of “survival of the fittest.” The societal “organism” naturally leans toward homeostasis, or balance and stability. Social problems work themselves out when the government leaves society alone. The “fittest”—the rich, powerful, and successful—enjoy their status because nature has “selected” them to do so. In contrast, nature has doomed the “unfit”—the poor, weak, and unsuccessful—to failure. They must fend for themselves without social assistance if society is to remain healthy and even progress to higher levels. Governmental interference in the “natural” order of society weakens society by wasting the efforts of its leadership in trying to defy the laws of nature.

Karl Marx
Not everyone has shared Spencer's vision of societal harmony and stability. Chief among those who disagreed was the German political philosopher and economist Karl Marx (1818–1883), who observed society's exploitation of the poor by the rich and powerful. Marx argued that Spencer's healthy societal “organism” was a falsehood. Rather than interdependence and stability, Marx claimed that social conflict, especially class conflict, and competition mark all societies.

The class of capitalists that Marx called the bourgeoisie particularly enraged him. Members of the bourgeoisie own the means of production and exploit the class of laborers, called the proletariat, who do not own the means of production. Marx believed that the very natures of the bourgeoisie and the proletariat inescapably lock the two classes in conflict. But he then took his ideas of class conflict one step further: He predicted that the laborers are not selectively “unfit,” but are destined to overthrow the capitalists. Such a class revolution would establish a “class-free” society in which all people work according to their abilities and receive according to their needs.

Unlike Spencer, Marx believed that economics, not natural selection, determines the differences between the bourgeoisie and the proletariat. He further claimed that a society's economic system decides peoples' norms, values, mores, and religious beliefs, as well as the nature of the society's political, governmental, and educational systems. Also unlike Spencer, Marx urged people to take an active role in changing society rather than simply trusting it to evolve positively on its own.

Emile Durkheim
Despite their differences, Marx, Spencer, and Comte all acknowledged the importance of using science to study society, although none actually used scientific methods. Not until Emile Durkheim (1858–1917) did a person systematically apply scientific methods to sociology as a discipline. A French philosopher and sociologist, Durkheim stressed the importance of studying social facts, or patterns of behavior characteristic of a particular group. The phenomenon of suicide especially interested Durkheim. But he did not limit his ideas on the topic to mere speculation. Durkheim formulated his conclusions about the causes of suicide based on the analysis of large amounts of statistical data collected from various European countries.

Durkheim certainly advocated the use of systematic observation to study sociological events, but he also recommended that sociologists avoid considering people's attitudes when explaining society. Sociologists should only consider as objective “evidence” what they themselves can directly observe. In other words, they must not concern themselves with people's subjective experiences.

Max Weber
The German sociologist Max Weber (1864–1920) disagreed with the “objective evidence only” position of Durkheim. He argued that sociologists must also consider people's interpretations of events—not just the events themselves. Weber believed that individuals' behaviors cannot exist apart from their interpretations of the meaning of their own behaviors, and that people tend to act according to these interpretations. Because of the ties between objective behavior and subjective interpretation, Weber believed that sociologists must inquire into people's thoughts, feelings, and perceptions regarding their own behaviors. Weber recommended that sociologists adopt his method of Verstehen, or empathetic understanding. Verstehen allows sociologists to mentally put themselves into “the other person's shoes” and thus obtain an “interpretive understanding” of the meanings of individuals' behaviors.

Sociology
Sociology is the scientific study of human groups and social behavior. Sociologists focus primarily on human interactions, including how social relationships influence people's attitudes and how societies form and change. Sociology, therefore, is a discipline of broad scope: Virtually no topic—gender, race, religion, politics, education, health care, drug abuse, pornography, group behavior, conformity—is taboo for sociological examination and interpretation.

Scientific Method for Sociology
An area of inquiry is a scientific discipline if its investigators use the scientific method, which is a systematic approach to researching questions and problems through objective and accurate observation, collection and analysis of data, direct experimentation, and replication (repeating) of these procedures. Scientists affirm the importance of gathering information carefully, remaining unbiased when evaluating information, observing phenomena, conducting experiments, and accurately recording procedures and results. They are also skeptical about their results, so they repeat their work and have their findings confirmed by other scientists.

Basic Sociological Research Concepts
An investigator begins a research study after deriving ideas from a specific theory, which is an integrated set of statements for explaining various phenomena. Because a theory is too general to test, the investigator devises a hypothesis, or testable prediction, from the theory, and tests this instead. The results of the research study either disprove or fail to disprove the hypothesis. If disproved, the investigator cannot make predictions based on the hypothesis and must question the accuracy of the theory. If not disproved, the scientist can make predictions based on the hypothesis.

Sociological Research: Designs, Methods
Sociologists use many different designs and methods to study society and social behavior. Most sociological research involves ethnography, or “field work” designed to depict the characteristics of a population as fully as possible.

Ethics in Sociological Research
Ethics are self-regulatory guidelines for making decisions and defining professions. By establishing ethical codes, professional organizations maintain the integrity of the profession, define the expected conduct of members, and protect the welfare of subjects and clients. Moreover, ethical codes give professionals direction when confronting ethical dilemmas, or confusing situations. A case in point is a scientist's decision whether to intentionally deceive subjects or to inform them about the true risks or goals of a controversial but much-needed experiment. Many organizations, such as the American Sociological Association and the American Psychological Association, establish ethical principles and guidelines. The vast majority of today's social scientists abide by their respective organizations' ethical principles.

“Unsocialized” Children
Socialization is the process whereby infants and children develop into social beings. Among other things, children develop a sense of self, memory, language, and intellect. And in doing so, they learn from their elders the attitudes, values, and proper social behaviors of the culture into which they were born. Becoming socialized benefits the individual by giving him or her the tools needed for success in the native culture, and also benefits the society by providing continuity over time and preserving its essential nature from generation to generation. In other words, socialization connects different generations to each other.

Stories of children found after years of living in the “wild” without any human contact occasionally appear in the literature. One of the most commonly cited examples is the Boy of Aveyron, who emerged little more than a “beast” from a forest in France in 1798. “Unsocialized” children such as this boy typically look more animal than human, prefer to remain naked (at least at first upon being discovered), lack human speech, have no sense of personal hygiene, fail to recognize themselves in a mirror, show little or no reasoning ability, and respond only partially to attempts to help them change from “animal into human.” The phenomenon of feral (literally, wild or untamed) children sparks much discussion in the nature versus nurture debate because the condition of these children suggests the important role that learning plays in normal human development.

Social scientists emphasize that socialization is intimately related to cognitive, personality, and social development. They argue that socialization primarily occurs during infancy and childhood, although they acknowledge that humans continue to grow and adapt throughout the lifespan. Sociologists also refer to the driving forces behind socialization as socializing agents, which include family, friends, peers, school, work, the mass media, and religion.

Culture and Society Defined
Culture consists of the beliefs, behaviors, objects, and other characteristics common to the members of a particular group or society. Through culture, people and groups define themselves, conform to society's shared values, and contribute to society. Thus, culture includes many societal aspects: language, customs, values, norms, mores, rules, tools, technologies, products, organizations, and institutions. This latter term, institution, refers to clusters of rules and cultural meanings associated with specific social activities. Common institutions are the family, education, religion, work, and health care.

Popularly speaking, being cultured means being well-educated, knowledgeable of the arts, stylish, and well-mannered. High culture—generally pursued by the upper class—refers to classical music, theater, fine arts, and other sophisticated pursuits. Members of the upper class can pursue high art because they have cultural capital, which means the professional credentials, education, knowledge, and verbal and social skills necessary to attain the “property, power, and prestige” to “get ahead” socially. Low culture, or popular culture—generally pursued by the working and middle classes—refers to sports, movies, television sitcoms and soaps, and rock music. Remember that sociologists define culture differently than they do cultured, high culture, low culture, and popular culture.

Sociologists define society as the people who interact in such a way as to share a common culture. The cultural bond may be ethnic or racial, based on gender, or due to shared beliefs, values, and activities. The term society can also have a geographic meaning and refer to people who share a common culture in a particular location. For example, people living in arctic climates developed different cultures from those living in desert climates. In time, a large variety of human cultures arose around the world.

Culture and society are intricately related. A culture consists of the “objects” of a society, whereas a society consists of the people who share a common culture. When the terms culture and society first acquired their current meanings, most people in the world worked and lived in small groups in the same locale. In today's world of 6 billion people, these terms have lost some of their usefulness because increasing numbers of people interact and share resources globally. Still, people tend to use culture and society in a more traditional sense: for example, being a part of a “racial culture” within the larger “U.S. society.”
Culture's Roots: Biological or Societal?
The nature versus nurture debate continues to rage in the social sciences. When applied to human culture, proponents of the “nature” side of the debate maintain that human genetics creates cultural forms common to people everywhere. Genetic mutations and anomalies, then, give rise to the behavioral and cultural differences encountered across and among human groups. These differences potentially include language, food and clothing preferences, and sexual attitudes, to name just a few. Proponents of the “nurture” side of the debate maintain that humans are a tabula rasa (Latin for “blank slate”) upon which everything is learned, including cultural norms. This fundamental debate has given social scientists and others insights into human nature and culture, but no solid conclusions.

More recently, social learning theorists and sociobiologists have added their expertise and opinions to the debate. Social learning theorists hold that humans learn social behaviors within social contexts. That is, behavior is not genetically driven but socially learned. On the other hand, sociobiologists argue that, because specific behaviors like aggression are common among all human groups, a natural selection must exist for these behaviors similar to that for bodily traits like height. Sociobiologists also hold that people whose “selected” behaviors lead to successful social adaptation are more likely to reproduce and survive. One generation can genetically transmit successful behavioral characteristics to the next generation.

Today, sociologists generally endorse social learning theory to explain the emergence of culture. That is, they believe that specific behaviors result from social factors that activate physiological predispositions, rather than from heredity and instincts, which are biologically fixed patterns of behavior. Because humans are social beings, they learn their behaviors (and beliefs, attitudes, preferences, and the like) within a particular culture. Sociologists find evidence for this social learning position when studying cultural universals, or features common to all cultures. Although most societies do share some common elements, sociologists have failed to identify a universal human nature that should theoretically produce identical cultures everywhere. Among other things, language, preference for certain types of food, division of labor, methods of socialization, rules of governance, and a system of religion represent typical cultural features across societies. Yet all these are general rather than specific features of culture. For example, all people consume food of one type or another. But some groups eat insects, while others do not. What one culture accepts as “normal” may vary considerably from what another culture accepts.
Material and Non‐Material Culture
Sociologists describe two interrelated aspects of human culture: the physical objects of the culture and the ideas associated with these objects.

Material culture refers to the physical objects, resources, and spaces that people use to define their culture. These include homes, neighborhoods, cities, schools, churches, synagogues, temples, mosques, offices, factories and plants, tools, means of production, goods and products, stores, and so forth. All of these physical aspects of a culture help to define its members' behaviors and perceptions. For example, technology is a vital aspect of material culture in today's United States. American students must learn to use computers to survive in college and business, in contrast to young adults in the Yanomamo society in the Amazon who must learn to build weapons and hunt.

Non-material culture refers to the nonphysical ideas that people have about their culture, including beliefs, values, rules, norms, morals, language, organizations, and institutions. For instance, the non-material cultural concept of religion consists of a set of ideas and beliefs about God, worship, morals, and ethics. These beliefs, then, determine how the culture responds to its religious topics, issues, and events.

When considering non-material culture, sociologists refer to several processes that a culture uses to shape its members' thoughts, feelings, and behaviors. Four of the most important of these are symbols, language, values, and norms.
Symbols and Language in Human Culture
To the human mind, symbols are cultural representations of reality. Every culture has its own set of symbols associated with different experiences and perceptions. Thus, as a representation, a symbol's meaning is neither instinctive nor automatic. The culture's members must interpret and over time reinterpret the symbol.

Symbols occur in different forms: verbal or nonverbal, written or unwritten. They can be anything that conveys a meaning, such as words on the page, drawings, pictures, and gestures. Clothing, homes, cars, and other consumer items are symbols that imply a certain level of social status.

Perhaps the most powerful of all human symbols is language—a system of verbal and sometimes written representations that are culturally specific and convey meaning about the world. In the 1930s, Edward Sapir and Benjamin Lee Whorf proposed that languages influence perceptions. While this Sapir-Whorf hypothesis—also called the linguistic relativity hypothesis—is controversial, it legitimately suggests that a person will more likely perceive differences when he or she possesses words or concepts to describe the differences.

Language is an important source of continuity and identity in a culture. Some groups, such as the French-speaking residents of Quebec in Canada, refuse to speak English, which is Canada's primary language, for fear of losing their cultural identity. In the United States, many immigrants resist efforts to make English the official national language.

Cultural Values
A culture's values are its ideas about what is good, right, fair, and just. Sociologists disagree, however, on how to conceptualize values. Conflict theory focuses on how values differ between groups within a culture, while functionalism focuses on the shared values within a culture. For example, American sociologist Robert K. Merton suggested that the most important values in American society are wealth, success, power, and prestige, but that not everyone has an equal opportunity to attain these values. Functional sociologist Talcott Parsons noted that Americans share the common value of the “American work ethic,” which encourages hard work. Other sociologists have proposed a common core of American values, including accomplishment, material success, problem-solving, reliance on science and technology, democracy, patriotism, charity, freedom, equality and justice, individualism, responsibility, and accountability.

A culture, though, may harbor conflicting values. For instance, the value of material success may conflict with the value of charity. Or the value of equality may conflict with the value of individualism. Such contradictions may exist due to an inconsistency between people's actions and their professed values, which explains why sociologists must carefully distinguish between what people do and what they say. Real culture refers to the values and norms that a society actually follows, while ideal culture refers to the values and norms that a society professes to believe.
Cultural Norms
Norms are the agreed-upon expectations and rules by which a culture guides the behavior of its members in any given situation. Of course, norms vary widely across cultural groups. Americans, for instance, maintain fairly direct eye contact when conversing with others. Asians, on the other hand, may avert their eyes as a sign of politeness and respect.

Sociologists speak of at least four types of norms: folkways, mores, taboos, and laws. Folkways, sometimes known as “conventions” or “customs,” are standards of behavior that are socially approved but not morally significant. For example, belching loudly after eating dinner at someone else's home breaks an American folkway. Mores are norms of morality. Breaking mores, like attending church in the nude, will offend most people of a culture. Certain behaviors are considered taboo, meaning a culture absolutely forbids them, like incest in U.S. culture. Finally, laws are a formal body of rules enacted by the state and backed by the power of the state. Virtually all taboos, like child abuse, are enacted into law, although not all mores are. For example, wearing a bikini to church may be offensive, but it is not against the law.

Members of a culture must conform to its norms for the culture to exist and function. Hence, members must want to conform and obey rules. They first must internalize the social norms and values that dictate what is “normal” for the culture; then they must socialize, or teach norms and values to, their children. If internalization and socialization fail to produce conformity, some form of “social control” is eventually needed. Social control may take the form of ostracism, fines, punishments, and even imprisonment.
Cultural Diversity
Many people mistakenly use such phrases as “American culture,” “white culture,” or “Western culture,” as if such large, common, and homogeneous cultures exist in the United States today. These people fail to acknowledge the presence of cultural diversity, or the presence of multiple cultures and cultural differences within a society. In reality, many different cultural groups make up the United States.

Smaller cultural groups that exist within but differ in some way from the prevailing culture interest sociologists. These groups are called subcultures. Examples of some subcultures include “heavy metal” music devotees, body-piercing and tattoo enthusiasts, motorcycle gang members, and Nazi skinheads. Members of subcultures typically make use of distinctive language, behaviors, and clothing, even though they may still accept many of the values of the dominant culture.

Ethnic groups living in the United States—such as Greek Americans, Italian Americans, Irish Americans, Mexican Americans, and African Americans—may also form subcultures. Most of these groups adjust to mainstream America, but may still retain many of their cultural customs and, in some cases, their native ethnic language.

A counterculture comes about in opposition to the norms and values of the dominant culture. Members of countercultures—such as hippies and protest groups—are generally teenagers and young adults, because youth is often a time of identity crisis and experimentation. In time many, but not all, members of countercultures eventually adopt the norms and values of the dominant culture.

Many people see the United States as “a melting pot” comprised of a variety of different cultural, subcultural, and countercultural groups. When the mainstream absorbs these groups, they have undergone assimilation. However, people today increasingly recognize the value of coexisting cultural groups who do not lose their identities.
This perspective of multiculturalism respects cultural variations rather than requiring that the dominant culture assimilate the various cultures. It holds that certain shared cultural tenets are important to society as a whole, but that some cultural differences are important, too. For example, children in schools today are being taught that the United States is not the only culture in the world, and that other viewpoints may have something to offer Americans.

Ethnocentrism involves judging other cultures against the standards of one's own culture. Norms within a culture frequently translate into what is considered “normal,” so that people think their own way of doing things is “natural.” These same people also judge other people's ways of doing things as “unnatural.” In other words, they forget that what may be considered normal in America is not necessarily so in another part of the world.

Cultural Relativism
A potentially problematic form of ethnocentrism is nationalism, or an overly enthusiastic identification with a particular nation. Nationalism often includes the notion that a particular nation has a God-given or historical claim to superiority. Such nationalism, for instance, was a special problem in World War II Nazi Germany.

Sociologists strive to avoid ethnocentric judgments. Instead, they generally embrace cultural relativism, or the perspective that a culture should be sociologically evaluated according to its own standards, and not those of any other culture. Thus, sociologists point out that there really are no good or bad cultures. And they are better able to understand the standards of other cultures because they do not assume their own is somehow better.

Toward a Global Culture
Some sociologists today predict that the world is moving closer to a global culture, devoid of cultural diversity. A fundamental means by which cultures come to resemble each other is the phenomenon of cultural diffusion, or the spreading of standards across cultures. Cultures have always influenced each other through travel, trade, and even conquest. As populations today travel and settle around the globe, however, the rate of cultural diffusion is increasing dramatically. Examples of social forces that are creating a global culture include electronic communications (telephones, e-mail, fax machines), the mass media (television, radio, film), the news media, the Internet, international businesses and banks, and the United Nations—to name only a few. Even phrases like “global village” seem to imply that the world is growing “smaller” every day.

Still, while many aspects of culture have been globalized, local societies and cultures remain stable and, in many instances, are being affirmed with enthusiasm. Although people may relocate on the other side of the planet, they tend to remain faithful to their culture of origin.

Types of Societies
Although humans have established many types of societies throughout history, sociologists and anthropologists (experts who study early and tribal cultures) usually refer to six basic types of societies, each defined by its level of technology.
Hunting and gathering societies

The members of hunting and gathering societies primarily survive by hunting animals, fishing, and gathering plants. The vast majority of these societies existed in the past, with only a few (perhaps a million people total) living today on the verge of extinction.

To survive, early human societies completely depended upon their immediate environment. When the animals left the area, the plants died, or the rivers dried up, the society had to relocate to an area where resources were plentiful. Consequently, hunting and gathering societies, which were typically small, were quite mobile. In some cases, where resources in a locale were extraordinarily plentiful, small villages might form. But most hunting and gathering societies were nomadic, moving constantly in search of food and water.

Labor in hunting and gathering societies was divided equally among members. Because of the mobile nature of the society, these societies stored little in the form of surplus goods. Therefore, anyone who could hunt, fish, or gather fruits and vegetables did so. These societies probably also had at least some division of labor based on gender. Males probably traveled long distances to hunt and capture larger animals. Females hunted smaller animals, gathered plants, made clothing, protected and raised children, and helped the males to protect the community from rival groups.

Hunting and gathering societies were also tribal. Members shared an ancestral heritage and a common set of traditions and rituals. They also sacrificed their individuality for the sake of the larger tribal culture.
Pastoral societies

Members of pastoral societies, which first emerged 12,000 years ago, pasture animals for food and transportation. Pastoral societies still exist today, primarily in the desert lands of North Africa where horticulture and manufacturing are not possible.

Domesticating animals allows for a more manageable food supply than do hunting and gathering. Hence, pastoral societies are able to produce a surplus of goods, which makes storing food for future use a possibility. With storage comes the desire to develop settlements that permit the society to remain in a single place for longer periods of time. And with stability comes the trade of surplus goods between neighboring pastoral communities.

Pastoral societies allow certain of their members (those who are not domesticating animals) to engage in nonsurvival activities. Traders, healers, spiritual leaders, craftspeople, and people with other specialty professions appear.
Horticultural societies

Unlike pastoral societies that rely on domesticating animals, horticultural societies rely on cultivating fruits, vegetables, and plants. These societies first appeared in different parts of the planet about the same time as pastoral societies. Like hunting and gathering societies, horticultural societies had to be mobile. Depletion of the land's resources or dwindling water supplies, for example, forced the people to leave. Horticultural societies occasionally produced a surplus, which permitted storage as well as the emergence of other professions not related to the survival of the society.
Agricultural societies

Agricultural societies use technological advances to cultivate crops (especially grains like wheat, rice, corn, and barley) over a large area. Sociologists use the phrase Agricultural Revolution to refer to the technological changes that occurred as long as 8,500 years ago that led to cultivating crops and raising farm animals. Increases in food supplies then led to larger populations than in earlier communities. This meant a greater surplus, which resulted in towns that became centers of trade supporting various rulers, educators, craftspeople, merchants, and religious leaders who did not have to worry about locating nourishment.

Greater degrees of social stratification appeared in agricultural societies. For example, women previously had higher social status because they shared labor more equally with men. In hunting and gathering societies, women even gathered more food than men. But as food stores improved and women took on lesser roles in providing food for the family, they became more subordinate to men.

As villages and towns expanded into neighboring areas, conflicts with other communities inevitably occurred. Farmers provided warriors with food in exchange for protection against invasion by enemies. A system of rulers with high social status also appeared. This nobility organized warriors to protect the society from invasion. In this way, the nobility managed to extract goods from the “lesser” persons of society.
Feudal societies

From the 9th to 15th centuries, feudalism was a form of society based on ownership of land. Unlike today's farmers, vassals under feudalism were bound to cultivating their lord's land. In exchange for military protection, the peasants provided food, crops, crafts, homage, and other services to the owner of the land. The caste system of feudalism was often multigenerational; the families of peasants may have cultivated their lord's land for generations.

Between the 14th and 16th centuries, a new economic system emerged that began to replace feudalism. Capitalism is marked by open competition in a free market, in which the means of production are privately owned. Europe's exploration of the Americas served as one impetus for the development of capitalism. The introduction of foreign metals, silks, and spices stimulated great commercial activity in Europe.
Industrial societies

Industrial societies are based on using machines (particularly fuel-driven ones) to produce goods. Sociologists refer to the period during the 18th century when the production of goods in mechanized factories began as the Industrial Revolution. The Industrial Revolution appeared first in Britain, and then quickly spread to the rest of the world.

As productivity increased, means of transportation improved to better facilitate the transfer of products from place to place. Great wealth was attained by the few who owned factories, and the “masses” found jobs working in the factories.

Industrialization brought about changes in almost every aspect of society. As factories became the center of work, "home cottages" as the usual workplace became less prevalent, as did the family's role in providing vocational training and education. Public education via schools and eventually the mass media became the norm. People's life expectancy increased as their health improved. Political institutions changed into modern models of governance. Cultural diversity increased, as did social mobility. Large cities emerged as places to find jobs in factories. Social power moved into the hands of business elites and governmental officials, leading to struggles between industrialists and workers. Labor unions and welfare organizations formed in response to these disputes and to concerns over workers' welfare, including that of the children who toiled in factories. Rapid changes in industrial technology also continued, especially the production of larger machines and faster means of transportation. The Industrial Revolution also saw the development of bureaucratic forms of organization, complete with written rules, job descriptions, impersonal positions, and hierarchical methods of management.
Postindustrial societies

Sociologists note that with the advent of the computer microchip, the world is witnessing a technological revolution. This revolution is creating a postindustrial society based on information, knowledge, and the selling of services. That is, rather than being driven by the factory production of goods, society is being shaped by the human mind, aided by computer technology. Although factories will always exist, the key to wealth and power seems to lie in the ability to generate, store, manipulate, and sell information.

Sociologists speculate about the characteristics of postindustrial society in the near future. They predict increased levels of education and training, consumerism, availability of goods, and social mobility. While they hope for a decline in inequality as technical skills and "know-how" come to determine class rather than the ownership of property, sociologists are also concerned about potential social divisions between those who have the appropriate education and those who do not. Sociologists believe society will become more concerned with the welfare of all of its members. They hope postindustrial society will be less characterized by social conflict, as everyone works together to solve society's problems through science.

Piaget's Model of Cognitive Development
Much of modern cognitive theory, including its relationship to socialization, stems from the work of the Swiss psychologist, Jean Piaget. In the 1920s Piaget observed children reasoning and understanding differently, depending on their age. He proposed that all children progress through a series of cognitive stages of development, just as they progress through a series of physical stages of development. According to Piaget, the rate at which children pass through these cognitive stages may vary, but they eventually pass through all of them in the same order.

Piaget introduced several other important concepts. According to Piaget, cognitive development occurs through two processes: adaptation and equilibrium. Adaptation involves the child's changing to meet situational demands, and comprises two sub-processes: assimilation and accommodation. Assimilation is the application of previous concepts to new objects or situations. An example is the child who refers to a whale as a "fish." Accommodation is the altering of previous concepts in the face of new information. An example is the child who discovers that some creatures living in the ocean are not fish, and then correctly refers to a whale as a "mammal." Equilibrium is the search for "balance" between self and the world, and involves matching the child's adaptive functioning to situational demands. Equilibrium keeps the infant moving along the developmental pathway, allowing him or her to make increasingly effective adaptations.

A brief summary of Piaget's four stages of cognitive development appears in Table 1.

TABLE 1 Piaget's Stages of Cognitive Development

Sensorimotor (birth to age 2): The child learns by doing: looking, touching, sucking. The child also has a primitive understanding of cause-and-effect relationships. Object permanence appears around 9 months.

Preoperational (ages 2–7): The child uses language and symbols, including letters and numbers. Egocentrism is also evident. Conservation marks the end of the preoperational stage and the beginning of concrete operations.

Concrete Operations (ages 7–11): The child demonstrates conservation, reversibility, serial ordering, and a mature understanding of cause-and-effect relationships. Thinking at this stage is still concrete.

Formal Operations (ages 12 and older): The individual demonstrates abstract thinking, including logic, deductive reasoning, comparison, and classification.

Cognitive Development: Age 0–6
During Piaget's sensorimotor stage (birth to age 2), infants and toddlers learn by doing: looking, hearing, touching, grasping, sucking. The process appears to begin with primitive “thinking” that involves coordinating movements of the body with incoming sensory data. As infants intentionally attempt to interact with the environment, they learn that certain actions lead to specific consequences. This is the beginning of the infants' understanding of cause-and-effect relationships.

Piaget referred to the cognitive development occurring between ages 2 and 7 as the preoperational stage. In this stage, children increase their use of language and other symbols, imitation of adult behaviors, and play. Young children develop a fascination with words—both "good" and "bad." They also play "pretend" games. Piaget also described this stage in terms of what children cannot do. He used the term operational to refer to reversible mental abilities that children have not yet developed. By reversible, Piaget meant actions that children perform in their minds but that can occur in either direction. Adding (3 + 3 = 6) and subtracting (6 − 3 = 3) are examples of reversible actions.

Piaget believed that egocentrism—the inability to distinguish between one's own point of view and those of others—limits preschoolers' cognitive abilities. The capacity for egocentricity exists at all stages of cognitive development, but it becomes particularly apparent during the preschool years. Young children eventually overcome this early form of egocentrism when they learn that others have different views, feelings, and desires. Then they can interpret others' motives, and use those interpretations to communicate mutually—and therefore more effectively—with others. Preschoolers eventually learn to adjust their vocal pitch, tone, and speed to match those of the listener. Because mutual communication requires effort and preschoolers are still egocentric, they may lapse into egocentric (non-mutual) speech during times of frustration. That is, children may regress to earlier behavioral patterns when their cognitive resources become stressed and overwhelmed.

Piaget also believed that young children cannot grasp the concept of conservation, which is the understanding that physical properties remain constant even as appearance and form change. They have trouble understanding that the same amount of liquid poured into containers of different shapes remains the same. A preoperational child will tell you that a short, fat bottle does not contain the same amount of liquid as a tall, skinny one. Similarly, a preoperational child will tell you that a handful of pennies is more money than a single five-dollar bill. According to Piaget, when children develop the cognitive capacity to conserve (at around age 7), they move into the next stage of development, concrete operations.

Cognitive Development: Age 7–11
Piaget referred to the cognitive development occurring between ages 7 and 11 as the concrete operations stage. While in concrete operations, children can think logically about real objects and events, but they cannot yet think abstractly. They are limited to thinking "concretely," or in tangible, definite, exact, and uni-directional terms based on real and concrete experiences rather than on logical abstractions. These children do not use "magical thinking," so they are not as easily misled as younger children.

Piaget noted that children's thinking changes significantly during the concrete operations stage. They can engage in classification, which is the ability to group according to features, and serial ordering, which is the ability to group according to logical progression. Older children come to understand cause-and-effect relationships, so they become adept at mathematics and science. They also comprehend the concept of stable identity—that “self” remains constant even when circumstances change. For example, older children know that their father maintains a male identity regardless of what he wears or how old he becomes.

In Piaget's view, children at the beginning of concrete operations do demonstrate conservation. Unlike preschoolers, school-age children understand that the same amount of clay molded into different shapes remains the same. Children in concrete operations have also advanced beyond the egocentrism of preschoolers. By the school years, children have usually learned that other people have their own views, feelings, and desires.

Cognitive Development: Age 12–19
Most adolescents reach Piaget's stage of formal operations (ages 12 and older), in which they develop new tools for manipulating information. Previously as children, they could only think concretely. But now in formal operations, they can think abstractly and deductively. Adolescents in this stage can also consider future possibilities, search for answers, deal flexibly with problems, test hypotheses, and draw conclusions about events they have not experienced first-hand.

Cognitive maturity occurs as the brain matures and the social network expands, offering more opportunities for experimenting with life. Because this worldly experience plays a large role in attaining formal operations, not all adolescents enter this stage of cognitive development. Studies indicate, however, that abstract and critical reasoning skills are teachable. For example, everyday reasoning improves between the first and last years of college, which suggests the value of education in cognitive maturation.

Social and Personality Growth: Age 0–2
During infancy and toddlerhood, children easily attach to others. They normally form their initial primary relationship with their parents and other family members. Because infants depend completely on their parents for food, clothing, warmth, and nurturing, Erik Erikson noted that the primary task during this first psychosocial stage of life is to learn to trust (rather than to mistrust) the caregivers. The child's first few years—including forming relationships and developing an organized sense of self—set the stage for both immediate and later psychosocial development, including the emergence of prosocial behavior, or the capacity to help, cooperate, and share with others. (Table 1 contrasts Erikson's model of psychosocial development with Sigmund Freud's model.)
TABLE 1 Contrasting Models of Psychosocial Development

Period (Age) | Freud's Stage | Erikson's Task or Crisis

Infancy (0–1) | Oral | Trust vs. mistrust

Toddlerhood and early childhood (1–3) | Anal | Autonomy vs. shame

Early childhood (3–6) | Phallic | Initiative vs. guilt

Middle childhood (7–11) | Latency | Industry vs. inferiority

Adolescence (12–19) | Genital | Identity vs. confusion

Early adulthood (20–45) | Genital | Intimacy vs. isolation

Middle adulthood (45–65) | Genital | Generativity vs. stagnation

Late adulthood (65+) | Genital | Integrity vs. despair

Personality includes those stable psychological characteristics that define each human being as unique. Both children and adults evidence personality traits (long-term characteristics, such as temperament) and states (changeable characteristics, such as moodiness). While considerable debate continues over the etiology of personality, most experts agree that personality traits and states form early in life. A combination of genetic, psychological, and social influences likely shapes the formation of personality.

Infants are typically egocentric, or self-centered. They primarily concern themselves with satisfying their physical desires (for example, hunger), which psychoanalyst Sigmund Freud theorized is a form of self-pleasuring. Because infants are particularly interested in activities involving the mouth (sucking, biting), Freud labeled the first year of life as the oral stage of psychosexual development. (Freud's model of psychosexual development appears in Table 1 .)

According to Freud, too little or too much stimulation of a particular erogenous zone (sensitive area of the body) at a particular psychosexual stage of development leads to fixation (literally, being “stuck”) at that stage. Multiple fixations are possible at multiple stages. In the case of infants, fixation at the oral stage gives rise to adult personality traits centered around the mouth. Adult “oral focused habits” may take the form of overeating, drinking, and smoking. Adults are especially prone to “regressing” to such childhood fixation behaviors during times of stress and upset.

Theorists after Freud have offered additional perspectives on infant personality development. Perhaps the most important of these is Melanie Klein's object-relations theory. According to Klein, the inner core of personality stems from the early relationship with the mother. While Freud speculated that the child's fear of a powerful father determines personality, Klein speculated that the child's need for a powerful mother plays a more important role. In other words, the child's fundamental human drive is to be in relationship with others, of whom the mother is usually the first.

Klein argued that infants bond to objects rather than to people, because an infant cannot fully understand what a person is. The infant's very limited perspective can process only an evolving perception of what a person is.

In object-relations theory, girls adjust better psychosocially than boys. Girls become extensions of the mother; they do not need to separate. Boys, on the other hand, must separate from the mother to become independent. This contrasts with Freud's theory, in which boys develop a stronger superego (conscience) than girls do because boys have a penis and girls do not. Therefore, boys more easily resolve their Oedipal conflict (attraction to the female parent) than girls do their Electra conflict (attraction to the male parent).
Family relationships in infancy and toddlerhood

A baby's first relationships are with family members, to whom the infant expresses a range of emotions (and vice versa). If the social and emotional bonding fails in some way, the child may never develop the trust, self-control, or emotional reasoning necessary to function effectively in the world. The quality of the relationship between child and parents—especially between months 6 and 18—seems to determine the quality of the child's later relationships.

If physical contact between infant and parents plays such a vital role in the emotional health of the infant, and is important to the parents as well, when should such contact begin? Most experts recommend that physical contact occur as soon as possible after delivery. Studies show that babies who receive immediate maternal contact seem to cry less and are happier and more secure than babies who do not. Immediate bonding is optimal, but infants and parents can later make up for an initial separation.

Attachment is the process whereby one individual seeks nearness to another individual. In parent-child interactions, attachment is mutual and reciprocal. The infant looks and smiles at the parents, who look and smile at the infant. Communication between child and parents is indeed basic at this level, but it is also profound.

Psychologist John Bowlby suggested that infants are born “preprogrammed” for certain behaviors that will guarantee bonding with the caregivers. The infant's crying, clinging, smiling, and “cooing” are designed to prompt parental feeding, holding, cuddling, and vocalizing. Parents can help instill trust in their infant as the child forms attachments. Eye contact, touching, and timely feedings are perhaps the most important ways. These, of course, also represent expressions of the love and affection parents have for their children.

Attachment is central to human existence, but so are separation and loss. Ultimately, relationships are interrupted, or they dissolve on their own. Children must learn that nothing human is permanent, though learning this concept is not as easy as it may first sound. According to Bowlby, children who are separated from their parents progress through three stages: protest, despair, and detachment. After first refusing to accept the separation, and then losing hope, the child finally accepts the separation and begins to respond to the attention of new caregivers.

Social deprivation, or the absence of attachment, produces profoundly negative effects on children. For instance, children who have been institutionalized without close or continuous attachments for long periods of time display pathological levels of depression, withdrawal, apathy, and anxiety.
Parenting in infancy and toddlerhood

Cultural and community standards, the social environment, and their children's behavior determine parents' child-raising practices. Hence different parents have different ideas on responding to their children, communicating with them, and placing them into daycare.

Responding (for example, playing, vocalizing, feeding, touching) to an infant's needs is certainly important to the child's psychosocial development. In fact, children who display strong attachment tend to have highly responsive mothers. Does this mean that the caregivers should respond to everything an infant does? Probably not. Children must learn that all needs cannot be completely met all the time. The majority of caregivers respond most of the time to their infants, but not 100 percent of the time. Problems only seem to arise when primary caregivers respond to infants less than 25 percent of the time. The children of “nonresponding” mothers tend to be insecurely attached, which may lead to simultaneous over-dependence upon and rejection of authority figures later in adulthood.

Strong communication between parents and children leads to strong attachment and relationships. Mutuality, or “synchronous” interaction, particularly during the first few months, predicts a secure relationship between parents and infants. Mutual behaviors include taking turns approaching and withdrawing, looking and touching, and “talking” to each other.

With the first few months and years being so critical to children's future psychosocial development, some parents worry about having to place their infants and toddlers in daycare and preschool. Research suggests that children who attend daycare while both parents work are not at a disadvantage regarding development of self, prosocial behavior, or cognitive functioning. Many authorities argue that daycare, coupled with quality time with the parents whenever possible, provides better and earlier socialization than may otherwise occur.

Social and Personality Growth: Age 3–6
During early childhood, children gain some sense of being separate and independent from their parents. According to Erikson, the task of preschoolers is to develop autonomy, or self-direction (ages 1–3), as well as initiative, or enterprise (ages 3–6).

According to Freud, children in the second year of life enter the anal stage of psychosexual development, when parents face many new challenges while toilet training their children. Fixations at this stage give rise to the characteristic personality traits of anal retention (excessive neatness, organization, and withholding) or anal expulsion (messiness and altruism), which fully emerge in adulthood.

Family relationships are critical to the physical, mental, and social health of growing preschoolers. Many aspects of the family, such as parenting techniques, discipline, the number and the birth order of siblings, the family's finances, the family's circumstances, the family's health, and more, contribute to young children's psychosocial development.
Parenting in early childhood

Different parents employ different parenting techniques. Which parents choose to use which techniques depends on cultural and community standards, the situation, and their children's behavior at the time. Parental control involves the degree to which parents are restrictive in their use of parenting techniques, while parental warmth involves the degree to which they are loving, affectionate, and approving in their use of these techniques.


Authoritarian parents demonstrate high parental control and low parental warmth when parenting.

Permissive parents demonstrate high parental warmth and low parental control when parenting.

Indifferent parents demonstrate low parental control and low warmth.

Authoritative parents demonstrate appropriate levels of both parental control and warmth.

The willingness of parents to negotiate common goals with their children is highly desirable. This does not imply, however, that everything within a family system is negotiable. Neither parents nor their children should be "in charge" all of the time; letting either side dominate can lead to unhealthy power struggles within the family. Parental negotiating teaches children that quality relationships can be equitable, or equal in terms of sharing rights, responsibilities, and decision-making. Most negotiating home environments are warm, accommodating, and mutually supportive.
Siblings in early childhood

Siblings form a child's first and foremost peer group. Preschoolers may learn as much or more from their siblings as from their parents. Regardless of age differences, sibling relationships mirror other social relationships, amounting to a type of basic preparation for dealing with people outside of the home. Only brothers and sisters can simultaneously have equal and unequal status in the home, and only they can provide opportunities (whether desired or not) to practice coping with the positives and negatives of human relationships.

Are “only children” (those without siblings) at a developmental disadvantage? No. Research confirms that “onlies” perform just as well as, if not better than, other children on measures of personality, intelligence, and achievement. One explanation is that, like children who are first in the birth order, “only children” may receive the undivided (or nearly undivided) attention of their parents, who in turn have more time to read to them, take them to museums, and encourage them to excel.
Friends and playmates in early childhood

Early family attachments may determine the ease with which children form friendships and other relationships. Children who have loving, stable, and accepting relationships with their parents and siblings are generally more likely to find the same in friends and playmates.

First friends appear at about age 3, though preschoolers may play together long before that age. Much like adults, children tend to develop friends who share common interests, are likable, offer support, and are similar in size and looks.

Childhood friends offer opportunities to learn how to handle anger-provoking situations, to share, to learn values, and to practice more “grown-up” behaviors. Preschoolers who are popular with their peers excel at these activities. Those who are not popular may benefit from adult interventions that encourage them to be less shy and more social.

Social and Personality Growth: Age 7–11
Erikson's primary developmental task of middle childhood is to attain industry, or the feeling of social competence. Competition (for example, athletics and daredevil activities) and numerous social adjustments (trying to make and keep friends) mark this developmental stage. Successfully developing industry helps the child build self-esteem, which in turn builds the self-confidence necessary to form lasting and effective social relationships.
Self-concept in middle childhood

Most boys and girls in middle childhood develop a positive sense of self-understanding, self-definition, and self-control, especially when their parents, teachers, and friends demonstrate regard for and emotionally support them, and when children themselves feel competent. When lacking in one social area, children in this age group typically find another area in which to excel, which contributes to an overall sense of self-esteem and belonging in the social world. For example, a child who does not like math may take up the piano as a hobby. The more positive experiences children have excelling, the more likely they will develop the self-confidence necessary to confront new social challenges. Self-esteem, self-worth, self-regulation, and self-confidence ultimately form the child's self-concept.
Social cognition in middle childhood

As children grow up, they improve in their use of social cognition, or experiential knowledge and understanding of society and the “rules of life.” They also improve in their use of social inferences, or assumptions about the nature of social relationships and processes, as well as of others' feelings. Peer relationships play a major role in fine-tuning social cognition in middle childhood. Members of a child's peer group typically come from the same race and socio-economic status.

Noncompetitive activities among peers help children to develop quality relationships, while competitive ones help them to discover unique aspects of themselves. Thus, as children in middle childhood interact with their peers, they learn trust and honesty, as well as how to have rewarding social relationships. Eventually, teenagers' social cognition comes to fruition as they form long-term relationships based on trust. Throughout these experiences, children come to grips with the world as a social environment with regulations. In time they become better at predicting what is socially appropriate and workable, as well as what is not.
Family relationships in middle childhood

Even though school-age children spend more time away from home than they did as younger children, their most important relationships remain in the home. These children normally enjoy the company of their parents, grandparents, siblings, and extended family members.

Middle childhood is a transitional stage—a time of sharing power and decision-making with the parents. Yet parents must continue to establish rules and define boundaries because children have only limited experiences upon which to draw when dealing with adult situations and issues.

This period is also a time of increased responsibility for children. In addition to allowing increased freedom (such as going unsupervised to the Saturday afternoon movies with their peers), parents may assign their children additional household chores (watching their younger siblings after school while the parents work). The majority of school-age children appreciate their parents' acceptance of their more “adult-like” role in the family.

Discipline, while not necessarily synonymous with punishment, remains an issue in middle childhood. The question, which has been debated in social science circles for decades, becomes one of discipline's role in teaching children values, morals, integrity, and self-control. Most authorities today agree that punishment is probably of less value than positive reinforcement, or rewarding acceptable behaviors. Some parents choose to use both discipline and positive reinforcement techniques with their children.

Most families today require two incomes to make ends meet. Consequently, some children express negative feelings about being “latchkey kids” while both parents work. Children may question why their parents “choose” to spend so little time with them. Or they may become resentful at not being greeted after school by one or both parents. Straightforward and honest communication between parents and children can do much to alleviate any concerns or upset that may arise. Parents can remind their children that the quality of time spent together is more important than the quantity of time.
Friends and playmates in middle childhood

Friendships, especially same-gender ones, are prevalent during middle childhood. Friends serve as classmates, comrades, fellow adventurers, confidantes, and “sounding boards.” They also help each other to develop self-esteem and a sense of competency in the social world. As boys and girls progress through middle childhood, their peer relationships take on greater importance. This means that older children likely enjoy group activities such as skating, riding bikes, playing house, and building forts. This also means popularity and conformity become the focus of intense concern and even worry.

Like peer relationships generally, friendships in middle childhood are based mostly on similarity and may or may not be affected by awareness of racial or other differences. Intolerance of those who are dissimilar leads to prejudice, or negative perceptions of those who are different. Although peers and friends may reinforce prejudicial stereotypes, many children eventually become less rigid in their thinking about children from different backgrounds.

Many sociologists consider peer pressure a negative consequence of peer friendships and relationships. Those children most susceptible to peer pressure typically have low self-esteem. They in turn adopt the group's “norms” as their own in an attempt to enhance their self-esteem. When children cannot resist the influence of their peers, particularly in ambiguous situations, they may begin smoking, drinking, stealing, or lying if their peers encourage such behaviors.

Social and Personality Growth: Age 12–19
Adolescence is the period of transition between childhood and adulthood. Social scientists have traditionally viewed adolescence as a time of psychosocial “storm and stress”—of bearing the burdens of wanting to be an adult long before becoming one. Sociologists today are more likely to view adolescence as a positive time of opportunities and growth, as most adolescents traverse this transition without serious problems or rifts with parents.

Freud called the period of psychosexual development beginning with puberty the genital stage. During this stage sexual development reaches adult maturity, resulting in a healthy ability to love and work if the individual has successfully progressed through the previous stages. Because Freud, like other early pioneers of developmental theory, concerned himself chiefly with childhood, he treated the genital stage as encompassing all of adulthood and described no special difference between the adolescent and adult years.

In contrast, Erikson noted that the chief conflict facing the adolescent at this stage is one of identity versus identity confusion. Hence, the adolescent is posed with the psychosocial task of developing individuality. To form an identity, adolescents must define personal roles in society and integrate the various dimensions of their personalities into a sensible whole. They must wrestle with such issues as selecting a career, college, religious system, and political party.

Researchers Carol Gilligan and Deborah Tannen have found differences in the ways in which males and females achieve identity. Gilligan has noted that females seek intimate relationships, while males seek independence and achievement. Deborah Tannen has explained these differences as being due, at least in part, to the dissimilar ways in which parents socialize males and females.

The hormonal changes of puberty affect the emotions of adolescents. Along with emotional and sexual fluctuations comes the need for adolescents to question authority and societal values, as well as to test limits within existing relationships. These needs become readily apparent within the family system, where adolescents' desire for independence from parents and siblings can cause a great deal of conflict and tension at home.

Societal mores and expectations during adolescence restrain the curiosity so characteristic of young children, even though peer pressure to try new things and behave in certain ways is also very powerful. Additionally, teenagers experience a growing desire for personal responsibility and independence from their parents, along with an ever-growing, irresistible interest in sexuality.

Social Groups
Social groups and organizations comprise a basic part of virtually every arena of modern life. Thus, in the last 50 years or so, sociologists have taken a special interest in studying these phenomena from a scientific point of view.

A social group is a collection of people who interact with each other and share similar characteristics and a sense of unity. A social category is a collection of people who do not interact but who share similar characteristics. For example, women, men, the elderly, and high school students all constitute social categories. A social category can become a social group when the members in the category interact with each other and identify themselves as members of the group. In contrast, a social aggregate is a collection of people who are in the same place, but who do not interact or share characteristics.

Psychologists Muzafer and Carolyn Sherif, in a classic experiment in the 1950s, divided a group of 12-year-old white, middle-class boys at a summer camp into the “Eagles” and the “Rattlers.” At first, when the boys did not know one another, they formed a common social category as summer campers. But as time passed and they began to consider themselves to be either Eagles or Rattlers, these 12-year-old boys formed two distinct social groups.

In-groups, out-groups, and reference groups
In the Sherifs' experiment, the youngsters also erected artificial boundaries between themselves. They formed in-groups (to which loyalty is expressed) and out-groups (to which antagonism is expressed).

To some extent every social group creates boundaries between itself and other groups, but a cohesive in-group typically has three characteristics:


Members use titles, external symbols, and dress to distinguish themselves from the out-group.

Members tend to clash or compete with members of the out-group. This competition with the other group can also strengthen the unity within each group.

Members apply positive stereotypes to their in-group and negative stereotypes to the out-group.

In the beginning, the Eagles and Rattlers were friendly, but soon their games evolved into intense competitions. The two groups began to call each other names, and they raided each other's cabins, hazed one another, and started fights. In other words, loyalty to the in-group led to antagonism and aggression toward the out-group, including fierce competitions for the same resources. Later in the same experiment, though, Sherif had the boys work together to solve mutual problems. When they cooperated with one another, the Eagles and Rattlers became less divided, hostile, and competitive.

People may form opinions or judge their own behaviors against those of a reference group (a group used as a standard for self-appraisals). Parishioners at a particular church, for instance, may evaluate themselves by the standards of a denomination, and then feel good about adhering to those standards. Such positive self-evaluation reflects the normative effect that a reference group has on its own members, as well as those who compare themselves to the group. Still, reference groups can have a comparison effect on self-evaluations. If most parishioners shine in their spiritual accomplishments, then the others will probably compare themselves to them. Consequently, the “not-so-spiritual” parishioners may form a negative self-appraisal for not feeling “up to par.” Thus, reference groups can exert a powerful influence on behavior and attitudes.

Primary and secondary groups
Groups play a basic role in the development of the social nature and ideals of people. Primary groups are those in which individuals intimately interact and cooperate over a long period of time. Examples of primary groups are families, friends, peers, neighbors, classmates, sororities, fraternities, and church members. These groups are marked by primary relationships in which communication is informal. Members of primary groups have strong emotional ties. They also relate to one another as whole and unique individuals.

In contrast, secondary groups are those in which individuals do not interact much. Relationships among members of secondary groups are less personal and emotional than those within primary groups. These groups are marked by secondary relationships in which communication is formal. Members of secondary groups may not know each other or have much face-to-face interaction. They tend to relate to others only in particular roles and for practical reasons. An example of a secondary relationship is that of a stockbroker and her clients. The stockbroker likely relates to her clients in terms of business only. She probably will not socialize with her clients or hug them.

Primary relationships are most common in small and traditional societies, while secondary relationships are the norm in large and industrial societies. Because secondary relationships often result in loneliness and isolation, some members of society may attempt to create primary relationships through singles' groups, dating services, church groups, and communes, to name a few. This does not mean, however, that secondary relationships are bad. For most Americans, time and other commitments limit the number of possible primary relationships. Further, acquaintances and friendships can easily spring forth from secondary relationships.

Small groups
A group's size can also determine how its members behave and relate. A small group is small enough to allow all of its members to directly interact. Examples of small groups include families, friends, discussion groups, seminar classes, dinner parties, and athletic teams. People are more likely to experience primary relationships in small group settings than in large settings.

The smallest of small groups is a dyad consisting of two people. A dyad is perhaps the most cohesive of all groups because of its potential for very close and intense interactions. It also runs the risk, though, of splitting up. A triad is a group consisting of three persons. A triad does not tend to be as cohesive and personal as a dyad.

The more people who join a group, the less personal and intimate that group becomes. In other words, as a group increases in size, its members participate and cooperate less, and are more likely to be dissatisfied. A larger group's members may even be inhibited, for example, from publicly helping out victims in an emergency. In this case, people may feel that because so many others are available to help, responsibility to help is shifted to others. Similarly, as a group increases in size, its members are more likely to engage in social loafing, in which people work less because they expect others to take over their tasks.

Leadership and conformity
Sociologists have been especially interested in two forms of group behavior: conformity and leadership.

The pressure to conform within small groups can be quite powerful. Many people go along with the majority regardless of the consequences or their personal opinions. Nothing makes this phenomenon more apparent than Solomon Asch's classic conformity experiments of the 1950s.

Asch assembled several groups of student volunteers and then asked the subjects which of three lines on a card matched the length of a line on another card. Each student group contained only one actual subject; the others were Asch's secret accomplices, whom he had instructed to give the same, absurdly wrong, answer. Asch found that almost one-third of the subjects changed their minds and accepted the majority's incorrect answer.

The pressure to conform is even stronger among people who are not strangers. During groupthink, members of a cohesive group endorse a single explanation or answer, usually at the expense of ignoring reality. The group does not tolerate dissenting opinions, seeing them as signs of disloyalty to the group. So members with doubts and alternate ideas do not speak out or contradict the leader of the group, especially when the leader is strong-willed. Groupthink decisions often prove disastrous, as when President Kennedy and his top advisors endorsed the CIA's plan for the 1961 Bay of Pigs invasion of Cuba. In short, collective decisions tend to be more effective when members disagree while considering additional possibilities.

Two types of leaders normally emerge from small groups. Expressive leaders are affiliation motivated. That is, they maintain warm, friendly relationships. They show concern for members' feelings and group cohesion and harmony, and they work to ensure that everyone stays satisfied and happy. Expressive leaders tend to prefer a cooperative style of management. Instrumental leaders, on the other hand, are achievement motivated. That is, they are interested in achieving goals. These leaders tend to prefer a directive style of management. Hence, they often make good managers because they “get the job done.” However, they can annoy and irritate those under their supervision.

Social Organizations
Secondary groups are diverse. Some are large and permanent; others are small and temporary. Some are simple; others are complex. Some have written rules; others do not.

Colleges, businesses, political parties, the military, universities, and hospitals are all examples of formal organizations, which are secondary groups that have goal-directed agendas and activities. In contrast to formal organizations, the informal relations among workers comprise informal organizations. Studies have clearly shown that quality informal relations improve satisfaction on the job and increase workers' productivity. However, professionals seem to place more importance on their relations with their co-workers than blue-collar workers do, perhaps because professionals' jobs require more interaction with co-workers.

Goals help to define organizations and what they do, as well as provide standards for measuring efficiency, performance, and success in meeting specific objectives. Whereas most organizations cease to exist if they do not attain their goals, others thrive precisely because their goals can never be fully met. For example, social service agencies continue to function because they never run out of clients.

Organizations use rational planning to achieve their goals. They identify needs, generate alternatives, decide on goals, figure the most effective means to achieve the goals, decide who is best capable of achieving the goals, and then implement a specific plan of action. All of this usually requires strict adherence to policies, which can make large organizations seem businesslike and removed.

Organizational Models
Like groups in general, formal organizations are everywhere. Thus, sociologists have been quite interested in studying how organizations work. To learn more about how organizations operate effectively, sociologists develop organizational models. Some models describe the actual characteristics of organizations, while others describe the ideal characteristics for achieving their goals. To date, no single model has adequately described the fully complex nature of organizations in general.

Bureaucratic organizations
Max Weber noted that modern Western society has necessitated a certain type of formal organization: bureaucracy. According to Weber, who believed that bureaucracy is the most efficient form of organization possible, the essential characteristics of a bureaucracy include


Written regulations and rules, which maximize bureaucratic operations and efficiency.

A highly defined hierarchy of authority, in which those higher in the hierarchy give orders to those lower in the hierarchy. Those who work in bureaucratic settings are called bureaucrats.

Bureaucratic authority resting in various offices or positions, not in individuals.

Employees being hired based on technical know-how and performance on entry examinations.

Formal and impersonal record keeping and communications within the organization.

A paid administrative staff.

Although a bureaucracy itself may be specialized and impersonal, bureaucrats still retain their humanity. Within any bureaucracy, informal relationships invariably form, which can increase worker satisfaction, but only to a point. Informal groups can become disruptive to the efficiency of the bureaucracy.

Critics of Weber note that bureaucracy can also promote inefficiency. A bureaucracy can only formulate rules based on what it knows or expects. Sometimes novel situations or extenuating circumstances arise that the rules do not cover. When the unusual happens, rules may not be of much help.

Collectivist organizations
Unlike Weber, Karl Marx argued that capitalists use bureaucracies to exploit the working class. Marx predicted that bureaucracies would eventually disappear in a communist (classless) society, and that collectivist organizations, in which supervisors and workers function as equals for equal wages, would replace the bureaucracies. A variation of the collective organizational model has been tried in China, but with limited success. Critics note that collectivist organizations do not work because “leader” and “followers” inevitably emerge when groups of people are involved.

Pros and Cons of Bureaucracy
Even though many Americans dislike bureaucracy, this organizational model prevails today. Whether or not they wish to admit it, most Americans either work in bureaucratic settings, or at least deal with them daily in schools, hospitals, government, and so forth. Hence, taking a closer look at the pros and cons of bureaucracy is important.

Pros of bureaucracy
Although the vices of bureaucracy are evident (and are discussed in the next section), this form of organization is not totally bad. In other words, benefits to the proverbial “red tape” associated with bureaucracy do exist. For example, bureaucratic regulations and rules help ensure that the Food and Drug Administration (FDA) takes appropriate precautions to safeguard the health of Americans when it is in the process of approving a new medication. And the red tape documents the process so that, if problems arise, data exist for analysis and correction.

Likewise, the impersonality of bureaucracies can have benefits. For example, an applicant must submit a great deal of paperwork to obtain a government student loan. However, this lengthy—and often frustrating—process promotes equal treatment of all applicants, meaning that everyone has a fair chance to gain access to funding. Bureaucracy also discourages favoritism, meaning that in a well-run organization, friendships and political clout should have no effect on access to funding.

Bureaucracies may have positive effects on employees. Whereas the stereotype of bureaucracies is one of suppressed creativity and extinguished imagination, this is not the case. Social research shows that many employees intellectually thrive in bureaucratic environments. According to this research, bureaucrats have higher levels of education, intellectual activity, personal responsibility, self-direction, and open-mindedness, when compared to non-bureaucrats.

Another benefit of bureaucracies for employees is job security, including a steady salary and other perks, such as insurance, medical and disability coverage, and a retirement pension.

Cons of bureaucracy
Americans rarely have anything good to say about bureaucracies, and their complaints may hold some truth. As noted previously, bureaucratic regulations and rules are not very helpful when unexpected situations arise. Bureaucratic authority is notoriously undemocratic, and blind adherence to rules may inhibit the exact actions necessary to achieve organizational goals.

Concerning this last point, one of bureaucracy's least-appreciated features is its proneness to creating “paper trails” and piles of rules. Governmental bureaucracies are especially known for this. Critics of bureaucracy argue that mountains of paper and rules only slow an organization's capacity to achieve stated goals. They also note that governmental red tape costs taxpayers both time and money. Parkinson's Law and the Peter Principle have been formulated to explain how bureaucracies become dysfunctional.

Parkinson's Law, named after historian C. Northcote Parkinson, states that work expands to fill the time available for its completion. Parkinson also observed that bureaucracies always grow—typically about 6 percent annually. Managers wish to appear busy, so they increase their workload by creating paper and rules, filling out evaluations and forms, and filing. Then they hire more assistants, who in turn require more managerial time for supervision. Moreover, many bureaucratic budgets rely on the “use it or lose it” principle, meaning the current year's expenditures determine the following year's budget. This provides a strong incentive to spend (even waste) as much money as possible to guarantee an ever-increasing budget. Parkinson's views remain consistent with those of conflict theorists, who hold that bureaucratic growth serves only the managers, who in turn use their increasing power to control the workers.
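Parkinson's figure of roughly 6 percent annual growth compounds quickly. A back-of-the-envelope sketch (the starting staff of 100 is a hypothetical figure, not from the text) shows such a bureaucracy doubling in about a dozen years:

```python
# Compound growth at Parkinson's roughly 6 percent per year:
# how many years until a bureaucracy doubles in size?
# (The starting staff of 100 is purely illustrative.)

def years_to_double(rate: float, start: float = 100.0) -> int:
    staff = start
    years = 0
    while staff < 2 * start:
        staff *= 1 + rate
        years += 1
    return years

print(years_to_double(0.06))  # 12 — a 6% annual rate doubles staff in 12 years
```

At that pace, an agency of 100 employees would exceed 400 within a quarter century, which is why critics treat even modest annual growth rates as a structural problem rather than a rounding error.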

Approaching bureaucracies from yet another angle, the Peter Principle, named after educator Laurence Peter, states that employees in a bureaucracy are promoted to the level of their incompetence. In other words, competent employees continually receive promotions until they attain positions in which they are incompetent, and they usually remain in those positions until they retire or die. The bureaucracy continues to function only because competent employees are constantly working their way up the hierarchical ladder.

Parkinson's Law and the Peter Principle, while fascinating social phenomena, are based on stereotypes and anecdotes rather than on rigorous social science research.

Theories of Deviance
Deviance is any behavior that violates social norms, and is usually of sufficient severity to warrant disapproval from the majority of society. Deviance can be criminal or non-criminal. The sociological discipline that deals with crime (behavior that violates laws) is criminology, a field closely related to criminal justice. Today, Americans consider such activities as alcoholism, excessive gambling, being nude in public places, playing with fire, stealing, lying, refusing to bathe, purchasing the services of prostitutes, and cross-dressing—to name only a few—as deviant. People who engage in deviant behavior are referred to as deviants.
The concept of deviance is complex because norms vary considerably across groups, times, and places. In other words, what one group may consider acceptable, another may consider deviant. For example, in some parts of Indonesia, Malaysia, and Muslim Africa, women are circumcised. Termed clitoridectomy and infibulation, this process involves cutting off the clitoris and/or sewing shut the labia, usually without any anesthesia. In the United States, where the practice is known as female genital mutilation, it is considered unthinkable: usually performed in unsanitary conditions that often lead to infection, it serves as a blatantly oppressive tactic to prevent women from experiencing sexual pleasure.

A number of theories related to deviance and criminology have emerged within the past 50 years or so. Four of the most well-known follow.

Differential-association theory
Edwin Sutherland coined the phrase differential association to address the issue of how people learn deviance. According to this theory, the environment plays a major role in deciding which norms people learn to violate. Specifically, people within a particular reference group provide norms of conformity and deviance, and thus heavily influence the way other people look at the world, including how they react. People also learn their norms from various socializing agents—parents, teachers, ministers, family, friends, co-workers, and the media. In short, people learn criminal behavior, like other behaviors, from their interactions with others, especially in intimate groups.

The differential-association theory applies to many types of deviant behavior. For example, juvenile gangs provide an environment in which young people learn to become criminals. These gangs define themselves as countercultural and glorify violence, retaliation, and crime as means to achieving social status. Gang members learn to be deviant as they embrace and conform to their gang's norms.

Differential-association theory has contributed to the field of criminology in its focus on the developmental nature of criminality. People learn deviance from the people with whom they associate. Critics of the differential-association theory, on the other hand, claim the vagueness of the theory's terminology does not lend itself to social science research methods or empirical validation.

Anomie theory
Anomie refers to the confusion that arises when social norms conflict or don't even exist. In the 1930s, Robert Merton used the term to describe the gap between socially accepted goals and the availability of means to achieve those goals. Merton stressed, for instance, that attaining wealth is a major goal of Americans, but not all Americans possess the means to do this, especially members of minority and disadvantaged groups. Those who find the “road to riches” closed to them experience anomie, because an obstacle has thwarted their pursuit of a socially approved goal. When this happens, these individuals may employ deviant behaviors to attain their goals, retaliate against society, or merely “make a point.”

The primary contribution of anomie theory is its ability to explain many forms of deviance. The theory is also sociological in its emphasis on the role of social forces in creating deviance. On the negative side, anomie theory has been criticized for its generality. Critics note the theory's lack of statements concerning the process of learning deviance, including the internal motivators for deviance. Like differential association theory, anomie theory does not lend itself to precise scientific study.

Control theory
According to Walter Reckless's control theory, both inner and outer controls work against deviant tendencies. People may want—at least some of the time—to act in deviant ways, but most do not. They have various restraints: internal controls, such as conscience, values, integrity, morality, and the desire to be a “good person”; and outer controls, such as police, family, friends, and religious authorities. Travis Hirschi noted that these inner and outer restraints form a person's self-control, which prevents acting against social norms. The key to developing self-control is proper socialization, especially early in childhood. Children who lack this self-control, then, may grow up to commit crimes and other deviant behaviors.

A related perspective, conflict theory, suggests that the people society labels as “criminals” are probably members of subordinate groups; critics argue that this oversimplifies the situation. As examples, they cite wealthy and powerful businesspeople, politicians, and others who commit crimes. Critics also argue that conflict theory does little to explain the causes of deviance. Proponents counter, however, by asserting that the theory does not attempt to delve into etiologies. Instead, the theory does what it claims to do: It discusses the relationships between socialization, social controls, and behavior.

Labeling theory
A type of symbolic interaction, labeling theory concerns the meanings people derive from one another's labels, symbols, actions, and reactions. This theory holds that behaviors are deviant only when society labels them as deviant. As such, conforming members of society, who interpret certain behaviors as deviant and then attach this label to individuals, determine the distinction between deviance and non-deviance. Labeling theory questions who applies what label to whom, why they do this, and what happens as a result of this labeling.

Powerful individuals within society—politicians, judges, police officers, medical doctors, and so forth—typically impose the most significant labels. Labeled persons may include drug addicts, alcoholics, criminals, delinquents, prostitutes, sex offenders, people with intellectual disabilities, and psychiatric patients, to mention a few. The consequences of being labeled as deviant can be far-reaching. Social research indicates that those who have negative labels usually have lower self-images, are more likely to reject themselves, and may even act more deviantly as a result of the label. Unfortunately, people who accept the labeling of others—be it correct or incorrect—have a difficult time changing their opinions of the labeled person, even in light of evidence to the contrary.

In 1973, William Chambliss conducted a classic study of the effects of labeling. His two groups of white, male, high-school students were both frequently involved in delinquent acts of theft, vandalism, drinking, and truancy. The police never arrested the members of one group, which Chambliss labeled the “Saints,” but the police did have frequent run-ins with members of the other group, which he labeled the “Roughnecks.” The boys in the Saints came from respectable families, had good reputations and grades in school, and were careful not to get caught when breaking the law. By being polite, cordial, and apologetic whenever confronted by the police, the Saints escaped being labeled “deviants.” In contrast, the Roughnecks came from families of lower socioeconomic status, had poor reputations and grades in school, and were not careful about being caught when breaking the law. By being hostile and insolent whenever confronted by the police, the Roughnecks were easily labeled by others and themselves as “deviants.” In other words, while both groups committed crimes, the Saints were perceived to be “good” because of their polite behavior (which was attributed to their upper-class backgrounds) and the Roughnecks were seen as “bad” because of their insolent behavior (which was attributed to their lower-class backgrounds). As a result, the police always took action against the Roughnecks, but never against the Saints.

Proponents of labeling theory support the theory's emphasis on the role that the attitudes and reactions of others, not deviant acts per se, have on the development of deviance. Critics of labeling theory indicate that the theory only applies to a small number of deviants, because such people are actually caught and labeled as deviants. Critics also argue that the concepts in the theory are unclear and thus difficult to test scientifically.

Defining Crime
Any discussion of deviance remains incomplete without a discussion of crime, which is any act that violates written criminal law. Society sees most crimes, such as robbery, assault, battery, rape, murder, burglary, and embezzlement, as deviant. But some crimes, such as those committed in violation of “blue laws” against selling merchandise on Sundays, are not deviant at all. Moreover, not all deviant acts are criminal. For example, a person who hears voices that are not there is deviant but not criminal.

A society's criminal justice system punishes crimes. Punishment becomes necessary when criminal acts are so disruptive as to interfere with society's normal functioning.
Limitations of criminal statistics

The FBI's annual Uniform Crime Report contains the official crime statistics drawn throughout the United States. Significant biases exist in the reporting and collecting of crime data, and problems occur when people interpret these criminal statistics. Some of these biases include the following:


Many crimes in the United States go unreported, which makes the validity of crime statistics limited at best.

Victims are often unwilling to cooperate with authorities.

Complaints do not always translate into reported crimes. That is, while some victims of crime may complain to police, this does not mean that their complaint ends up reported in the Uniform Crime Report.

Many people do not know how to properly interpret social science statistical data, including criminal statistics. For example, one very common error is inferring cause and effect from correlational data.

White-collar crime, committed by high-status individuals during the course of business, tends not to appear in the Uniform Crime Report. Typical white-collar crimes include embezzlement, bribery, criminal price-fixing, insurance fraud, Medicare theft, and so forth.

Some police and government officials exaggerate or downplay criminal statistics for political purposes. An incumbent politician may report “less crime” statistics in a re-election campaign, while a social service agency may report “more crime” statistics in a proposal for funding.

Types of crime

The types of crimes committed are as varied as the types of criminals who commit them. Most crimes fall into one of two categories: crimes against people and crimes against property.

Crimes against People
The category of crimes against people includes such crimes as murder, rape, assault, child abuse, and sexual harassment. Violent crimes reported to the police take place on average once every 20 to 30 seconds in the United States. Thus, the chances of being the victim of some form of violent crime in this country are disturbingly high.
Murder or homicide

Most Americans fear murder, which happens to about one in every 10,000 inhabitants. Murders usually occur in the midst of everyday routines and activities. In fact, most murders occur following some degree of social interaction between victim and murderer. Murders generally happen within the context of family or other interpersonal relations, and most victims know their murderer. In some cases, the victim even unintentionally prompts the murderer to attack, by making verbal threats, striking the first blow, or trying to use a weapon, a situation called victim-precipitated murder. The majority of killers are not psychotic or mentally deranged. While sensational murders and murderers, like the “Son of Sam,” receive much publicity, they probably make up a very small percentage of the total.

Studies indicate several interesting facts about murder and murderers:


Men are much more likely than women to kill and/or be killed.

Murders are most likely to occur in large urban areas.

Murders are more common during the months of December, July, and August, and are also more likely to occur on weekend nights and early mornings.

Alcohol plays a role in nearly two-thirds of all murders.

The question of the role of capital punishment in deterring murder intrigues sociologists and stirs a great deal of debate. The social science literature generally indicates that no relationship exists between capital punishment and homicide rates, although some sociologists may prefer to describe the literature on the topic as inconclusive.
Rape and personal assault

Laws that protect people from unwanted sexual behaviors are appropriate and necessary, and this is certainly the case with rape—the forced sexual violation of one person (usually female) by another (usually male). To some, rape is a crime of violence and aggression, not one of sex. To others, it is a crime of both violence and sex.

Rape is hardly a phenomenon of recent origin. Because men have traditionally treated women more like property than as individuals, societies of the past viewed abuses against women less as crimes against them than as crimes against their fathers, husbands, or owners. This thinking has begun to change in recent decades. For instance, some state courts have finally ruled that a woman can charge her husband with marital rape if he forces her to have sexual relations.

Although bringing charges against an attacker may now be easier for rape victims, winning a conviction in court is still difficult.

While rape crosses all lines—racial, socioeconomic, age, and marital status—single, white women in their teens and 20s are at the greatest risk of being victims. Most rapists are males between the ages of 18 and 44.

In recent years, the reported incidence of rape has increased, but authorities attribute this more to a greater willingness of women to report the crime than to an actual increase in the number of rapes. Still, many victims never report the assault or file charges. Why?


They may fear being victimized again, this time in the courts.

They may dread the social stigma of being a rape victim, or the publicity that accompanies a rape report.

They may realize that the chances of winning a conviction are small.

They may feel emotionally traumatized and drained.

They may feel dirty, guilty, and demoralized, coming to the erroneous conclusion that their behavior or dress somehow inadvertently indicated a desire to be forced into sex.

Because of this ambivalence over reporting rape, authorities may never know the exact number of rapes in the United States. One study estimates that between 15 and 25 percent of women in the United States have been or will be victims of rape during their lifetimes. These figures may or may not represent the actual number of rapes taking place in this country, but they are nonetheless alarming.

Six basic types of rape exist: outsider rape, gang rape, acquaintance rape, date rape, marital rape, and statutory rape. Many people mistakenly believe outsider rape (or stranger rape), which is an attack by someone entirely unknown to the victim, is the most common type of rape, probably because it is the one victims usually bring to the attention of authorities. Outsider rape is often the most violent rape of all. Victims frequently suffer severe and even fatal injury, often through the use of knives, guns, and other weapons. In many cases, perpetrators of outsider rape pick their victims carefully and plan the best times and places for the assault. Gang rape occurs when two or more perpetrators—either strangers or acquaintances—commit rape on the same individual.

The perpetrator of acquaintance rape is someone the victim knows. Studies have found acquaintance rape to occur more frequently than any other kind. As many as 95 percent of college female victims may know their attackers. A specific type of acquaintance rape is date rape, in which the perpetrator is a dating partner. Date rape can occur at any time during courtship, from the first date to long after a stable relationship has formed. It often happens when one or both individuals have been drinking alcohol or taking drugs, and when one individual takes “no” to sexual pressure from the other to mean “yes” or “maybe.”

Some states still do not recognize marital rape unless it occurs between separated marital partners or results in serious injury. One research study found that as many as 14 percent of married women report having been raped by their husbands. Although abuse and cruelty are frequent during marital rape, prosecutions are rare.

Marital rape is one aspect of a larger problem: spouse abuse. One estimate claims that nearly one million women in the United States each year seek medical treatment for injuries sustained during beatings by their husbands.

Rape is not exclusively a crime committed against women. Men, too, are raped, generally by heterosexual men, but also occasionally by women. Although extremely common in prison settings, male-on-male rape is rarely reported or prosecuted in the wider society. Male rape is usually a display of power and dominance over others, such as occurs among prison inmates.
Child abuse

Child abuse is the intentional inflicting of pain, injury, and harm onto a child. Child abuse also includes emotional, psychological, and sexual abuse, including humiliation, embarrassment, rejection, coldness, lack of attention, neglect, isolation, and terrorization.

Adults who were physically and emotionally abused as children frequently suffer from deep feelings of anxiety, shame, guilt, and betrayal. If the experience was especially traumatic and emotionally painful (as it often is), victims may repress memories of the abuse and suffer deep, unexplainable depression as adults. Child abuse almost always interferes with later relationships. Researchers have also noted a wide range of emotional dysfunction during, soon after, and long after physical abuse, including anxiety attacks, suicidal tendencies, angry outbursts, withdrawal, fear, and depression. Another decidedly negative effect of child abuse, a strong intergenerational pattern, is also worth noting. In other words, many abusers were themselves victims of abuse as children.

In spite of the range and intensity of the after-effects of child abuse, many victims manage to accept the abuse as a regrettable event, but one that they can also leave behind.

One emotionally damaging form of child abuse is child sexual abuse. Also known as child molestation, child sexual abuse occurs when a teenager or adult entices or forces a child to participate in sexual activity. This activity constitutes perhaps the worst means of exploiting children imaginable. Ranging from simple touching to bodily penetration, child sexual abuse is culturally forbidden in most parts of the world, and is illegal everywhere in the United States. Experts estimate that as many as 25 percent of children in the United States undergo sexual abuse each year. Adults who are sexually attracted to children are known as pedophiles.

Every state in the country also has laws against a specific type of child abuse known as incest, which is sexual activity between closely related persons of any age. Child sexual abuse is incest when the abuser is a relative, whether or not the relative is blood-related, which explains why stepparents can be arrested for incest for molesting their stepchildren. Not all states have laws forbidding sexual activity between first cousins.

Contrary to popular misconception, incest occurs less frequently than abuse from a person outside the family, such as a family friend, teacher, minister, youth director, or scoutmaster. The perpetrators of incest are typically men; their victims, typically girls in their middle childhood years.
Sexual harassment

Reported more frequently today than ever before, sexual harassment is legally defined as unwanted sexual advances, suggestions, comments, or gestures—usually ongoing in nature and involving a supervisor-supervisee relationship (that is, a situation involving unequal power). Sexual harassment takes many forms, such as:


Verbal harassment or abuse.

Sexist remarks about a person's body, clothing, or sexual activities.

Unwanted touching, pinching, or patting.

Leering or ogling at a person's body.

Subtle or overt pressure for sexual activity.

Demanding sexual favors accompanied by implied or overt threats concerning one's job or student status.

Constant brushing against a person's body.

Physical assault.

Women are most often the objects of sexual harassment, especially in the workplace. One review of the literature found that 42 percent of women reported having experienced some form of sexual harassment at work, compared to 15 percent of men. The practice is so pervasive in work sites throughout the United States that many women have come to expect it, and the vast majority (over 95 percent) never file a formal complaint. Others simply find another job.

Sexual harassment is not confined to the workplace. For example:


College students may come under sexual pressure from their instructors, with grades, graduation, and letters of recommendation used as threats or bribes. One survey of university undergraduates found that 29 percent of women and 20 percent of men reported having experienced some form of sexual harassment from instructors.

Patients of physicians and psychologists may also become the object of unwanted sexual attention from their providers.

Lawyers may coerce clients into sex in exchange for legal services.

Even pastors may convince parishioners that sex with the clergy is a viable road to finding spirituality and true peace of mind.

Apparently, no profession, institution, or individual is immune from sexual harassment, as situations of unequal power can exist anywhere. Fortunately, most companies and universities now have policies and reporting structures in place to deal with complaints of sexual harassment, and some states have passed laws prohibiting sexual activity, for example, between therapists and their clients.

The effects of sexual harassment can be numerous and long-lasting. With good jobs at a premium, the possible financial effects of resisting sexual harassment on the job—demotions, pay reductions, and even termination—can be devastating. The psychological effects of sexual pressure on the job, at school, in the doctor's office, or wherever—anxiety, fear, depression, repressed anger, and humiliation—can be equally devastating. Guilt and shame are also common because victims of sexual harassment, similar to victims of rape, may somehow feel responsible. They fear that their dress and/or mannerisms may be bringing on the unwelcome sexual attention.

Crimes against Property
Of the almost 1.5 million Americans under some form of correctional supervision, most are there for offenses against someone else's property. In other words, property crimes are much more common than crimes against persons. Property crimes reported to the police take place on average once every 2 to 3 seconds in the United States. Every year about one in 20 Americans falls victim to a property crime.
Computer crime

An emerging type of crime involves using computers to “hack” (break into) military, educational, medical, and business computers. Although software exists to thwart sophisticated hackers, it provides only limited protection for large computer systems. Modest estimates state that known computer crimes total some $300 million each year; the amounts could be much higher. Laws dealing with computer crimes are in their infancy.
Victimless crime

In victimless crime, all parties consent to the exchange of illegal goods and/or services. Victims may exist in some cases, but usually they do not. The list of victimless crimes includes illicit drug use, gambling in most areas of the country, the use of illegal sexual materials, public nudity, public drunkenness, vagrancy, loansharking, and prostitution.

A prostitute is a person who has sex indiscriminately for pay. Prosecution of prostitutes has been inconsistent, primarily because society has trouble making up its mind about prostitution. The vast majority of Americans disapprove of prostitution—61 percent of males and 83 percent of females believe the practice is “always wrong” or “almost always wrong.”

The sheer number of sellers and buyers creates a major problem in trying to arrest and prosecute prostitutes. One conservative estimate says that, at any one time, 5 million American females are engaging in some form of prostitution. Another problem is the criminal justice system's bias toward arresting prostitutes more often than their buyers. Still another is the bureaucratic nature of the criminal justice system, which is excessively time-consuming and expensive. Even if arrested, most prostitutes are poor and cannot afford legal representation, so the system has to cover the costs. The entire ordeal frustrates everyone involved. Rather than attempting to arrest and prosecute prostitutes, some communities prefer to focus their efforts on ridding themselves of overt prostitution, usually by preventing prostitutes from loitering and soliciting in public.

Some prostitutes have organized into active unions with the purpose of promoting prostitutes' civil rights by legalizing or decriminalizing their profession. Some proponents argue that legalizing prostitution would save enforcement dollars, eliminate the need for pimps, bring in license fees and taxes, and keep prostitutes disease-free through regular medical examinations. Others argue that decriminalization would allow people to have control over their work, as well as protect the privacy of prostitutes and their customers.
Organized crime

Organized crime refers to groups and organizations dedicated solely to criminal activity. Historically, leaders of organized crime, or “crime families,” have come from different ethnic groups, such as the Italian-American sectors of large U.S. cities.

Organized crime activities are of three basic types:


Legal activities and businesses, such as restaurants.

Illegal activities, such as importing and selling narcotics, gambling, and running prostitution rings.

Racketeering, or the systematic extortion of funds for purposes of “protection.”

As one might expect, organized crime can be wildly profitable. Current estimates place organized crime as one of the largest businesses in the United States, even ahead of the automobile industry. Although police and governmental officials continue to fight organized crime, most mobsters have tremendous amounts of money with which to fight back through high-powered attorneys.

The Criminal Justice System
Becoming a crime statistic is probably the greatest fear among Americans. To deal with crime and deter criminals, American society makes use of formal social controls, particularly the criminal justice system. Sadly, the American criminal justice system is biased. The likelihood of being arrested, convicted, and sentenced appears to be clearly related to finances and social status.

The poor are more likely than the wealthy to be arrested for any category of crime. Why?


Unlike the wealthy who can commit crimes in the seclusion of their offices or homes, the poor have little privacy. This means the poor are more visible to the police, as well as to other citizens who may complain to law officials.

Biases in police training and experience may cause police officers to blindly blame crimes on certain groups, such as people of color and lower-class juveniles.

Finally, the fear of political pressure and “hassles” may prompt law enforcement officers to avoid arresting more affluent and influential members of society.

Poor people typically cannot post bail, so they must wait in jail for their trial. Hence, they are unable to actively work in their own defense. Moreover, when the time for the trial comes, defendants who are not out on bail look guilty because they must enter the courtroom led by police—probably influencing judges and juries. A person who was released on bail enters the courtroom like any other citizen. Social research even indicates that defendants who pay their bail are more likely to be acquitted than those who do not.

Even though the United States entitles all defendants to legal counsel, the quality of this assistance varies. Poor people receive court-appointed lawyers, who may receive lower wages and have a heavy caseload. These lawyers may rush the cases of poor defendants in the interest of time and effort. On the other hand, affluent defendants hire teams of skilled and resourceful lawyers who know how to “work the system.”

When it comes to sentencing, the poor generally receive tougher penalties and longer prison terms than do the more affluent convicted of the same crimes. The race of the victims seems to play a role in the harshness of sentencing as well. Regardless of the murderer's race, those murdering whites are more likely to receive the death sentence than those killing minorities.

Nonetheless, the criminal justice system and prison system serve society in several potentially useful ways:


By being placed in jail, convicted criminals receive “just rewards,” or retribution, for their crimes.

Prisons ideally should deter crimes. As the theory goes, prisons are supposed to keep released criminals from offending again and potential criminals from committing crimes. The social research on this question of deterrence provides mixed results. Prison seems to deter white-collar criminals, for example, but does nothing to deter sex offenders. The literature remains inconclusive with respect to the effects of deterrence on non-criminals.

Prisons isolate criminals from the general public.

Prisons ideally serve to rehabilitate criminals into productive citizens who no longer commit crimes. Programs within prisons designed to rehabilitate prisoners include education, personal counseling, and vocational training to prepare them for eventual release and parole.

A short report card on how well prisons achieve their purposes: Prisons successfully punish and isolate inmates, but they seem to be less successful at rehabilitating inmates and deterring future crimes.

Today's prisons are overcrowded, as inmate populations have increased dramatically in recent decades. Excessively brutal conditions cause prisoners to experience a wide variety of health problems, such as heart disease, hypertension, psychological disorders, and suicide. And although incarcerated populations continue to grow, the number of crimes committed in the United States also increases. Sociologists are quick to admit that they have no easy answers that explain the growth in prison populations and crimes, or easy solutions (for example, in-home detention, early parole) to change this situation.

What Divides Us: Stratification
Social stratification refers to the unequal distribution around the world of the three Ps: property, power, and prestige. This stratification forms the basis for the division of society into social classes, and once such classes form, moving from one stratum to another becomes difficult.

Normally property (wealth), power (influence), and prestige (status) occur together. That is, people who are wealthy tend also to be powerful and appear prestigious to others. Yet this is not always the case. Plumbers may make more money than do college professors, but holding a professorship is more prestigious than being a “blue collar worker.”

The three “Ps” form the basis of social stratification in the United States and around the world, so a detailed discussion of these social “rewards” is in order.

Karl Marx divided industrial society into two major classes and one minor class: the bourgeoisie (capitalist class), the petite bourgeoisie (small capitalist class), and the proletariat (worker class). Marx based these divisions on whether people own the “means of production,” such as factories, machines, and tools, and whether they hire workers. Capitalists are those who own the means of production and employ others to work for them. Workers are those who do not own the means of production, do not hire others, and thus are forced to work for the capitalists. Small capitalists are those who own the means of production but do not employ others. These include self-employed persons, like doctors, lawyers, and tradesmen. According to Marx, the small capitalists are only a transitional, minor class ultimately doomed to join the proletariat.

Marx held that exploitation is the inevitable outcome of the two major classes attempting to coexist within the same society. In order to survive, workers are coerced into working long, hard hours under less-than-ideal circumstances to maximize the profits of the capitalists. Marx also held that given enough discontent with their exploitation, workers would subsequently organize to revolt against their “employers” to form a “classless” society of economic equals. Marx's predictions of mass revolution never materialized in any highly advanced capitalist society. Instead, the extreme exploitation of workers that Marx saw in the 1860s eventually eased, which resulted in the formation of a large and prosperous white collar population.

Despite Marx's failed predictions, substantial economic inequalities exist today in the United States. Wealth refers to the assets and income-producing things that people own: real estate, savings accounts, stocks, bonds, and mutual funds. Income refers to the money that people receive over a certain period of time, including salaries and wages. Current social statistics indicate the poorest 20 percent of Americans earn less than 5 percent of the total national income, while the wealthiest 20 percent earn nearly 50 percent of the total. Further, the poorest 20 percent hold far less than 1 percent of the total national wealth, while the wealthiest 20 percent own over 75 percent of the total.

The second basis of social stratification is power, or the capacity to influence people and events to obtain wealth and prestige. That is, having power is positively correlated with being rich, as evidenced by the domination of wealthy males in high-ranking governmental positions. Wealthier Americans are also more likely to be politically active as a way of ensuring their continued power and wealth. In contrast, poorer Americans are less likely to be politically active, given their sense of powerlessness to influence the process.

Because wealth is distributed unequally, the same is clearly true of power. Elite theorists argue that a few hundred individuals hold all of the power in the United States. These power elite, who may come from similar backgrounds and have similar interests and values, hold key positions in the highest branches of the government, military, and business world. Conflict theorists hold that only a small number of Americans—the capitalists—hold the vast majority of power in the United States. They may not actually hold political office, but they nonetheless influence politics and governmental policies for their own benefit and to protect their interests. An example is a large corporation that uses political contributions to put certain people into office, who then sway policy decisions to limit the fees and taxes the corporation must pay.

On the other hand, pluralist theorists hold that power is not in the hands of the elite or a few, but rather it is widely distributed among assorted competing and diverse groups. In other words, unlike elitists and Marxists, pluralists note little if any inequality in the distribution of power. For instance, citizens can influence political outcomes by voting candidates into or out of office. And the power of labor groups is balanced by the power of businesses, which is balanced by the power of the government. In a democracy, no one is completely powerless.

A final basis of social stratification is the unequal distribution of prestige, or an individual's status among his or her peers and in society. Although property and power are objective, prestige is subjective, for it depends on other people's perceptions and attitudes. And while prestige is not as tangible as money and influence, most Americans want to increase their status and honor as seen by others.

Occupation is one means by which prestige can be obtained. In studies of occupational prestige, Americans tend to answer consistently—even across the 1970s, 1980s, and 1990s. For example, being a physician ranks among the highest on the scale, whereas being a shoe shiner ranks near the bottom.

The way people rank professions appears to have much to do with the level of education and income of the respective professions. To become a physician requires much more extensive training than is required to become a cashier. Physicians also make a great deal more money than cashiers, ensuring their higher prestige ranking.

To occupation must be added social statuses based on race, gender, and age. Even though being a professor is highly ranked, being a racial minority or a woman may negatively affect prestige. As a result, individuals who experience such status inconsistency may suffer from significant anxiety, depression, and resentment.

Types of Social Classes of People
Social class refers to a group of people with similar levels of wealth, influence, and status. Sociologists typically use three methods to determine social class:


The objective method measures and analyzes “hard” facts.

The subjective method asks people what they think of themselves.

The reputational method asks what people think of others.

Results from these three research methods suggest that in the United States today approximately 15 to 20 percent of the population are in the poor, lower class; 30 to 40 percent are in the working class; 40 to 50 percent are in the middle class; and 1 to 3 percent are in the rich, upper class.
The lower class

The lower class is typified by poverty, homelessness, and unemployment. People of this class, few of whom have finished high school, suffer from lack of medical care, adequate housing and food, decent clothing, safety, and vocational training. The media often stigmatize the lower class as “the underclass,” inaccurately characterizing poor people as welfare mothers who abuse the system by having more and more babies, welfare fathers who are able to work but do not, drug abusers, criminals, and societal “trash.”
The working class

The working class are those minimally educated people who engage in “manual labor” with little or no prestige. Unskilled workers in the class—dishwashers, cashiers, maids, and waitresses—usually are underpaid and have no opportunity for career advancement. They are often called the working poor. Skilled workers in this class—carpenters, plumbers, and electricians—are often called blue collar workers. They may make more money than workers in the middle class—secretaries, teachers, and computer technicians; however, their jobs are usually more physically taxing, and in some cases quite dangerous.
The middle class

The middle class are the “sandwich” class. These white collar workers have more money than those below them on the “social ladder,” but less than those above them. They divide into two levels according to wealth, education, and prestige. The lower middle class is often made up of less educated people with lower incomes, such as managers, small business owners, teachers, and secretaries. The upper middle class is often made up of highly educated business and professional people with high incomes, such as doctors, lawyers, stockbrokers, and CEOs.
The upper class

Comprising only 1 to 3 percent of the United States population, the upper class holds more than 25 percent of the nation's wealth. This class divides into two groups: lower-upper and upper-upper. The lower-upper class includes those with “new money,” or money made from investments, business ventures, and so forth. The upper-upper class includes those aristocratic and “high-society” families with “old money” who have been rich for generations. These extremely wealthy people live off the income from their inherited riches. The upper-upper class is more prestigious than the lower-upper class.

Wherever their money comes from, both segments of the upper class are exceptionally rich. Both groups have more money than they could possibly spend, which leaves them with much leisure time for cultivating a variety of interests. They live in exclusive neighborhoods, gather at expensive social clubs, and send their children to the finest schools. As might be expected, they also exercise a great deal of influence and power both nationally and globally.

Social Mobility
When studying social classes, the question naturally arises: Is it possible for people to move within a society's stratification system? In other words, is there some possibility of social mobility, or progression from one social level to another? Yes, but the degree to which this is possible varies considerably from society to society.

On the one hand, in a closed society with a caste system, mobility can be difficult or impossible. Social position in a caste system is decided by assignment rather than attainment. This means people are either born into or marry within their family's caste, and changing castes is very rare. An example of the rigid segregation of caste systems occurs today in India, where people born into the lowest caste (the “untouchables”) can never become members of a higher caste. South Africa under apartheid also had a caste system.

On the other hand, in an open society with a class system, mobility is possible. The positions in this stratification system depend more on achieved status, like education, than on ascribed status, like gender. For example, the United States' social stratification is of this type, meaning movement between social strata is easier and occurs more frequently.
Patterns of social mobility

Several patterns of social mobility are possible:


Horizontal mobility involves moving within the same status category. An example of this is a nurse who leaves one hospital to take a position as a nurse at another hospital.

Vertical mobility, in contrast, involves moving from one social level to another. A promotion in rank in the Army is an example of upward mobility, while a demotion in rank is downward mobility.

Intragenerational mobility, also termed career mobility, refers to a change in an individual's social standing, especially in the workforce, such as occurs when an individual works his way up the corporate ladder.

Intergenerational mobility refers to a change in social standing across generations, such as occurs when a person from a lower-class family graduates from medical school.

Sociologists in the United States have been particularly interested in this latter form of mobility, as it seems to characterize the “American Dream” of opportunity and “rags to riches” possibilities.
Structural mobility and individual mobility

Major upheavals and changes in society can enhance large numbers of people's opportunities to move up the social ladder at the same time. This form of mobility is termed structural mobility. Industrialization, increases in education, and postindustrial computerization have allowed large groups of Americans since 1900 to improve their social status and find higher-level jobs than did their parents. Nevertheless, not everyone moves into higher-status positions. Individual characteristics—such as race, ethnicity, gender, religion, level of education, occupation, place of residence, health, and so on—determine individual mobility. In the United States, being a member of a racial minority, being female, or having a disability has traditionally limited opportunities for upward mobility.

Causes and Effects of Poverty
Any discussion of social class and mobility would be incomplete without a discussion of poverty, which is defined as the lack of the minimum food and shelter necessary for maintaining life. More specifically, this condition is known as absolute poverty. Today it is estimated that more than 35 million Americans—approximately 14 percent of the population—live in poverty. Of course, like all other social science statistics, these figures are not without controversy. Other estimates of poverty in the United States range from 10 percent to 21 percent, depending on one's political leanings. This is why many sociologists prefer a relative, rather than an absolute, definition of poverty. According to the definition of relative poverty, the poor are those who lack what is needed by most Americans to live decently because they earn less than half of the nation's median income. By this standard, around 20 percent of Americans live in poverty, and this has been the case for at least the past 40 years. Of this 20 percent, some 60 percent are the working poor, people who hold jobs but still earn too little to live decently.
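The relative-poverty definition above amounts to a simple threshold test: find the median income, halve it, and count who falls below. A minimal sketch in Python, using made-up income figures purely for illustration:

```python
import statistics

def relative_poverty_rate(incomes, fraction=0.5):
    """Share of people earning less than `fraction` of the median income.

    Mirrors the relative-poverty definition in the text: "poor" means
    earning less than half the median. All figures here are hypothetical.
    """
    threshold = fraction * statistics.median(incomes)
    poor = sum(1 for income in incomes if income < threshold)
    return poor / len(incomes)

# Ten hypothetical annual incomes; the median is 56,000, so the cutoff is 28,000.
incomes = [12_000, 18_000, 25_000, 40_000, 52_000,
           60_000, 75_000, 90_000, 120_000, 250_000]
print(relative_poverty_rate(incomes))  # prints 0.3 (three of ten fall below)
```

Note how the relative measure tracks the whole distribution: if every income doubled, the threshold would double too and the rate would stay at 0.3, which is exactly why relative and absolute poverty counts can diverge.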
Causes of poverty

Poverty is an exceptionally complicated social phenomenon, and trying to discover its causes is equally complicated. The stereotypic (and simplistic) explanation persists—that the poor cause their own poverty—based on the notion that anything is possible in America. Some theorists have accused the poor of having little concern for the future and preferring to “live for the moment”; others have accused them of engaging in self-defeating behavior. Still other theorists have characterized the poor as fatalists, resigning themselves to a culture of poverty in which nothing can be done to change their economic outcomes. In this culture of poverty—which passes from generation to generation—the poor feel negative, inferior, passive, hopeless, and powerless.

The “blame the poor” perspective is stereotypic and not applicable to all of the underclass. Not only are most poor people able and willing to work hard, they do so when given the chance. The real trouble has to do with such problems as the low minimum wage and lack of access to the education necessary for obtaining a better-paying job.

More recently, sociologists have focused on other theories of poverty. One theory of poverty has to do with the flight of the middle class, including employers, from the cities and into the suburbs. This has limited the opportunities for the inner-city poor to find adequate jobs. According to another theory, the poor would rather receive welfare payments than work in demeaning positions as maids or in fast-food restaurants. As a result of this view, the welfare system has come under increasing attack in recent years.

Again, no simple explanations for or solutions to the problem of poverty exist. Although varying theories abound, sociologists will continue to pay attention to this issue in the years to come.
The effects of poverty

The effects of poverty are serious. Children who grow up in poverty suffer more persistent, frequent, and severe health problems than do children who grow up under better financial circumstances.


Many infants born into poverty have a low birth weight, which is associated with many preventable mental and physical disabilities. Not only are these poor infants more likely to be irritable or sickly, they are also more likely to die before their first birthday.

Children raised in poverty tend to miss school more often because of illness. These children also have a much higher rate of accidents than do other children, and they are twice as likely to have impaired vision and hearing, iron deficiency anemia, and higher than normal levels of lead in the blood, which can impair brain function.

Levels of stress in the family have also been shown to correlate with economic circumstances. Studies during economic recessions indicate that job loss and subsequent poverty are associated with violence in families, including child and elder abuse. Poor families experience much more stress than middle-class families. Besides financial uncertainty, these families are more likely to be exposed to a series of negative events and “bad luck,” including illness, depression, eviction, job loss, criminal victimization, and family death. Parents who experience hard economic times may become excessively punitive and erratic, issuing demands backed by insults, threats, and corporal punishment.

Homelessness, or extreme poverty, carries with it a particularly strong set of risks for families, especially children. Compared to children living in poverty but having homes, homeless children are less likely to receive proper nutrition and immunization. Hence, they experience more health problems. Homeless women experience higher rates of low-birth-weight babies, miscarriages, and infant mortality, probably because they lack access to adequate prenatal care. Homeless families experience even greater life stress than other families, including increased disruption in work, school, family relationships, and friendships.

Sociologists have been particularly concerned about the effects of poverty on the “black underclass,” the increasing numbers of jobless, welfare-dependent African Americans trapped in inner-city ghettos. Many of the industries (textiles, auto, steel) that previously offered employment to the black working class have shut down, while newer industries have relocated to the suburbs. Because most urban jobs either require advanced education or pay minimum wage, unemployment rates for inner-city blacks are high.

Even though Hispanic Americans are almost as likely as African Americans to live in poverty, fewer inner-city Hispanic neighborhoods have undergone the same massive changes as many black neighborhoods have. Middle- and working-class Hispanic families have not left their barrio, or urban Spanish-speaking neighborhood, in large numbers, so most Hispanic cultural and social institutions there remain intact. In addition, local Hispanic-owned businesses and low-skill industries keep the barrio's economy wage-based rather than welfare-based.

Climbing out of poverty is difficult for anyone, perhaps because, at its worst, poverty can become a self-perpetuating cycle. Children of poverty are at an extreme disadvantage in the job market; in turn, the lack of good jobs ensures continued poverty. The cycle ends up repeating itself until the pattern is somehow broken.
Feminist perspective on poverty

Finally, recent decades have witnessed the feminization of poverty, or the significant increase in the number of women living in poverty, primarily single mothers. In the last three decades the proportion of poor families headed by women has grown to more than 50 percent. This feminization of poverty has affected African-American women more than any other group.

This feminization of poverty may be related to numerous changes in contemporary America. Increases in unwanted births, separations, and divorces have forced growing numbers of women to head poor households. Meanwhile, increases in divorced fathers avoiding child support coupled with reductions in welfare support have forced many of these women-headed households to join the ranks of the underclass. Further, because wives generally live longer than their husbands, growing numbers of elderly women must live in poverty.

Feminists also attribute the feminization of poverty to women's vulnerability brought about by the patriarchal, sexist, and gender-biased nature of Western society, which does not value protecting women's rights and wealth.

Race and Ethnicity Defined
The term race refers to groups of people who have differences and similarities in biological traits deemed by society to be socially significant, meaning that people treat other people differently because of them. For instance, while differences and similarities in eye color have not been treated as socially significant, differences and similarities in skin color have.

Although some scholars have attempted to establish dozens of racial groupings for the peoples of the world, others have suggested four or five. An example of a racial category is Asian (or Mongoloid), with its associated facial features, hair color, and body type. Yet too many exceptions to this sort of racial grouping have been found to make any racial categorization truly viable. This fact has led many sociologists to indicate that no clear-cut races exist—only assorted physical and genetic variations across human individuals and groups.

Certainly, obvious physical differences—some of which are inherited—exist between humans. But how these variations form the basis for social prejudice and discrimination has nothing to do with genetics but rather with a social phenomenon related to outward appearances. Racism, then, is prejudice based on socially significant physical features. A racist believes that certain people are superior, or inferior, to others in light of racial differences. Racists approve of segregation, or the social and physical separation of classes of people.

Ethnicity refers to shared cultural practices, perspectives, and distinctions that set apart one group of people from another. That is, ethnicity is a shared cultural heritage. The most common characteristics distinguishing various ethnic groups are ancestry, a sense of history, language, religion, and forms of dress. Ethnic differences are not inherited; they are learned.

Most countries today consist of different ethnic groups. Ideally, countries strive for pluralism, where people of all ethnicities and races remain distinct but have social equality. As an example, the United States is exceptionally diverse, with people representing groups from all over the globe, but lacking in true pluralism. The same can be said of the ethnic diversity of the former Soviet Union with its more than 100 ethnic groups, some having more than a million members.

Racial and ethnic groups whose members are especially disadvantaged in a given society may be referred to as minorities. This term has more to do with social factors than with numbers. For example, while people with green eyes may be in the minority, they are not considered to be “true” minorities. From a sociological perspective, minorities generally have a sense of group identity (“belonging together”) and separateness (“being isolated from others”). They are also disadvantaged in some way when compared to the majority of the population. Of course, not all minorities experience all three of these characteristics; some people are able to transcend their master status, or social identity as defined by their race or ethnicity.

Most minority groups are locked into their minority standing, regardless of their achieved level of personal success. They live in certain regions of a country, certain cities, and certain neighborhoods—the poorest areas, more often than not. That is, ethnicity is often associated with social inequalities of power, prestige, and wealth—all of which can lead to hostility between groups within a society. Finally, to preserve their cultural identity, most minorities value endogamy, or marriage within the group. Put another way, intermarriage between minority and majority groups, or even between different minority groups, is not always sanctioned by both groups. Prohibiting intermarriage reduces the possibility of assimilation, or gradual adoption of the dominant culture's patterns and practices.

Prejudice and Discrimination
Prejudice and discrimination have been prevalent throughout human history. Prejudice has to do with the inflexible and irrational attitudes and opinions held by members of one group about another, while discrimination refers to behaviors directed against another group. Being prejudiced usually means having preconceived beliefs about groups of people or cultural practices. Prejudices can either be positive or negative—both forms are usually preconceived and difficult to alter. The negative form of prejudice can lead to discrimination, although it is possible to be prejudiced and not act upon the attitudes. Those who practice discrimination do so to protect opportunities for themselves by denying access to those whom they believe do not deserve the same treatment as everyone else.

It is unfortunate that prejudices against racial and ethnic minorities exist, and continue to flourish, despite the “informed” modern mind. One well-known example of discrimination based on prejudice involves the Jews, who have endured mistreatment and persecution for thousands of years. The largest-scale attempt to destroy this group of people occurred during World War II, when millions of Jews were exterminated in German concentration camps in the name of Nazi ideals of “racial purity.” The story of the attempted genocide, or systematic killing, of the Jews—as well as many other examples of discrimination and oppression throughout human history—has led sociologists to examine and comment upon issues of race and ethnicity.
The sources of prejudice

Sociologists and psychologists hold that some of the emotionality in prejudice stems from subconscious attitudes that cause a person to ward off feelings of inadequacy by projecting them onto a target group. By using certain people as scapegoats—those without power who are unfairly blamed—anxiety and uncertainty are reduced by attributing complex problems to a simple cause: “Those people are the source of all my problems.” Social research across the globe has shown that prejudice is fundamentally related to low self-esteem. By hating certain groups (in this case, minorities), people are able to enhance their sense of self-worth and importance.

Social scientists have also identified some common social factors that may contribute to the presence of prejudice and discrimination:


Socialization. Many prejudices seem to be passed along from parents to children. The media—including television, movies, and advertising—also perpetuate demeaning images and stereotypes about assorted groups, such as ethnic minorities, women, gays and lesbians, the disabled, and the elderly.

Conforming behaviors. Prejudices may bring support from significant others, so rejecting prejudices may lead to losing social support. The pressures to conform to the views of families, friends, and associates can be formidable.

Economic benefits. Social studies have confirmed that prejudice especially rises when groups are in direct competition for jobs. This may help to explain why prejudice increases dramatically during times of economic and social stress.

Authoritarian personality. In response to early socialization, some people are especially prone to stereotypical thinking and projection based on unconscious fears. People with an authoritarian personality rigidly conform, submit without question to their superiors, reject those they consider to be inferiors, and express intolerant sexual and religious opinions. The authoritarian personality may have its roots in parents who are unloving and aloof disciplinarians. The child then learns to control his or her anxieties via rigid attitudes.

Ethnocentrism. Ethnocentrism is the tendency to evaluate others' cultures by one's own cultural norms and values. It also includes a suspicion of outsiders. Most cultures have their ethnocentric tendencies, which usually involve stereotypical thinking.

Group closure. Group closure is the process whereby groups keep clear boundaries between themselves and others. Refusing to marry outside an ethnic group is an example of how group closure is accomplished.

Conflict theory. Under conflict theory, in order to hold onto their distinctive social status, power, and possessions, privileged groups are invested in seeing that no competition for resources arises from minority groups. The powerful may even be ready to resort to extreme acts of violence against others to protect their interests. As a result, members of underprivileged groups may retaliate with violence in an attempt to improve their circumstances.

Solutions to prejudice

For decades, sociologists have looked to ways of reducing and eliminating conflicts and prejudices between groups:


One theory, the self-esteem hypothesis, is that when people have an appropriate education and higher self-esteem, their prejudices will go away.

Another theory is the contact hypothesis, which states that the best answer to prejudice is to bring together members of different groups so they can learn to appreciate their common experiences and backgrounds.

A third theory, the cooperation hypothesis, holds that conflicting groups need to cooperate by laying aside their individual interests and learning to work together for shared goals.

A fourth theory, the legal hypothesis, is that prejudice can be eliminated by enforcing laws against discriminatory behavior.

To date, solutions to prejudice that emphasize change at the individual level have not been successful. In contrast, research sadly shows that even unprejudiced people can, under specific conditions of war or economic competition, become highly prejudiced against their perceived “enemies.” Neither have attempts at desegregation in schools been successful. Instead, many integrated schools have witnessed the formation of ethnic cliques and gangs that battle other groups to defend their own identities.

Changes in the law have helped to alter some prejudiced attitudes. Without changes in the law, women might never have been allowed to vote, attend graduate school, or own property. And racial integration of public facilities in America might never have occurred. Still, laws do not necessarily change people's attitudes. In some cases, new laws can increase antagonism toward minority groups.

Finally, cooperative learning, or learning that involves collaborative interactions between students, while surely of positive value to students, does not assure reduction of hostility between conflicting groups. Cooperation is usually too limited and too brief to surmount all the influences in a person's life.

To conclude, most single efforts to eliminate prejudice are too simplistic to deal with such a complex phenomenon. Researchers, then, have focused on more holistic methods of reducing ethnocentrism and cultural conflicts. They have noted that certain conditions must be met before race relations will ever improve:


A desire to become better acquainted.

A desire to cooperate.

Equal economic standing and social status.

Equal support from society.

Sociologists speculate that one reason prejudice is still around is the fact that these conditions rarely coincide.

Native Americans
Native Americans, or “American Indians,” settled in North America long before any Europeans arrived. Yet they have now lived as foreigners and forgotten members of their own land for more than 200 years. In the 1800s, they were corralled onto “reservations” where they had few opportunities for growing food, hunting animals, or obtaining work. Only in this decade has the Native-American population grown to more than 2 million. Most live on reservations or in rural areas primarily located in the Western states. In recent decades, though, there has been an influx of Native Americans into urban areas.

American Indians are the poorest ethnic group in America. The vast majority live in substandard housing, and about 30 percent live in utter poverty, meaning they are very prone to malnutrition and diseases. The average Native American attends school less than 10 years, and the drop-out rate is double the national average. The rate of Indian unemployment is as high as 80 percent in some parts of the country. Further complicating these matters is the fact that the rate of alcoholism for Native Americans is more than five times that of other Americans.

Native Americans remain a tightly knit and culturally minded people. Neither urban nor rural Indians have necessarily lost their original tribal identities. For instance, the Navajo—who have the largest and most populous reservation in this country—have held fast to their cultural patterns, even though many Navajo have worked in industrial cities.

Inspired in part by the civil rights movement, Native Americans have become politically active in recent years. The American Indian Movement (AIM) is one example of how Indians from various tribes have organized to preserve their authentic culture, prevent further violations of their territorial rights, and pursue other legal matters. Although some goals have been attained, the small number of Native Americans in the United States limits their lobbying power and legal pull.

African Americans
Perhaps more than that of any other minority, the history of African Americans in this country has been a long and complex story. During the early history of the United States, hundreds of thousands of Africans were forcibly brought to this country through the slave trade, and by 1860 nearly 4 million enslaved people lived in the southern states. While some slave masters may have tried to treat their slaves humanely, the slaves felt deeply their loss of homeland, family, and freedom. In addition, harsh working conditions, physical beatings, manacles, branding, and castration were common. Nor did the slaves always passively give in to their masters' whims; sometimes slaves rebelled against their masters. Out of all of this oppression evolved a variety of uniquely African-American forms of art and music, including gospel, jazz, and the blues.

The formal abolition of slavery during the Civil War forever altered the life of blacks, especially in the South. Yet one form of group closure, slavery, was simply replaced by another: discrimination. The behavior of the former slaves was closely monitored, and they were quickly punished for their “transgressions.” The result was continual denial of black people's political and civil rights. Following the Civil War, most southern states passed legislation against the interests of blacks, which became known as the Jim Crow laws. These laws legalized the segregation of blacks from whites in public places such as trains and restaurants and prohibited blacks from attending white schools, theaters, restaurants, and hotels.

During the early decades of the 20th century, thousands of southern blacks moved to the North because of increased job opportunities. However, this migration did not always sit well with northerners. Race riots exploded, and blacks encountered numerous forms of discrimination in housing, work, and politics.

This migration north continued both during and after World War II, especially with the increased automation being used on southern plantations. But blacks migrating north after World War II soon found the opportunities for work slim. Increased automation coupled with unions that exerted control over many occupations added to the prejudice and discrimination experienced by blacks.
The Civil Rights Movement

Blacks were largely denied opportunities for education and personal advancement until the 1950s and 1960s, when the National Urban League and the National Association for the Advancement of Colored People (NAACP) began to have an effect on black civil rights.

Even before World War II, social advocates began challenging segregation in the military, as well as on buses and in schools, restaurants, swimming pools, and other public places. In 1954, in Brown v. Board of Education of Topeka, Kansas, the Supreme Court declared that “separate educational facilities are inherently unequal”—a decision that formed the basis of the civil rights movement of the 1950s to the 1970s. The decision was strongly opposed in some states, and groups like the Ku Klux Klan (KKK), which had formed during Reconstruction, organized to intimidate and persecute blacks.

In 1955 in Montgomery, Alabama, a 42-year-old African-American woman, Rosa Parks, refused to surrender her seat on a bus so that white people could sit. She was subsequently arrested, which spawned mass demonstrations and bus boycotts. Eventually, a Baptist minister, Martin Luther King, Jr., organized and led marches and campaigns of nonviolent resistance to discrimination. But responses to the movement were far from nonviolent. As examples, the National Guard would prohibit black students from entering public facilities, and police would disperse protesters with clubs, fire hoses, and attack dogs. Still, the demonstrations continued until Congress passed the Civil Rights Act of 1964, banning discrimination in education, employment, public facilities, and governmental agencies. Additional bills in later years outlawed discrimination in housing and ensured the rights of blacks to vote. As might be expected, a great deal of resistance arose to the implementation of the new laws protecting blacks' civil rights. Civil rights leaders were threatened, beaten, and even killed—as was the case with Martin Luther King, Jr., in 1968. During the remainder of the 1960s, major race riots broke out in cities across the nation.
Affirmative action

The 1960s and 1970s witnessed the beginnings of affirmative action, or action taken to provide equal opportunity and end discrimination. Most instances of affirmative action have proven controversial. An example is forced school busing, in which children were taken by bus to schools outside their normal school districts in an attempt to force integration of school systems. Other examples of affirmative action attempted in the 1970s and 1980s included “quotas,” or fixed percentages of represented groups to be employed in public agencies and admitted to colleges. Critics of affirmative action argued that the resulting “reverse discrimination,” or action taken to provide opportunity only to underrepresented groups, was not a cure for social ills. Fixed quotas were finally declared illegal.
Blacks today

Sociologists debate the actual effects of the civil rights movement. Although the movement seems to have had some impact, the degree to which that impact has caused lasting social change remains in question.

The unemployment rate of blacks relative to whites is the same as it was in the early 1960s. Employment opportunities for black men have worsened, and a greater percentage of black men have given up on the work force. Likewise, little seems to have improved in terms of neighborhood segregation. Research has reaffirmed that blacks continue to be victims of discrimination in the real estate market.

On a more positive note, black and white children now attend the same schools, and black and white students attend the same colleges. Some urban schools and colleges, however, have larger numbers of black students because of the movement of whites to suburban and rural areas.

Blacks have also made some gains in elective politics; the number of black public officials has increased dramatically since the 1960s. Yet these changes are still relatively minor, as blacks hold only a small fraction of the roughly half-million elected public offices in the United States.

Hispanic Americans
The modern United States includes those areas annexed in 1848 as a result of the American war with Mexico. The descendants of those Mexican people, as well as those of other culturally Spanish countries, are referred to as Hispanics or Latinos. Four primary groups of Hispanics exist in the United States today: Mexican Americans, Puerto Ricans, Cubans, and smaller Spanish-speaking groups from Central and South America. Whether due to large waves of immigration since the 1960s or the tendency to have large families, the Hispanic population in this country is increasing at a remarkable rate. The Hispanic population may overtake the black population within the next few decades.
Mexican Americans

Mexican Americans live throughout the United States, but the largest numbers are concentrated in the Southwest. Most live in cities, generally in barrios, or Spanish-speaking neighborhoods. Some 20 percent live in poverty, and most work in menial jobs. Because many speak only minimal English, they can expect few educational and job opportunities. Some Mexican Americans have resisted assimilating into the dominant English-speaking culture, instead preferring to preserve their own cultural identity.

Illegal immigration of workers from Mexico has been a problem for some time. Wages and benefits are significantly better in the United States, which prompts some desperate families—however great the risks involved—to attempt to enter the United States illegally in the hopes of securing an adequate future. Large numbers of these Mexicans are intercepted and sent back each year by immigration officials, but most simply try again. Illegal immigrants, who are usually willing to perform jobs that others will not, are employable at much cheaper wages than American workers. This leads to a variety of social and political issues for Americans, including increased welfare costs and discrimination.
Puerto Ricans

Puerto Ricans have been American citizens since 1917, and Puerto Rico became a self-governing Commonwealth of the United States in 1952. Because Puerto Rico is a poor island, many of its residents have immigrated to the mainland to improve their circumstances. Puerto Ricans have tended to settle in New York City, where nearly half continue to live below the poverty line. This has resulted in a reverse migration of Puerto Ricans back to their island in the hopes of finding a better life.

Puerto Rican activists continue to argue over the destiny of Puerto Rico. Today, Puerto Rico is not a full state within the United States; it is a commonwealth, or self-governing political entity that maintains a voluntary relationship with a larger nation. Whether Puerto Rico will become the 51st state of the United States, continue as a Commonwealth, or seek independence remains to be seen.

Cuban Americans

More than half a million Cubans left their homeland following Fidel Castro's rise to power in 1959. Unlike other Hispanic immigrants, most of these settled in Florida and came from professional and white-collar backgrounds. These early Cuban Americans have thrived in the United States, many attaining positions comparable to those they held in Cuba. In the 1980s another wave of Cuban immigration occurred, although these people tended to come from poorer conditions. Unlike their predecessors, these later Cuban immigrants have fared more like other Hispanic communities in this country. Both groups of Cuban immigrants are primarily political refugees.

Asian Americans
About 5 million Americans are of Asian heritage, the majority being Chinese, Japanese, and Filipino. Other Asian Americans include people from Korea, Vietnam, Pakistan, and India.

Most Japanese and Chinese workers were brought into the United States by their employers beginning in the late 1800s. Japanese immigrants tended to settle in Pacific states, especially California. In one of America's darker moments, following the Japanese attack on Pearl Harbor during World War II, all Japanese-American citizens were forced to report to “relocation centers,” which were nothing more than concentration camps. After the war, these Japanese Americans integrated into larger American society rather than returning to segregated neighborhoods. Over the years, this minority group has rivaled the education and income levels of whites.

Chinese immigrants also settled in California and worked in such industries as railroad construction and mining. Working- and lower-class whites viewed these immigrants as a potential threat to their jobs, and so began an intense campaign of prejudice and discrimination against these people. The Chinese responded by forming distinct cultural neighborhoods, or “Chinatowns,” where they had a fighting chance to protect themselves from white aggression.

The Immigration Act of 1965 brought about increased immigration of Asians into the United States. Immigrant Chinese Americans now tend to avoid the “Chinatowns” of the native-born Chinese Americans. Many of these foreign-born Chinese work in menial occupations, but ever-growing numbers are working in professional positions. Today, Chinese Americans continue to integrate into mainstream society, where they often work, live, and socialize alongside whites and other groups.

Sex and Gender

According to traditional American norms, males and females of every age are supposed to play out their respective culturally defined masculine and feminine roles. But sociologists know that a fallacy exists here. Believing that one must live out a certain predetermined gender role is one of those “rules” of life that many follow, but few understand. Just because a society defines what behaviors, thoughts, and feelings are appropriately “masculine” and “feminine” does not mean that these role definitions are necessarily desirable. Hence, sociologists are especially interested in the effects that gender and society have on each other.

Gender refers to an individual's anatomical sex, or sexual assignment, and the cultural and social aspects of being male or female.

An individual's personal sense of maleness or femaleness is his or her gender identity.

Outward expression of gender identity according to cultural and social expectations is a gender role. Either gender can live out a gender role (for example, being a homemaker) but not a sex role, which is anatomically limited to one gender (gestating and giving birth being limited to females, for example).

An individual's sexual orientation refers to her or his relative attraction to members of the same sex (homosexual), the other sex (heterosexual), or both sexes (bisexual).

All of these—gender, sexual assignment, gender identity, gender role, sex role, and sexual orientation—form an individual's sexual identity.

Gender Identity
Sociologists are particularly interested in gender identity and how (or if) it determines gender roles. Gender identity appears to form very early in life and is most likely irreversible by age 4. Although the exact causes of gender identity remain unknown, biological, psychological, and social variables clearly influence the process. Genetics, prenatal and postnatal hormones, differences in the brain and the reproductive organs, and socialization all interact to mold a person's gender identity.
Biological influences on gender identity

Sexual differentiation, which encompasses the physiological processes whereby females become females and males become males, begins prenatally. The differences brought about by physiological processes ultimately interact with social-learning influences postpartum (after birth) to establish firmly a person's gender identity.

Genetics is the scientific study of heredity. Geneticists study genes, the basic units of heredity that determine inherited characteristics. Genes are composed of deoxyribonucleic acid (DNA). Three primary patterns of genetic transmission are dominant (expressed trait that is visibly apparent), recessive (unexpressed trait that is not visibly apparent), and sex-linked inheritance (trait carried on one of the sex chromosomes, usually X).

Determination of an embryo's chromosomal sex is genetic, occurring at conception. This process involves chromosomes, which are the biological structures containing biological “blueprints,” or genes. The egg, or ovum, always carries an X chromosome, and the sperm carries either a Y or an X chromosome. A zygote is the product of conception: a fertilized egg. A male zygote (XY) is the product of the fusion of an egg with a sperm carrying a Y chromosome; a female zygote (XX), the product of the fusion of an egg with a sperm carrying an X chromosome. The X chromosome provides valuable genetic material essential to life and health. The Y chromosome is smaller than the X, and carries little more than directions for producing a male.
Psychological and social influences on gender identity

Gender identity is ultimately derived from both chromosomal makeup and physical appearance, but this does not mean that psychosocial influences are missing. Socialization, or the process whereby a child learns the norms and roles that society has created for his or her gender, plays a significant role in the establishment of her or his sense of femaleness or maleness. If a child learns she is a female and is raised as a female, the child believes she is female; if told he is a male and raised as a male, the child believes he is male.

Beginning at birth, most parents treat their children according to the child's gender as determined by the appearance of their genitals. Parents even handle their baby girls less aggressively than their baby boys. Children quickly develop a clear understanding that they are either female or male, as well as a strong desire to adopt gender-appropriate mannerisms and behaviors. This normally occurs within two years, according to many authorities. In short, biology “sets the stage,” but children's interactions with the social environment actually determine the nature of gender identity.

Some people are unable to merge the biological, psychological, and social sides of their gender. They suffer gender dysphoria, or emotional confusion and pain over their gender identity. Specifically, some believe they were born into the wrong-gender body, that their internal sense of gender is inconsistent with their external sexual biology. This condition is termed transsexualism. Transsexuals may desire to be rid of their primary and secondary sexual structures and acquire those of the other sex by undergoing sex-reassignment surgery. Transsexuals should not be confused with transvestites, who enjoy wearing the clothing of the other gender.

Gender Roles
Gender roles are cultural and personal. They determine how males and females should think, speak, dress, and interact within the context of society. Learning plays a role in this process: through socialization, people develop gender schemas, deeply embedded cognitive frameworks regarding what defines masculine and feminine. While various socializing agents—parents, teachers, peers, movies, television, music, books, and religion—teach and reinforce gender roles throughout the lifespan, parents probably exert the greatest influence, especially on their very young offspring.

As mentioned previously, sociologists know that adults perceive and treat female and male infants differently. Parents probably do this in response to their having been recipients of gender expectations as young children. Traditionally, fathers teach boys how to fix and build things; mothers teach girls how to cook, sew, and keep house. Children then receive parental approval when they conform to gender expectations and adopt culturally accepted and conventional roles. All of this is reinforced by additional socializing agents, such as the media. In other words, learning gender roles always occurs within a social context, the values of the parents and society being passed along to the children of successive generations.

Gender roles adopted during childhood normally continue into adulthood. At home, people have certain presumptions about decision-making, child-rearing practices, financial responsibilities, and so forth. At work, people also have presumptions about power, the division of labor, and organizational structures. None of this is meant to imply that gender roles, in and of themselves, are good or bad; they merely exist. Gender roles are realities in almost everyone's life.

Gender Stereotypes
Gender stereotypes are simplistic generalizations about the gender attributes, differences, and roles of individuals and/or groups. Stereotypes can be positive or negative, but they rarely communicate accurate information about others. When people automatically apply gender assumptions to others regardless of evidence to the contrary, they are perpetuating gender stereotyping. Many people recognize the dangers of gender stereotyping, yet continue to make these types of generalizations.

Traditionally, the female stereotypic role is to marry and have children. She is also to put her family's welfare before her own; be loving, compassionate, caring, nurturing, and sympathetic; and find time to be sexy and feel beautiful. The male stereotypic role is to be the financial provider. He is also to be assertive, competitive, independent, courageous, and career-focused; hold his emotions in check; and always initiate sex. These sorts of stereotypes can prove harmful; they can stifle individual expression and creativity, as well as hinder personal and professional growth.

The weight of scientific evidence demonstrates that children learn gender stereotypes from adults. As with gender roles, socializing agents—parents, teachers, peers, religious leaders, and the media—pass along gender stereotypes from one generation to the next.

One approach to reexamining conventional gender roles and stereotypes is androgyny, which is the blending of feminine and masculine attributes in the same individual. The androgyne, or androgynous person, does not neatly fit into a female or male gender role; she or he can comfortably express the qualities of both genders. Parents and other socializing agents can teach their children to be androgynous, just as they can teach them to be gender-biased.

Emerging as a powerful sociopolitical force beginning in the 1960s, the feminist movement, or women's liberation movement, has lobbied for the rights of women and minorities. Feminists have fought hard to challenge and redefine traditional stereotypic gender roles.

Social Stratification and Gender
Throughout most of recorded history and around the globe, women have taken a “back seat” to men. Generally speaking, men have had, and continue to have, more physical and social power and status than women, especially in the public arena. Men tend to be more aggressive and violent than women, so they fight wars. Likewise, boys are often required to attain proof of masculinity through strenuous effort. This leads to males holding public office, creating laws and rules, defining society, and—some feminists might add—controlling women. For instance, not until the twentieth century were women in the United States allowed to own property, vote, testify in court, or serve on a jury. Male dominance in a society is termed patriarchy.

Whereas in recent decades major strides toward gender equality have been made, sociologists are quick to point out that much remains to be done if inequalities in the United States are ever to be eliminated. Behind many of the inequalities seen in education, the workplace, and politics is sexism, or prejudice and discrimination because of gender. Fundamental to sexism is the assumption that men are superior to women.

Sexism has always had negative consequences for women. It has caused some women to avoid pursuing successful careers typically described as “masculine”—perhaps to avoid the social impression that they are less desirable as spouses or mothers, or even less “feminine.”

Sexism has also caused women to feel inferior to men, or to rate themselves negatively. In Philip Goldberg's classic 1968 study, the researcher asked female college students to rate scholarly articles that were allegedly written by either “John T. McKay” or “Joan T. McKay.” Although all the women read the same articles, those who thought the author was male rated the articles higher than the women who thought the author was female. Other researchers have found that men's resumes tend to be rated higher than women's. More recently, though, researchers have found the gap in these sorts of ratings to be closing. This may be due to social commentary in the media regarding sexism, growing numbers of successful women in the workforce, or discussion of Goldberg's findings in classrooms.

In short, sexism produces inequality between the genders—particularly in the form of discrimination. In comparable positions in the workplace, for example, women generally receive lower wages than men. But sexism can also encourage inequality in more subtle ways. By making women feel inferior to men, society comes to accept this as the truth. When that happens, women enter “the race” with lower self-esteem and fewer expectations, often resulting in lower achievements.

Sexism has brought gender inequalities to women in many arenas of life. But inequality has been a special problem in the areas of higher education, work, and politics.

Economy Defined
The economy is a social system that produces, distributes, and consumes goods and services in a society. Three sectors make up an economy: primary, secondary, and tertiary.


The primary sector refers to the part of the economy that produces raw materials, such as crude oil, timber, grain, or cotton.

The secondary sector, made up of mills and factories, turns the raw materials into manufactured goods, like fuel, lumber, flour, or fabric.

The tertiary sector refers to services rather than goods, and includes distribution of manufactured goods, food and hospitality services, banking, sales, and professional services provided by architects, physicians, and attorneys.

Capitalism
Three key principles define the economic system of capitalism:


Private ownership of production and distribution of goods and services

Competition, or the laws of supply and demand directing the economy

Profit-seeking, or selling goods and services for more than their cost of production

Laissez-faire (French for “hands off”) capitalism represents a pure form of capitalism not practiced by any nation today. The ideology driving today's capitalism says that competition is in the best interest of consumers. Companies in competition for profit will make better products cheaper and faster to gain a larger share of the market. In this system, the market—what people buy and the laws of supply and demand—dictates what companies make, and how much of it they make. Workers are motivated to work harder so they can afford more of the products they want.

Supporters of a capitalist system point to the higher production, greater wealth, and higher standard of living displayed by capitalist countries such as the United States. Critics, however, charge that while the standard of living may be higher, greater social inequity remains. They also denounce greed, exploitation, and high concentration of wealth and power held by a few.

Socialism
Three key principles also define the economic system of socialism:


State ownership of production and distribution of goods and services

Central economy

Production without profit

The ideology of socialism directly rejects the ideology of capitalism. In a socialist economic system, the state determines what to produce and at what price to sell it. Socialism eliminates competition and profit, and focuses upon social equality—supplying people with what they need, whether or not they can pay. The ideals driving socialism come from Karl Marx, who saw all profit as money taken away from workers. He reasoned that the labor used to produce a product determined the value of that product. The only way for a company to make a profit is to pay workers less than the value of the product. Thus, wherever profit exists, workers are not receiving the true value of their labor.

Just as nations do not adhere to pure capitalism, neither do they adhere to pure socialism. In pure socialism, all workers would earn exactly the same wage. Most socialist countries do pay managers and professionals such as doctors a higher wage. But because the state employs all members of the society, thereby controlling wages, far less disparity exists between the highest and lowest wage earners.

Supporters of socialism, then, point to its success at achieving social equality and full employment. Critics counter that central planning is grossly inefficient, so the economy cannot produce wealth and all people are poorer. They also object to what they see as unnecessary control of personal lives and limited rights—a form of exploitation in its own right.

Democratic socialism and state capitalism
Ironically, critics of both capitalism and socialism accuse each system of exploitation. Consequently, some nations have carved out systems more in the middle of the continuum between the two.

One hybrid is democratic socialism, an economic system in which the government maintains strict economic controls while preserving personal freedom. The Scandinavian nations, Canada, England, and Italy all practice democratic socialism. Sweden provides the best-known example, in which high taxation funds extensive social programs.

State capitalism is another economic hybrid. In this economic system, large corporations work closely with the government, and the government protects their interests with import restrictions, investment capital, and other assistance. This economic system commonly exists in Asian countries such as Japan and South Korea.

Modern Corporations and Multinationals
A corporation is a business that is legally independent from its members. Corporations may incur or pay debt, negotiate contracts, and sue and be sued. Corporations range in size from local retail stores to giants such as Ford Motor Company and General Electric, one of the nation's largest corporations. These larger corporations sell stocks to shareholders, and the shareholders legally own the company. Management of the company remains separate from, but accountable to, the ownership. The shareholders elect a board of directors who hold regular meetings and make decisions on broad policies governing the corporation. Although many Americans own stock, they normally do not participate in regular board meetings or exert significant control over corporate decisions.

Sometimes corporations with closely related business may share board members, which is called an interlocking directorate. In this arrangement a manufacturer, a financial services company, and an insurance company with shared business also share the same board members. These few individuals, then, exert power over multiple companies whose business is interdependent.

A conglomerate is a corporation made up of many smaller companies, or subsidiaries, that may or may not have related business interests. Conglomerates form through the buying and selling of corporations for profit—rather than for the services or products they provide. The process of corporate merger often leads to large layoffs because, as companies combine forces, many jobs are duplicated in the other company. For example, a conglomerate may take over a smaller company, including that company's marketing department. The conglomerate will already have a marketing department capable of handling most of the new acquisition's needs. Therefore, many or even all of the acquired marketing department's employees could lose their jobs. The same situation often occurs when two corporations of a similar size merge.

Other types of corporations include monopolies, oligopolies, and multinationals. Monopolies occur when a single company accounts for all or nearly all sales of a product or service in a market. Monopolies are illegal in the United States because they eliminate competition and can fix prices, which hurts consumers. In other words, monopolies interfere with capitalism. The U.S. government does consider some monopolies legal, however, such as utilities where competition would be difficult to implement without distressing other social systems. But even utility monopolies have seen increased competition in recent years. Telephone companies were the first utility to witness a rise in competition with the breakup of AT&T in the 1980s. More recently, electric power companies have seen deregulation and increased competition in some regions as well.

Oligopolies exist when a few corporations together dominate a market. The classic example of an oligopoly would be American automakers until the 1980s. Ford, Chrysler, and General Motors manufactured nearly all vehicles built in America. As globalization has increased, so has competition, and few oligopolies exist today.

Multinationals are corporations that conduct business in many different countries. These corporations produce more goods and wealth than many smaller countries. Their existence, though, remains controversial. They garner success by entering less-developed nations, bringing industry into these markets with cheaper labor, and then exporting those goods to more-developed countries. Business advocates point to the higher standard of living in most countries where multinationals have entered the economy. Critics charge that multinationals exploit poor workers and natural resources, creating environmental havoc.

Labor Unions
In the face of large corporations, individual workers have typically felt alienated and vulnerable. While corporations may not hear the individual, laborers recognized that they would hear a united voice. This realization led the workers to develop labor unions—organized groups of laborers who advocate improved conditions and benefits for workers. Labor unions remained strong throughout much of the early and mid-twentieth century. In recent years, however, labor unions in the United States have lost numbers and power. With increasing globalization, corporate mergers, and downsizing, many experts expect to see an increase in labor unions again as workers seek stability and a greater share in the benefits of the global economy.

Politics and Major Political Structure
The political system in use depends upon the nation-state. A nation is a people with common customs, origin, history, or language. A state, on the other hand, is a political entity with legitimate claim to monopolize use of force through police, military, and so forth. The term nation-state refers to a political entity with the legitimate claim to monopolize use of force over a people with common customs, origin, history, or language. Sociologists and political scientists prefer the term nation-state to “country” because it is more precise.

While many different political structures have existed throughout history, three major forms exist in modern nation-states: totalitarianism, authoritarianism, and democracy.

Totalitarianism is a political system that exercises near complete control over its citizens' lives and tolerates no opposition. Information is restricted or denied through complete control of the mass media, close monitoring of citizens and visitors, and prohibition of gatherings for political purposes opposed to the state. Constant political propaganda, such as signs, posters, and media that focus the populace on the virtues of the government, characterizes these nation-states. Obviously, some totalitarian governments maintain more extreme laws than others do. Totalitarian nation-states include North Korea, Chile, many African and Middle Eastern nations, Vietnam, and others.

Authoritarianism is a political system less controlling than totalitarianism, but still denying citizens the right to participate in government. A dictatorship, in which the primary authority rests in one individual, represents one type of authoritarian government. Dictators rule China, Cuba, Ethiopia, Haiti, and many African nations. In these systems, strong militaries and political parties support the dictators. Another form of authoritarianism is a monarchy, in which the primary authority rests in a family and is passed down through generations. In the past, most monarchies exerted near absolute power—in Saudi Arabia the ruling family still does. Most remaining monarchies today, however, such as those in the Scandinavian nations, Great Britain, Denmark, and the Netherlands, are constitutional monarchies where the royal families serve only as symbolic heads of state. Parliament or some form of democratic electoral process truly governs these nation-states.

Democracy is a political system where the government is ruled either directly by the people or through elected officials who represent them. Most democracies today rely upon a system of representatives to make decisions. The most common examples of democracies are the United States, Canada, Germany, and many other European nations.

Politics in the United States
The election of public officials and the balance of power between the three branches of government (executive, legislative, and judicial) carry out democracy in the United States. This system, which makes each branch accountable to the others, restricts the authority of any one branch of the government.

The legislative branch, or Congress (made up of the House of Representatives and the Senate), writes, amends, and passes bills, which the President, as head of the executive branch, must then sign into law.

The executive branch through the President may veto any bill. If the President does veto a bill, the legislative branch may overturn this action with a two-thirds majority in both legislative houses.

The judicial branch, or Supreme Court, may overturn any law passed by the legislature and signed by the President.

The people elect the executive and legislative branches, while the executive branch appoints the members of the judicial branch, subject to approval by the legislature.

The most prominent election in the United States is that of President. While many people mistakenly believe that the popular vote or the Congress directly elects the President, the Electoral College (whose vote is dictated by the popular vote in each state) officially elects the President. To maintain a balance of power, states elect the legislature separately. Each state elects two senators, who serve six-year terms; only a portion of the Senate seats come up for election every two years. States have a varying number of seats in the House of Representatives based on population. Thus, for example, California elects more representatives than other Western states because it has a higher population. Population is constitutionally determined through a 10-year national census.

The President appoints the members of the U.S. Supreme Court (the nine-member judicial branch), but the Senate must approve the President's choices. Justices are appointed for life to insulate the justice system from short-term political influence.
The two-party system

Two political parties predominate in United States government—Republicans and Democrats:


Republicans generally espouse more conservative (or “right”) views and support policies to reduce federal regulations, strengthen the military, and boost capitalist endeavors.

Democrats, on the other hand, generally lean toward more liberal (or “left”) opinions and support policies to strengthen social services, protect the environment, and make businesses accountable to labor.

Although the parties possess different philosophical stances, a continuum exists between them. The United States system is unlike most democracies, which have more than two parties. In multi-party systems, political groups with specialized agendas (such as labor, business, and environment) represent their interests. With the more generalized American system, the two parties must appeal to a broader range of people to be elected. Therefore, both parties work to appear “centrist”—that is, neither too liberal nor too conservative. In this system, third-party candidates face great difficulty getting elected. In fact, third-party candidates have found success only at the state and local level. The last time voters elected a president from outside the two established parties was in 1860, when Abraham Lincoln won as the candidate of the then-new Republican Party. Yet third-party candidates have begun to influence present-day elections and may prompt an eventual restructuring of the two traditional political parties.
Lobbyists and Political Action Committees (PACs)

Without specific representation in multiple political parties, special interest groups must find alternative methods of getting their voices heard in the legislative process. Many companies and other groups hire professional lobbyists to advocate for their causes.

A lobbyist is someone paid to influence government agencies, legislators, and legislation in the best interests of their clients. Lobbyists may even write the legislation that the legislator presents to a committee or the legislature. Lobbyists represent nearly all industries and interests, including insurance, auto manufacturing, tobacco, environment, women, minorities, education, technology, textiles, farming, and many others. Lobbyists, who are usually lawyers, are often former members of the legislature or have held other government positions. Companies and interest groups hire them because of the influence and access they retain from their former positions. For example, after spending decades as a Senator from Oregon and leaving office in disgrace over misconduct, Bob Packwood returned to Washington, D.C. as a paid lobbyist for business interests in the Pacific Northwest.

Political Action Committees, or PACs, are special interest groups that raise money to support and influence specific candidates or political parties. These groups may take an interest in economic or social issues, and include groups as diverse as the American Medical Association, the Trial Lawyers Association, the National Education Association, and the National Rifle Association. In recent years these groups have proved to be powerful and wealthy forces in elections. They often possess more money than the candidates and can run advertising campaigns that support or oppose the viewpoints or actions of a candidate running for office. They may also heavily influence state or local campaigns for ballot measures. PACs bear much of the responsibility for drastic increases in campaign spending in recent years. Many groups and officials are now calling for restrictions on such spending to limit PAC influence and maintain a balance of power among all interested constituencies.
The Pluralist and Power-Elite Models of Politics

Sociologists recognize two main models when analyzing political structures, particularly in the United States:


The Pluralist Model argues that power is dispersed throughout many competing interest groups and that politics is about negotiation. One gains success in this model through forging alliances, and no one group always gets its own way.

The Power-Elite Model argues the reverse, claiming that power rests in the hands of the wealthy—particularly business, government, and the military. These theorists claim that, because power is so heavily concentrated in a few at the top, the average person cannot be heard. In addition, they say that the competitors who are claimed to work as balances simply do not exist.

Experts examining these competing viewpoints find substantial research to support both views.

Universal Education: Growth and Function
Education generally refers to a social institution responsible for providing knowledge, skills, values, and norms.

Universal education in the United States grew out of the political and economic needs of a diverse and fledgling nation. Immigrants brought with them many cultures and religious beliefs; consequently, no common national culture existed. Without a cohesive structure to pass on the democratic values that had won the country its independence, the new nation risked fragmentation.

Founding Father Thomas Jefferson and dictionary-compiler Noah Webster recognized in the 1800s that democracy depended upon a well-educated, voting populace able to reason and engage in public debate. The nation did not fully realize their vision of education immediately. Many states saw “the nation” as a conglomeration of nation states. This fragmented political atmosphere created an education system that was no system at all: Each locality administered its own schools with no connection to any other locality. To complicate matters, public schools at that time required tuition, making them inaccessible to the poor unless they were fortunate enough to attend free of charge. Many religious groups opened parochial schools, but, again, only the rich could afford to attend. Only the wealthiest could afford high school and college. Furthermore, while the political structure may have required an educated voter, the economic structure, still based on agriculture, did not require an educated worker.
Horace Mann and tax-supported education

The fact that average citizens could not afford to send their children to school outraged Horace Mann, a Massachusetts educator now called the “father of American education.” To solve this problem, in 1837 he proposed that taxes be used to support schools and that the Massachusetts government establish schools throughout the state. These “common schools” proved such a success that the idea spread rapidly to other states. Mann's idea coincided with a nation about to undergo industrialization and increasing demands from labor unions to educate their children. The Industrial Revolution generated a need for a more specialized, educated work force. It also created more jobs, which brought more immigrants. Political leaders feared that too many competing values would dilute democratic values and undermine stability, so they looked to universal education as a means of Americanizing immigrants into their new country.

As the need for a specialized, educated workforce continued to increase, so did the availability of education. This led to compulsory education: by 1918, every state mandated that children attend school through the eighth grade or until age 16. High school remained optional, and society considered a person with an eighth-grade education fully educated. As of 1930, fewer than 20 percent of the population graduated from high school; by 1990, more than 20 percent graduated from college.
The rise of the credential society

The need for a specialized workforce has increased exponentially over the decades. Today, Americans live in a credential society (one that depends upon degrees and diplomas to determine eligibility for work). Employers, predominantly in urban areas, who must draw from a pool of anonymous applicants need a mechanism to sort out who is capable of work and who is not. Those who have completed a college degree have demonstrated responsibility, consistency, and presumably, basic skills. For many positions, companies can build upon the basic college degree with specific job training. Some professions require highly specialized training that employers cannot accommodate, however. Lawyers, physicians, engineers, computer technicians, and, increasingly, mechanics must complete certified programs—often with lengthy internships—to prove their competency.

The demand for credentials has become so great that it is changing the face of higher education. Many students who attend college for a year or two (or even complete a two-year Associate's Degree), and then enter the workforce in an entry-level job, may find themselves needing a four-year degree. They discover that while employers hire those without four-year degrees, advancement in the company depends upon the credential of a Bachelor's degree. Oftentimes, regardless of their years of experience or competence on the job, employees who have the appropriate credentials receive advancement. Once again, economics changes education. Most employees with families and full-time employment cannot afford to quit work or work part-time and attend college.

Many colleges have responded with alternative educational delivery systems for those who are employed full time. For example:


At some colleges, students with a minimum number of credits may apply for accelerated degree programs offered in the evenings or on Saturdays.

Some colleges allow students to attend courses one night per week for 18 to 24 months and complete all the course work needed for a specific four-year degree, such as Business Administration.

This demand for credentialed employees, combined with new educational opportunities such as internet courses, video classes, and home study, has changed the demographics of colleges that offer these programs. In some cases, nontraditional students, or adult learners, comprise as many as half of the students attending a college.

The functionalist theory
The functionalist theory focuses on the ways that universal education serves the needs of society. Functionalists first see education in its manifest role: conveying basic knowledge and skills to the next generation. Durkheim (the founder of functionalist theory) identified the latent role of education as one of socializing people into society's mainstream. This “moral education,” as he called it, helped form a more cohesive social structure by bringing together people from diverse backgrounds, echoing the historical concern with “Americanizing” immigrants.

Functionalists point to other latent roles of education such as transmission of core values and social control. The core values in American education reflect those characteristics that support the political and economic systems that originally fueled education. Therefore, children in America receive rewards for following schedules, following directions, meeting deadlines, and obeying authority.

The most important value permeating the American classroom is individualism—the ideology that advocates the liberty rights, or independent action, of the individual. American students learn early, unlike their Japanese or Chinese counterparts, that society seeks out and reveres the best individual, whether that person achieves the best score on a test or the most points on the basketball court. Even collaborative activities focus on the leader, and team sports single out the one most valuable player of the year. The carefully constructed curriculum helps students develop their identities and self-esteem. Conversely, Japanese students, in a culture that values community over individuality, learn to be ashamed if someone singles them out, and learn social esteem—how to bring honor to the group rather than to themselves.

Going to school in a capitalist nation, American students also quickly learn the importance of competition, through both competitive learning games in the classroom, and through activities and athletics outside the classroom. Some kind of prize or reward usually motivates them to play, so students learn early to associate winning with possessing. Likewise, schools overtly teach patriotism, a preserver of political structure. Students must learn the Pledge of Allegiance and the stories of the nation's heroes and exploits. The need to instill patriotic values is so great that mythology often takes over, and teachers repeat stories of George Washington's honesty or Abraham Lincoln's virtue even though the stories themselves (such as Washington confessing to chopping down the cherry tree) may be untrue.

Another benefit that functionalists see in education is sorting—separating students on the basis of merit. Society's needs demand that the most capable people get channeled into the most important occupations. Schools identify the most capable students early. Those who score highest on classroom and standardized tests enter accelerated programs and college-preparation courses. Sociologists Talcott Parsons, Kingsley Davis, and Wilbert Moore referred to this as social placement. They saw this process as a beneficial function in society.

After sorting has taken place, the next function of education, networking (making interpersonal connections), is inevitable. People in high school and college network with those in similar classes and majors. This networking may become professional or remain personal. The most significant role of education in this regard is matchmaking. Sociologists primarily interest themselves in how sorting and networking bring together couples of similar backgrounds, interests, education, and income potential. People place so much importance on this function of education that some parents limit their children's options for college to ensure that they attend schools where they can meet the “right” person to marry.

Functionalists point to the ironic dual role of education in both preserving and changing culture. Studies show that, as students progress through college and beyond, they usually become increasingly liberal as they encounter a variety of perspectives. Thus, more educated individuals are generally more liberal, while less educated people tend toward conservatism. Moreover, the heavy emphasis on research at most institutions of higher education puts them on the cutting edge of changes in knowledge, and, in many cases, changes in values as well. Therefore, while the primary role of education is to preserve and pass on knowledge and skills, education is also in the business of transforming them.

A final and controversial function assumed by education in the latter half of the twentieth century is replacement of the family. Many issues of career development, discipline, and human sexuality—once the domain of the family—now play a routine part in the school curriculum. Parents who reject this function of education often choose to home-school their children or place them in private schools that support their values.

Reform of Education
In 1983, the National Commission on Excellence in Education issued a scathing review of American education titled “A Nation at Risk.” Although the Commission did find some successes, the majority of the report focused on the failure of American education to prepare students for competing in a global market. Educational mediocrity, it claimed, caused lowered SAT (formerly the Scholastic Aptitude Test) scores, declining standards, grade inflation, poor performance in math and science, and functional illiteracy (reading and writing skills insufficient for daily living). The report also identified social promotion, the practice of promoting students who lack basic skills to the next higher grade so that they can continue on with their classmates and preserve their self-esteem, as a culprit. Much of the discussion surrounding the report focused upon the lower SAT scores and grade inflation, the practice of assigning higher grades to lesser skills in order to support a normal grade curve. Educators defended themselves by arguing that the lower test scores resulted from more students, with a wider range of abilities and narrower course loads, taking the exams.

The report recommended sweeping reforms, first calling for higher standards with more course work in English, math, science, social studies, and computer science. Next, it demanded a stop to social promotion. Finally, the report pointed to below-average pay scales for teachers and recommended raising teacher salaries to attract more highly qualified teachers. In some cases, students entering teacher education programs were themselves the students with the lowest verbal and math scores.

Since the report, the United States has given increased funding and attention to education, with mixed results. Many of the problems identified by the report continue. Social promotion continues because concern for the child's self-esteem often outweighs concern about the need for basic skills, and because an overburdened system cannot withstand holding back large numbers of students. Many states and districts have implemented programs to raise scores, and, although SAT scores have risen somewhat, critics charge that revisions that made the exam easier caused the rise. Some districts have tied teachers' and administrators' salaries to student performance, with mixed results. Myriad other issues that confront educators before they even reach the classroom, such as a lack of teaching resources and even fear of student violence, further complicate education reform.
Global Perspective on Education
Increasing global commerce and competition provide much of the fuel that drives the call for education reform. Many more nations are industrializing and competing in the global market. The nations with the best minds and best education will lead the world economically. When researchers compare the performance of American students to that of their international counterparts, the United States scores low among industrialized nations. In a frequently quoted study, 13-year-olds in Korea and Taiwan scored highest on math and science exams, while 13-year-olds in the United States scored near the bottom of industrialized nations.

Experts point to parental attitudes and school systems to explain the differences. Asian parents maintain far higher expectations of their children, push them harder, and more often credit their children's success to “hard work.” American parents, on the other hand, generally harbor lower expectations, become satisfied with performance more quickly, and often credit their children's success to “talent.”

School systems also differ. In France and England, public schools provide preschool to 3-year-old children. The Japanese school year can run 45 to 60 days longer than the average American school year, with much shorter breaks. Most Japanese students also attend juku, or “cram school,” after school, where they study for several more hours with tutors to review and augment the day's schoolwork. Fierce competition exists because not all students can get into the universities, and getting into the best universities secures the student's and the family's future. Although Japanese students may outperform American ones, critics point to the high suicide rate and other social ills associated with the Japanese system.

Current Issues in Education
A number of issues and controversies now face educators and communities. Among them are discipline and security; race, ethnicity, and equality; mainstreaming; and public versus private education.

Discipline and security

Expressions of violence have increased in the culture, and so has violence in the schools. In the past, only urban or poor inner-city schools worried about serious violence. With recent school shootings in small towns from Kentucky to Oregon, all U.S. schools and districts, however small, must now directly address the increased incidence of school violence. Teachers have found children as young as kindergartners coming to school armed.

Schools have reacted decisively. To reduce the threat from strangers or unauthorized persons, many have closed campuses. Others require all persons on campus to wear identification at all times. When the students themselves come to school armed, however, the schools have been forced to take more drastic measures. Many have installed metal detectors or conduct random searches. Although some people question whether the searches constitute illegal search and seizure, most parents, students, administrators, and teachers feel that, given the risk involved, the infringement on civil liberties is slight.

Educators recognize that metal detectors alone will not solve the problem. Society must address the underlying issues that make children carry weapons. Many schools include anger management and conflict resolution as part of the regular curriculum. They also make counseling more available, and hold open forums to air differences and resolve conflicts.

School uniforms constitute another strategy for reducing violence, and public schools across the country—large and small—are beginning to require them. Many violent outbursts relate to gangs. Gang members usually wear identifying clothing, such as a particular color, style, or garment. By requiring uniforms and banning gang colors and markers, administrators can prevent much of the violence in the schools. Advocates point out, too, that uniforms reduce social class distinctions and cost less than buying designer wardrobes or standard school clothes.
Race, ethnicity, and equality

The first major examination of race, ethnicity, and equality in education came as part of the civil rights movement. At the order of Congress, the Commissioner of Education appointed sociologist James Coleman to assess educational opportunities for people of diverse backgrounds. His team amassed information from 4,000 schools, 60,000 teachers, and about 570,000 students. The subsequent Coleman Report produced results that were unexpected and controversial, even to the researchers themselves. The report concluded that the key predictors of student performance were social class, family background and education, and family attitudes toward education. The Coleman Report pointed out that children from poor, predominantly non-white communities began school with serious deficits, many of which they could not overcome. According to the report, school facilities, funding, and curriculum played only minimal roles.

Some studies supported the Coleman Report's findings, while others disputed them. Studies by Rist and Rosenthal-Jacobson demonstrated that specific classroom practices, such as teacher attention, did affect student performance. Sociologists reconcile the opposing findings by pointing out that Coleman's large-scale study reveals broad cultural patterns, while classroom studies are more sensitive to specific interactions. Sociologists conclude, then, that all of the factors named by the divergent studies play a role in student success. However different the study results, all researchers agree that a measurable difference exists between the performance of affluent white students and that of their poorer, non-white counterparts.

Even though researchers widely disputed the Coleman Report, the report did bring about two major changes:


First was the development of Head Start, a federal program for providing academically focused preschool to low-income children. This program is specifically designed to compensate for the disadvantages that low-income students face. Head Start has proven successful, and most students who go through the program as 4- or 5-year-olds continue to perform better than students not enrolled in Head Start, at least through the sixth grade.

The other consequence of the Coleman Report proved less successful and far more controversial than the Head Start program. In an effort to desegregate education, courts ordered some districts to institute busing—a program of transporting students to schools outside their neighborhoods in order to achieve racial balance. This generally meant busing white students to inner-city schools and busing minority students to suburban schools. Public opposition to busing remains high, and the program has achieved only modest results.

Bilingual education, which means offering instruction in a language other than English, constitutes another attempt to equalize education for minority students. Federally mandated in 1968, bilingual education has generated considerable debate. Supporters argue that students whose first language is not English deserve an equal educational opportunity unavailable to them unless they can receive instruction in their first language. Opponents counter that students not taught in English will lack the fluency needed to function in daily life. Numerous studies support conclusions on both sides of the issue, and, as funding becomes scarce, the debate will intensify.

Mainstreaming is the practice of placing physically, emotionally, or mentally challenged students in a regular classroom instead of a special education classroom. Educators continue to debate the merits and problems of mainstreaming. In general, the practice seems to work best for students who can still keep pace with their peers in the classroom, and less well for students with more severe challenges. Experts note that exceptions do occur on both accounts and recommend careful consideration on a case-by-case basis.
Public versus private

Most of the public-versus-private discussion centers on public education. One cannot ignore the effect of private education and home schooling on American education, however. Many parents who are dissatisfied with the quality of public education, who fear rising violence in the schools, or who want specific personal or religious values integrated into the curriculum turn to private and parochial schools. The majority of private schools are religious, and the majority of those are Catholic.

Studies have found that private schools maintain higher expectations and that students in these schools generally outperform their public school peers. These findings support the Rist and Rosenthal-Jacobson studies.

Because of the success of private schools in educating at-risk students, more parents are seeking ways to afford these institutions, which have been largely available only to affluent white families who can pay the tuition costs. One proposed solution is a voucher system: the government would issue parents a credit worth a set dollar amount to take to the school of their choice, public or private. Advocates argue that this program would make private schooling more available to poorer families and create more equal opportunities. Critics charge that such a policy would drain public schools of needed funding and further erode them. Because the vouchers would not cover the entire cost of private school, they still would not put private schooling within the reach of poorer families. The program would result, opponents argue, in further segregation of schooling. Other public school solutions include the following:


Magnet schools provide a selective, academically demanding education and superior facilities for qualified students.

Charter schools offer flexible and innovative education independent of the traditional rules and regulations governing public schools.

Interdistrict and intradistrict enrollments permit any eligible student in one school district to apply for enrollment in any district school or program.
Adult Development
Adulthood is primarily a time of determining lifestyles and developing relationships. Among other things, most adults eventually leave their parents' home, develop a long-term romantic relationship, and start a family, creating a new home.

Research by sociologist Daniel Levinson identified the stages of adult development presented in Table 1.
TABLE 1 Levinson's Stages of Adult Development


Novice phase of early adulthood

Early adult transition

Entering the adult world

Age-30 transition

Culmination of early adulthood

Settling down

Midlife transition

Entering middle adulthood

Age-50 transition

Culmination of middle adulthood

Late adult transition

Late adulthood

Researchers today generally accept these stages when seeking to explain and evaluate adult development.

Early Adulthood: Age 17–45
An important aspect of achieving intimacy with another person is first being able to separate from the family of origin (the family into which a person is born or raised), as distinct from the family of procreation that a person later establishes. Most young adults have some familial attachments, but are also in the process of separating from them. This process normally begins during Daniel Levinson's early adult transition (ages 17–22), when many young adults first leave home to attend college or take a job in another city.

By age 22 young adults have attained at least some level of attitudinal, emotional, and physical independence. They are ready for Levinson's entering the adult world (22–28) stage of early adulthood, during which relationships take center stage.

Relationships in Early Adulthood
Love, intimacy, and adult relationships
go hand-in-hand. Psychologist Robert Sternberg proposed that love consists of three components: passion, decision/commitment, and intimacy. Passion concerns the intense feelings of physiological arousal and excitement (including sexual arousal) present in a relationship, while decision/commitment concerns the decision to love the partner and maintain the relationship. Intimacy concerns the sense of warmth and closeness in a loving relationship, including the desires to help the partner, to self-disclose, and to keep the partner in one's life. People express intimacy in three ways:


Physical intimacy involves mutual affection and sexual activity.

Psychological intimacy involves sharing feelings and thoughts.

Social intimacy involves enjoying the same friends and types of recreation.

The many varieties of love described by Sternberg consist of varying degrees of passion, commitment, and intimacy. For example, infatuation, or “puppy love”—so characteristic of adolescence—involves passion, but not intimacy or commitment.

In addition to love and intimacy, sexuality is realized during young adulthood within the context of one or more relationships, whether long- or short-term. Although adolescent sexuality is of a growing and maturing nature, adult sexuality is fully expressive. The following sections discuss some of the more familiar types of adult relationships.

Today, many people are choosing singlehood, or the “single lifestyle.” Regardless of their reasons for not marrying, many singles clearly lead satisfying and rewarding lives. Many claim that singlehood gives them personal control over their living space and freedom from interpersonal obligations. In the 1990s, about 26 percent of men and 19 percent of women in the United States were single, with many staying single for at least a portion of adulthood. Eventually, however, approximately 95 percent of Americans marry.

Most singles date; many are sexually active, with the preferred sexual activities for singles remaining the same as those for other adults. Some singles choose celibacy—abstaining from sexual relationships.
Cohabitation and marriage

Cohabitation and marriage comprise the two most common long-term relationships of adulthood. Cohabitors are unmarried people who live together in a sexual relationship. More than 3 million Americans (most between the ages of 25 and 45) cohabitate. Many individuals claim that they cohabitate as a test of marital compatibility, even though no solid evidence supports the idea that cohabitation increases later marital satisfaction. In fact, some research suggests a relationship between premarital cohabitation and increased divorce rates. Other individuals claim that they cohabitate as an alternative to marriage, not as a trial marriage.

The long-term relationship most preferred by Americans is marriage. More than 90 percent of Americans will marry at least once, the average age for first-time marriage being 24 for females and 26 for males.

Marriage can be advantageous. Married people tend to lead healthier and happier lives than their never-married, divorced, and widowed counterparts. On average, married males live longer than single males. Marriages seem happiest in the early years, although marital satisfaction increases again in the later years, after parental responsibilities end and finances stabilize.

Marriage can also be disadvantageous. Numerous problems and conflicts arise in long-term relationships. Unrealistic expectations about marriage, as well as differences over sex, finances, household responsibilities, and parenting, create only a few of the potential problem areas.

As dual-career marriages become more common, so do potential complications. If one spouse refuses to assist, the other spouse may become stressed over managing a career, taking care of household chores, and raising the children. As much as Americans may hate to admit this fact, women in our culture still bear the primary responsibilities of child rearing. Conflicting demands may partly explain why married women with children leave their jobs more often than childless and single women.

Multiple roles, however, can be positive and rewarding. If of sufficient quality, these roles may lead to increased self-esteem, feelings of independence, and a greater sense of fulfillment.
Extramarital relationships

Severe problems in a marriage may lead one or both spouses to engage in extramarital affairs. Nonconsensual extramarital sexual activity (not agreed upon in advance by both married partners) constitutes a violation of commitment and trust between spouses. Whatever the reasons, nonconsensual affairs can irreparably damage a marriage. Marriages in which one or both partners “cheat” typically end in divorce. Some couples may choose to stay together for monetary reasons or until the children move out. On the other hand, consensual extramarital sexual activity (“swinging”) involves both partners consenting to relationships outside of the marriage. Some couples find this to be an acceptable solution to their marital difficulties, while others find it to be detrimental to the long-term viability of their marriage.

When significant problems in a relationship arise, some couples decide on divorce, or the legal termination of a marriage. About 50 percent of all marriages in the United States end in divorce, and the average duration of these marriages is about 7 years.

Both the process and the aftermath of divorce place great stress on both partners. Divorce can increase the risk of experiencing financial hardship, developing medical conditions (for example, ulcers) and mental problems (anxiety, depression), having a serious accident, attempting suicide, or dying prematurely. The couple's children and extended families also suffer during a divorce, especially when disagreements over custody of the children ensue. Most divorcees, their children, and their families eventually cope. About 75 percent of divorcees remarry, and most of these second marriages remain intact until the death of one of the spouses.

Friends play an important role in the lives of young adults. Most human relationships, including casual acquaintances, are nonloving in that they do not involve true passion, commitment, or intimacy. According to Sternberg, intimacy, but not passion or commitment, characterizes friendships. In other words, closeness and warmth exist without feelings of passionate arousal and permanence. Friends normally come from similar backgrounds, share the same interests, and enjoy each other's company.

Although many young adults feel the time pressures of going to school, working, and starting a family, they usually manage to maintain at least some friendships, though perhaps with difficulty. As life responsibilities increase, time for socializing with others may be at a premium.

Adult friendships tend to be same-sex, non-romantic relationships. Adults often characterize their friendships as involving respect, trust, understanding, and acceptance—typically the same features as romantic relationships, but without the passion and intense commitment. Friendships also differ according to gender. Females tend to be more relational in their interactions, confiding their problems and feelings to other females. Males, on the other hand, tend to minimize confiding about their problems and feelings; instead, they seek out common-interest activities with other males.

Friends provide a healthy alternative to family members and acquaintances. They can offer emotional and social support, a different perspective, and a change of pace from daily routines.
Social Change Defined
Social change refers to any significant alteration over time in behavior patterns and cultural values and norms. By “significant” alteration, sociologists mean changes yielding profound social consequences. Examples of significant social changes having long-term effects include the industrial revolution, the abolition of slavery, and the feminist movement.
