All legislation sends a message. American child labor laws send a clear one: everyone under the age of 18 needs to be protected from the capitalist exploitation known as “work.”
The Fair Labor Standards Act (FLSA) was signed into law by President Franklin Roosevelt during the Great Depression; it restricts working hours, occupations, and employers for all children under 12, almost all teens aged 12 to 16, and most teens aged 16 to 18. Only industries that had strong lobbying institutions in the 1930s were spared: some branches of agriculture, congressional pages, newspaper deliverers, and a few other minor employment opportunities.
As with most left-leaning restrictions, regulations, and redistribution schemes, child labor laws seem morally justified, sound in principle, and even an escape for youngsters trapped in what at first glance appears to be an eternal cycle of low-level education and subsequent low wages.
Legislation like the FLSA, however, is an example of symbolism over substance, a law chasing a trend, and an attention-seeking Washington attempting to take credit for a pre-existing societal shift. By 1930 only 6.4% of males and a diminutive 2.9% of females under the age of eighteen worked, indicating something that economists have been trying to explain to labor regulators for decades: parents and children usually make the best economic decisions for themselves. One-size-fits-all legislation like the FLSA usually fits no one and inhibits the very progress that makes it possible for a child or teen to support himself or herself and achieve success.
Still a divisive issue in 2014, child labor laws have elicited much argument in the past, including Rep. Fritz G. Lanham’s satirical 1924 broadside against federal child labor regulation:
Consider the Federal agent in the field; he toils not, nor does he spin; and yet I say unto you that even Solomon in all his populous household was not arrayed with powers like one of these.
Children, obey your agents from Washington, for this is right.
Honor thy father and thy mother, for the Government has created them but a little lower than the Federal agent. Love, honor, and disobey them.
Whatsoever thy hand findeth to do, tell it to thy father and mother and let them do it.
Six days shalt thou do all thy rest, and on the seventh day thy parents shall rest with thee.
Go to the bureau officer, thou sluggard; consider his ways and be idle.
Toil, thou farmer’s wife; thou shalt have no servant in thy house, nor let thy children help thee.
And all thy children shall be taught of the Federal agent, and great shall be the peace of thy children.
Thy children shall rise up and call the Federal agent blessed.
Lanham’s opposition to child labor laws was well-founded.
Good, Better, and Best
“Child labor laws sound great when you hear the talking points and look into the harsh reality of teens and children left with no other decision but to work, in historical context or even in today’s world,” said Steven, a young Lumberton resident. “But it’s a question of good, better, and best: if your alternatives were either to starve, to work at a farm for very low wages, or to work in a factory for sufficient (albeit still low) wages, which would you pick? If going to school meant starvation, would you still want to ban child labor?”
Situations vary drastically. Child labor laws cannot accommodate the millions of unique children, thousands of industries, hundreds of education levels, and the inimitable factors that combine to create a child’s or teen’s lifestyle. A teen may have poor discernment, but he still understands that survival trumps education. (Not to mention that in third world countries, education is expensive, rare, and typically of low quality in the first place.)
David Roodman explains:
We are all descendants of children who survived to adulthood only by laboring, whether as farmers or herders or gatherers. Only with their labor could the family subsist. I look forward to the day when there is no child on earth for whom this is the best choice. But we are not there yet. And we are not as close as you might think. Going by the numbers, the world has made great progress getting kids into school … how quick should we be to tell parents struggling under circumstances far different from our own what the right choice is?…
Self-interest is a crucial factor that makes child labor laws unnecessary and little more than a symbolic burden on the American teenage population. Getting a job before age 12 in a third world country usually signifies unfortunate circumstances; here it would almost always merely demonstrate a desire to earn a few extra bucks in the summer months. If the situation were genuinely bad enough, would it not be better anyway for a young adult to get a job than to lose everything?
Maybe a teen wants to learn more about his future career, practice a skill, or even just make some money from expertise he already has. In the United States, there is almost never a case where a child needs to support himself: but there are plenty of “underage” folks who want to save up for college and build up an impressive resume before age 20. Every situation is drastically different, and child labor laws are testimonies to the refusal of left-leaning regulators to realize that fact.
When a child or teen decides to start working a job—including in a third world country or developing nation—it may or may not be a good move to make, but it may be the best choice that this child has: child labor laws can limit or destroy these options and make it impossible for young people to get a firm footing on life.
Detrimental Symbolism Over Substance
Child labor laws limit options because they work on the assumption that all career choices require the same amount of training and experience. The trade-off might be worth it if the laws actually did something positive; however, they were not responsible for the drop in American child labor nor do they accomplish anything beneficial in today’s economy.
In other words, child labor laws are a complete flop and little more than a symbolic attempt to show that Washington “cares” about children in desperate situations. Oddly enough, Congress and FDR saw fit to eliminate some children’s best option during the Great Depression to show that they “cared.” Adopting child labor laws denies the fact that there are situations wherein a job before the age of 16 or 18 is the best option, and ignoring that such circumstances exist demonstrates either callous disregard or deliberate ignorance.
From 1820 to 1930 child labor became prevalent, peaked, and then plummeted to almost nothing—all before the FLSA and without restrictions on labor. United States child labor laws were nothing but symbolic.
In “State Child Labor Laws and the Decline of Child Labor” (Explorations in Economic History), Carolyn Moehling explains that the employment rate of 13-year-olds around the beginning of the twentieth century did decline in states that enacted age minimums of 14—but so did the rates for 13-year-olds not covered by the restrictions. Overall, the laws are linked to only a small fraction of the decline in child labor.
Curiously, the children “of the masses”—the commoners, the general public—had more money, more luxuries, more food, longer lifespans, more access to medical care, more clothing, and more education than any generation preceding them. The aristocracy has generally had the most access to education in almost every historical setting, but in America, things were changing. The increased economic activity and burgeoning scientific discoveries of the late 19th and early 20th centuries were giving children and teens not only a chance to survive, but a place to work and time to study. Somehow it happened without child labor laws.
The FLSA scarcely contributed to the drastic reduction and practical elimination of child labor between 1880 and 1940. Economists attribute the drop to economic growth (which brought rising incomes, shorter hours, and larger schools) and industrialization, which together gave parents the financial luxury of keeping their children and teens out of the workforce, away from the farm, and in school.
The statistics indicate that capitalism, not child labor laws, ended the fifty-year reign of child labor in the United States. The laws themselves offer no benefits, only detriment.
Child Labor Around the World: the Left’s Hypocrisy
While the Socialist Labor Party of America is adamant that “Child Labor [is] Still America’s Shame,” the fact is that the United States has one of the lowest child labor rates in the world. It was heading toward that ranking even before the FLSA became law. This can be attributed to American free markets (currently our economic system most resembles interventionism, not laissez-faire capitalism) and technological innovation.
Daniel De Leon, an American socialist, said that “Socialism alone is the remedy for child labor.” Not to burst his bubble, but the worst child labor offenders in the world are socialist states, communist regimes, or outright dictatorships, where children are not banned from jobs but are instead required to work: North Korea, China, Cuba, Somalia, Ethiopia, Pakistan, and Afghanistan have the largest child workforces, for example.
Communists, socialists, and most left-leaning political parties have stolen the moral high ground, claiming that they oppose child labor and will pass laws to stop it. In reality, they do not oppose child labor as such; they oppose children working for the private sector. They will stifle or abolish private property, employment, and salaries to prevent this “atrocity.”
While banning child labor can actually place children in even worse situations than they were already in, forcing a teen or child to work—as communists are fond of doing—is much more devastating. Families split up, teens are abandoned altogether, and children are left to fend for themselves. Mandating labor requirements always has terrible aggregate results.
In the United States, children are not allowed to make the choice to work. In North Korea, however, the situation is inverted: work is required by law. State control over this delicate and formative life decision is a violation of inherent rights either way.
One excuse for the lobbyist-motivated child labor ban is that it lets “our youth receive a quality education.” This is no reason to restrict nearly all young adults under age 18 from working. Some children graduate early—maybe even by age 14—and are left with a few years of nothing before they head to college. Others have plans for occupations like leather-working, carpentry, or plumbing, and while a “quality education” is important to finish, there is no reason they cannot practice their craft for pay.
While Washington may not be creative enough to imagine situations in which a student can work a part-time job and attend school at the same time, students are. Even so, it remains illegal.
A summer job can be a better education than schooling, and in some cases manual labor is very convincing to pre-teens and teens who don’t feel motivated to do well in school or get a college degree. Child labor laws ignore this, shutting out students who are ahead, who graduated early, who need experience in their future field, or who need a little motivation to study harder to avoid a lifetime of drudgery.
A “quality education” sometimes needs to include hands-on opportunities and the responsibilities that come with a job. Child labor laws are there to ensure that no such thing happens.
A Question of Morals
Many people support child labor laws for understandable moral reasons—perhaps out of concern for the teen or child (probably the biggest worry) and concern for the situations that might have driven them to employment.
Consider a situation so dire that a person under the age of 18 must support himself or herself: what good would it do to ban the employment option?
The issue is less the morals than the question being asked. Allowing each individual and family a choice is infinitely better than guessing the majority’s decision and criminalizing the other options.
In desperate settings, child labor laws only tie the hands of young adults. In other circumstances—like a summer or after-school job—the law only serves to prevent good things from happening, which leads to an important issue: the work ethic.
The Work Ethic
“What lesson do we impart with child-labor laws? We establish early on who is in charge: not individuals, not parents, but the state. We tell the youth that they are better off being mall rats than fruitful workers. We tell them that they have nothing to offer society until they are 18 or so. We convey the impression that work is a form of exploitation from which they must be protected … We rob them of what might otherwise be the most valuable early experiences of their young adulthood,” said Jeffrey Tucker.
Work is not exploitation, but child labor laws assume that it is. That assumption engenders an entitlement mindset and, as Tucker mentioned, sends a message that young people are worthless until they “are 18 or so.”
Responsibilities, schedules, deadlines, etiquette, and the customs of the business world are only a few of the things a part-time job could teach a young adult. Junior high or high school students searching for career options, wondering where they would do well, or questioning their prospective degree choices can benefit greatly from interning, getting a part-time job, or even just working at a fast food joint. Any and all work experience helps convince a future employer that a worker is worth hiring.
Empty resumes are rarely noticed, and when it is illegal to add anything to yours until you are past 16, it can be difficult to accumulate enough experience to know where to go with life in time to make a decision about college.
President Obama refers to child labor laws as an example of “common sense rules of the road that strengthen our country without unduly interfering with the pursuit of progress and the growth of our economy.” Interference in the right to work and the right to hire is far from a “common sense” economic policy that strengthens the nation.
“Neither 16 nor 18 is a magic number—they are arbitrary limits thought up by regulators. ‘Child’ labor laws exclude vast portions of our population from the labor market, preventing young adults from being paid for their services and thus eliminating the profit motive entirely; most work is off-limits, and the only jobs available are volunteer positions. It should be no surprise to us that teens stay at home all summer and play video games,” said an anonymous Texas policy analyst. “The innovation and competition that young adults could bring to the workforce is lost to Mario Brothers because of ‘child’ labor laws.”
The Last Word
Child labor laws allegedly protect young people from ruthless employers. Reality, however, is a different matter.
On one end of the spectrum, youngsters facing difficult financial situations or other hard circumstances are left with few options; eliminating employment or limiting hours when a job is likely the only way out is hardly a way to help them.
On the other end is a young adult with no extenuating circumstances who is merely trying to earn money, gain experience, or decide what to do with life. “Child” labor laws bring out the worst in both situations.
It may seem that the FLSA and other labor regulations brought an end to an era of child labor, but again, it was the state chasing a trend to sway popular opinion and bolster public support, not society bowing to the state. Laissez-faire capitalism did not create child labor on its own, but the free market ultimately ended the practice—not the state.
The largely symbolic child labor laws in the United States limit economic progress and innovation, harm the work ethic, and on top of that send a message of worthlessness to young adults—it’s long past time to do away with the FLSA and its kind.
Washington tells “worthless” young adults to wait for employment and refuse legal payment for services. Young adults should tell Washington to either cut it out or get out.
“From colonial times until the 1940s, malaria was the American disease,” said the late Dr. Robert Desowitz, an expert in medical parasitology. From prehistoric days to the baby boomer generation, malaria claimed more victims than any other infectious disease. The medieval and colonial eras—replete with bogus science little better than old wives’ tales, abominable sanitary and hygienic practices, and hazardously overcrowded cities—were conducive to mosquito-borne outbreaks like the infamous 1793 Philadelphia yellow fever epidemic.
Many crucial medical and scientific advancements—the very ones that virtually eliminated the threat malaria once posed—would never have existed had the 18th-century United States government imposed requirements for malaria treatment; the untimely intrusion would have swamped healthcare progress and scientific effort. Whether born of malicious motives or of the commendable desire to keep citizens healthy, such regulation means more malaria and less progress: a universal detriment to humanity, and the consistent result of government attempting to make health mandatory.
In the space of four months in 1793, yellow fever killed over 5,000 Philadelphians. The scientific consensus of the day prompted diagnoses of imbalanced humors: blood, phlegm, yellow bile, and black bile. Revolutionary War hero Dr. Benjamin Rush hypothesized that street stenches disequilibrated the humors and sparked the outbreak; the city government attempted to lessen the odor of the rudimentary sewage systems. Not surprisingly, the city’s misinformed efforts left the fever unchecked.
This raises an important question: what if an equally misinformed 1793 equivalent of the Food and Drug Administration (FDA) had become involved in ending the epidemic?
At the time, most doctors employed ineffective herbal teas to treat yellow fever; others turned to more dangerous remedies. Known as the “prince of bleeders,” the politically connected Dr. Rush staunchly advocated a mercury-and-jalap poison purge for curing the rampant disease.
Assuming that a 1793 FDA were possible, and that the likes of Dr. Rush would not overthrow the tyrannical institution, the administration would likely have approved a poison purge and a few herbal teas. Other treatments and medicines would have waited years for testing before their market debut, and funding, poured into approved projects aligned with government goals, would for centuries crush and “disprove” alternatives.
Locked in the stagnant spell of interventionist cronies, medical science would be forced into stalemate. If it existed at all, progress would inch along at the will, and in the shape, of the government’s agenda. Government is an unauthorized and incompetent failure when it limits individuals’ health decisions—evidence is ample in Canadian healthcare, capable of working miracles but otherwise smothered in thousands of pages of regulations.
For over 70 years, science has had malaria under its thumb. State intrusion would clearly have rendered that victory—won largely with the easily manufactured pesticide dichlorodiphenyltrichloroethane (DDT)—unlikely or impossible.
Whether levying taxes on fatty foods, capping the size of sugary drinks, or rejecting new cancer drugs, government has no incentive whatsoever, other than lobbyists’ well-lined wallets, to accept change or innovation. Dr. Joseph Mercola, a popular but controversial proponent of alternative medicine, commented, “The FDA will not protect your health, nor will any other government agency … the government is interested in promoting drug company profits, not promoting your health.” (Mercola)
As John Stossel explains in his recent book, No, They Can’t, “If government ran health care, those advances would slow to a crawl, because governments don’t innovate. They just keep doing what they did last year.” (Stossel) Considering burgeoning medical and scientific knowledge alone, a cure for cancer is more likely in 2014 than ever before. The FDA’s tainted scrutinizing, however, is lowering the chances of such a breakthrough.
William Faloon remarked, “A major reason so many cancer patients die today is an antiquated regulatory system that causes effective therapies to be delayed (or suppressed altogether).” (Faloon) Stossel illustrates why the dawdling regulatory system forms in the first place: caution. Natural hesitance “makes it easy for government to leap in and play the role of protector,” he explains (Stossel). The FDA takes, on average, 12 years to approve a new and potentially life-saving drug. With the average cost of bringing a new drug to market at $1.3 billion, major pharmaceutical companies sometimes pay up to $11 billion in the never-ending quest to appease FDA employees. In twelve years, cancer kills about 91.2 million patients worldwide.
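The 91.2 million figure is simple arithmetic; a minimal back-of-the-envelope sketch, assuming the roughly 7.6 million annual worldwide cancer deaths implied by the numbers above:

```python
# Back-of-the-envelope check of the "91.2 million in twelve years" claim.
# The annual figure below is an assumption implied by the article's math,
# not an independently sourced statistic.
annual_cancer_deaths = 7_600_000   # assumed worldwide cancer deaths per year
fda_approval_years = 12            # average approval time cited above

deaths_during_one_approval_cycle = annual_cancer_deaths * fda_approval_years
print(f"{deaths_during_one_approval_cycle:,}")  # 91,200,000
```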
That the dismissive gesture of an annoyed bureaucrat and the insidious corruption of a slow-witted agency could be directly responsible for millions of deaths is not only preposterous but unacceptable. Brenna Liepold, a cancer victim who died in 2003 at eighteen, lamented: “Modern science cannot offer me a cure …” (Painter). Tragically, Liepold’s death is largely attributable to the FDA’s sluggish approval system. Government involvement in medical science and an individual’s healthcare choices is an intrusion threatening life itself: FDA agents are mere human beings in superhuman positions, entrusted with third-party life-and-death decisions.
One can only ponder our national condition had the FDA, with its present jurisdiction and imperium, come into existence 221 years ago. Presupposing that another country’s doctors would not intervene, life expectancy would still be 35 years; disease would be attributed to imbalanced humors; and Philadelphians would still chew garlic and burn gunpowder to ward off yellow fever. All things considered, legislators’ good intentions usually precipitate catastrophic debacles like the FDA. When the state invites itself into one of the most personal matters of life—the health of citizens—the state has adopted compulsory death, disease, and stagnation as official policy.
Ultimately, government overstepping its bounds and taking on the role of supreme health authority is more destructive than all the infected mosquitoes on the planet. Perhaps totalitarianism should be known as the American disease, now that malaria is obsolete in this role. One thing is certain about this ideological affliction resulting in inconceivable physical tragedy: the American people should, with the determination of the dauntless individuals who forced the downfall of malaria’s empire, focus their attention on eradicating and conquering this strain of liberalism forevermore.
It’s a commonly quoted fact that Texas, if it were a sovereign nation, would have the 14th largest economy in the world, ranking just behind Russia, Canada, Australia, and India. The energy industry, listed as one of the top four in the state, has contributed a great deal of the crucial jobs, capital, and innovation required to achieve such a ranking.
However, that energy success has come in spite of Washington: federal regulators are waging war against Texas energy and the higher standard of living, lower prices, and greater efficiency it brings. Put simply, many environmental regulations are a bureaucratic tip of the hat to the free market’s enemies.
With a friendly business climate and moderate taxation, Texas has experienced surprising prosperity—even through the recession of 2008—and continues to grow despite federal interference. Job-creating energy projects like the Keystone Pipeline are numerous in Texas (the oil and gas capital of the United States), but the federal government halts, stifles, over-regulates, and heavily taxes such efforts.
In 2012, the Texas Independent Producers & Royalty Owners Association (TIPRO) reported that the oil and gas industry employed 379,800 people; Texas added the most new oil and gas jobs in the nation in the first half of that year, with employment rising by 34,600. Two of the world’s ten biggest refineries are in Southeast Texas, and the state leads the nation in crude oil production and refining.
The Environmental Protection Agency (EPA), a grossly unconstitutional bureaucracy influenced not by reality but by far-flung special-interest groups, leads the assault on Texas energy. For instance, the EPA forced the Cross-State Air Pollution Rule on Texas because of a hypothetical connection between the state’s emissions and a pollution monitor in Granite City, Illinois. The costs of compliance add up to $2.4 billion every year—for that rule alone. Commentators were of the opinion that the EPA was “picking on Texas.”
Greenhouse gas regulations have led many manufacturers to scale back expansion projects because of the compliance costs; estimates put the toll at hundreds of thousands of jobs. Recent ozone standards, meanwhile, will likely kill 7.3 million jobs by 2020 and add over $1 trillion in regulatory costs per year.
The Las Brisas Energy Center in Corpus Christi, a project of Chase Power, closed because of the “insurmountable regulatory framework erected by the EPA,” as the Washington Times reports Chase Power CEO Dave Freysinger saying. The regulation destroyed around 3,900 prospective Texas jobs. The ditched project is not an isolated incident.
“The Las Brisas Energy Center is a victim of EPA’s concerted effort to stifle solid-fuel energy facilities in the U.S., including EPA’s carbon-permitting requirements and EPA’s New Source Performance Standards for new power plants,” he continued, “These costly rules exceeded the bounds of EPA authority, incur tremendous costs, and produce no real benefits related to climate change.”
The damage done by the EPA to Texas jobs and state prosperity is incalculable; it has little foundation in science; and even if the science were correct, the harmful regulations would do little to stop pollution or “climate change.”
Enemies of the free market may not be concerned about the environment, but they are interested in crippling the remnants of a free nation and driving up American energy prices. The question at hand is not one of clean air, but of freedom: does government control really help anything? Look to Chernobyl.
The EPA may be well-intentioned, but more likely it is not. Controlling the energy sector is the easiest way to get a grip on the economy; extreme leftists are aware of that. It’s a handy foot-in-the-door trick that makes it possible—nay, likely—that more controls will be placed on other industries. Texas cannot stand for this. While legislation like the REINS Act and the American Energy Renaissance Act (introduced by Texas Senator Ted Cruz) would relieve the energy sector’s regulatory nightmare, the answer ultimately lies in abolishing the agency itself.
Nearly everything is bigger in Texas, but unemployment lines, electricity bills, gas prices, and federal jurisdiction should have an exemption.
Meet Carl. He’s a shoemaker in the small country called Hypothetical. If you have read this, you’ve already become acquainted with him.
The country of Hypothetical has strict protectionist policies that shield national shoe manufacturers, including Carl, from foreign competition—particularly because foreigners have a comparative advantage in that field. The cheap foreign imports might strike a death blow to Hypothetical’s shoe-making industry; more than likely it will put Carl out of business.
Unfortunately, shoes are pretty expensive in Hypothetical. Around a tenth of the population walks around barefoot because of it. The other 89.5% of Hypothetical’s residents fare quite nicely, although they say they spend more on shoes than they would like.
When Hypothetical’s parliament decides to reduce import tariffs and restrictions, suddenly cheap shoes begin flooding the country. Some of them are low-quality, mass-produced nightmares that most would never purchase, but others are of comparable or higher quality than Hypothetical’s own products. Unfortunately for Carl, all of these shoes, even those of the highest quality, are cheaper than his own.
Soon only Carl’s most faithful customers are coming to his shop, now considered to have obsolete, over-priced shoes. Most consumers now buy from Nike, Carl’s main competitor.
Carl is forced to close shop.
This is a huge blow to Carl (and to his most faithful customers, who prefer his products). His friends, acquaintances, and anyone who hears his story can’t help but feel sorry for him; however, none of these folks are willing to stop buying from Nike. They would rather have shoes for the whole family than go back to scrimping for months to buy a set of boots from their friend Carl. They can’t go back to buying from him.
Carl’s means of income has collapsed before his eyes. His remaining options are far less lucrative than his old work: he narrows them down to either working for Nike or learning a different trade. Obviously this is a huge blow to him and his family.
Freeing up international trade has outstanding, worthwhile positive effects—as this story shows—but it also has painful effects for some manufacturers. Overall, trade liberalization is beneficial; but as with every decision, there are negative side effects. Libertarians and capitalists often downplay the fact that some will suffer when such trade decisions are made and that detrimental situations can result from freeing up international trade.
In the end, the consumers—and the 10% of Hypothetical’s citizens who had previously gone barefoot—benefit; Carl eventually scrapes along in some other line of work; and the economy as a whole is on the rise.
Parliament, was it worth it?
Ask Carl and his fellow manufacturers: for them, it’s no fun. The free market is neither perfect nor painless. Sometimes positive reforms are agonizing for portions of the economy: cutting government spending is bad news for IRS agents—but that doesn’t mean the cuts are unfounded or unnecessary, nor that the entire economy will hurt for it. Competition, supply and demand, and pricing are other free-market realities that are occasionally unpleasant—hurtful to some and beneficial to the rest—not ideal, but better than anything else available to mankind. Think of what happened to the United States piano-making industry, the horse-drawn buggy industry, the record industry, and even cassette tape manufacturers: innovation, improvement, and a better world at the expense of the buggy-builders.
Carl deserves our sympathy. He’s an example of the effects of real-life economics, which can be unpleasant business. But as for the newly-shoed poor and the liberated consumers, they’re pretty ecstatic about Nike. You should be, too.
Ever since the days of the Renaissance, trade between foreign—and occasionally hostile—nations has been a disputed question: should it be legal or illegal, encouraged or discouraged, and foremost, is it beneficial for both parties involved? As technology makes global trade more prevalent, the objections to unrestricted trade and the complaints against import duties and tariffs have led to complicated trade deals that are difficult to navigate. Typically the trade deals are a boon to the participating countries—but there are some who are inclined to disagree, for understandable, well thought-out reasons.
However, while many opponents of globalization have rational concerns about international trade, the underlying economics renders those concerns invalid.
A trade deficit is not, in fact, harmful. The definition of a trade deficit is “the amount by which a country’s imports exceed its exports,” and despite American unions’ protests, it is neither an undesirable effect nor a detrimental one.
The free market principle of mutually beneficial exchange illustrates the point: when one purchases a gallon of milk at the store, one values the milk more highly than the money used to buy it; the store values the money more than it values the milk. To assume that a trade deficit is harmful, unsustainable, or a job-killer is to assume that one side of a transaction always comes out behind, and thus that in the milk purchase one side or the other is always harmed or cheated, which is clearly not the case. The assumption that every transaction has a winner and a loser (it doesn’t), and that the winner is the one holding the cash (even further from the truth), fuels socialist ideology and even forms the basis for movements like Occupy Wall Street. Trade deficits are indefinitely sustainable and hardly harmful.
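The milk purchase can be sketched in a few lines. The dollar valuations below are illustrative assumptions invented for the example, not data:

```python
# Hypothetical valuations for the milk purchase described above.
# All three dollar figures are illustrative assumptions.
buyer_values_milk_at = 5.00   # the most the buyer would willingly pay
price = 3.50                  # the agreed store price
store_cost_of_milk = 2.00     # what the milk cost the store

buyer_gain = buyer_values_milk_at - price    # surplus kept by the buyer
store_gain = price - store_cost_of_milk      # surplus kept by the store

# A voluntary exchange happens only when both gains are positive,
# so neither side of the transaction "comes out behind."
assert buyer_gain > 0 and store_gain > 0
```

Any price between the store’s cost and the buyer’s valuation leaves both gains positive, which is why the exchange happens voluntarily at all.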
Is independence ideal?
A country’s energy independence may be a wise consideration; however, banning imports of all energy forms would obviously drive up prices, limit options, and harm the economy by diverting funds from other, wiser uses, squandering savings merely to keep the lights on and the heater running, when energy independence may not even be necessary. No such regulation has yet been attempted, but the thought experiment shows that restrictions on imports in other areas are likewise painful to the consumer yet lucrative to the lobbyists and manufacturers who negotiate trade bans with legislators. (The automobile industry has relied heavily on such deals in the past.)
A general rule accepted by both left-leaning and right-leaning economists is the law of comparative advantage, which holds that market participants all benefit when they specialize in the occupations and activities where their opportunity cost is lowest. An extremely popular actor who makes $3,000,000 every year, for instance, could certainly earn a paycheck at Wal-Mart, but the opportunity cost of the hours spent there would be millions of dollars more than the job pays. The actor is free to defy the rule of comparative advantage, but doing so is generally unwise in a financial sense, and the opportunity cost is the economic wallop that proves it.
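The actor’s opportunity cost is simple arithmetic. The retail wage and hours below are assumed figures for illustration; only the $3,000,000 salary comes from the example above:

```python
# Back-of-the-envelope opportunity cost for the actor example.
# The wage and hours figures are assumptions, not data.
acting_income = 3_000_000   # annual acting earnings (from the example)
walmart_wage = 12           # assumed hourly retail wage
hours_per_year = 2_000      # roughly a full-time year

walmart_income = walmart_wage * hours_per_year     # 24,000
opportunity_cost = acting_income - walmart_income  # forgone earnings

print(f"Retail income:    ${walmart_income:,}")
print(f"Opportunity cost: ${opportunity_cost:,}")
```

Under these assumptions, a year behind a register costs the actor nearly $3,000,000 in forgone income, which is the sense in which defying comparative advantage is an “economic wallop.”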
If one’s occupation is one’s specialty (and the most money is made in its exercise), it makes little sense to divert one’s attention from baking, painting, building, or teaching, and, returning to the first example, to tend a cow several hours each day. In the end, the cow would do more financial harm than help, although it would provide a greater degree of independence. But independence is painful. In the Dark Ages, many people were independent, and they died young, suffered much, starved often, and innovated little. In wartime or in instances of isolation, independence may be necessary; otherwise, national economic independence is impossible, impractical, and regressive, because regions and countries likewise hold a comparative advantage in some area, and independence would force them to abandon their specializations. The United States and Russia are two examples of geographically large and varied nations that fit no single category, yet regions within these countries have specialties, as do the towns and communities within the regions. The United States could do well as an economically independent nation, although not as well as it does now. If the idea of “national independence” were taken far enough, autarky would apparently be perfection.
Cheap, foreign imports
Cheap, foreign imports are generally portrayed as the “working man’s nightmare” by groups intent on convincing legislators to approve high import tariffs. However, this is more of a talking point than a truth.
When products typically very expensive in Country A are made available by cheap imports from Country B, all parties benefit except the manufacturers in Country A, who have a strong incentive to appeal to the law for protection. When legislators respond, they treat competition as undesirable, which it is not. In the end, Country A’s consumers get a better deal, their money is freed up to invest in other goods at home (meaning more exports), and the economy grows further.
When imports increase, exports and accompanying foreign investment increase along with them, and by a similar amount. The employment reductions that follow an influx of cheap imports are offset by employment gains in export industries: the industries in which Country A holds the advantage now receive the cash once tied up in the expensive, homemade goods replaced by Country B’s cheaper, better alternatives.
Nike vs. Carl the Shoemaker
Lastly, capitalism is an unfair economic system, but it must be noted that all other economic systems are exceedingly more unfair than capitalism and free trade. Carl the shoemaker may lose business when Nike imports running shoes into town, but shoes suddenly become available to the village, and money once reserved for shoes alone can now be put to use in other areas, including investment, entrepreneurship, or education. Should everyone suffer for the sake of one man’s inconvenience? Carl, after all, can take a job with Nike, branch out into the specialty field of dress shoes, or move into an entirely different field. Opportunities are numerous in a thriving free market economy.
Whether the question is of trade deficits, financial independence, or cheap foreign imports, what you see at first is not necessarily reality. Looking beyond the first of a chain of reactions is necessary for financial success and economic understanding.
Not too many years ago, the United States was a primarily agricultural nation: in fact, 64% of the 1850 U.S. labor force worked on or owned a farm. The massive shift from farmhouse to apartment building and dirt road to interstate is a well-documented one, and perhaps the greatest shift in America’s economy.
In 1850, GDP per capita averaged around $2,303 (in 2009 dollars); in 2012, GDP per capita stood at a whopping $49,226 (also in 2009 dollars). Even allowing for a sluggish economy and a flailing system of international trade, that is a massive figure compared to the flimsy 1850 total.
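Those two data points imply, as a rough sketch, a steady compound growth rate over the intervening 162 years:

```python
# Compound annual growth rate implied by the two GDP-per-capita
# figures quoted above (both already in 2009 dollars).
gdp_1850 = 2_303
gdp_2012 = 49_226
years = 2012 - 1850   # 162 years

cagr = (gdp_2012 / gdp_1850) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.2%}")   # roughly 1.9% per year
```

Around 1.9% per year sounds modest, but compounded over a century and a half it multiplies real income per person more than twentyfold.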
Although statistics and GDP measures are never perfectly accurate, the number reveals beyond reasonable doubt a very obvious and very remarkable trend, one benefiting mankind and marking a maturing, developed economy: specialization of labor.
In The Wealth of Nations, Adam Smith first pointed out that a crowd of pin-makers, each executing just one of the many operations involved in creating an 18th-century pin, would make vastly more pins than the same men working alone; Smith reckoned that ten specialized workers could turn out upwards of 48,000 pins a day, while a lone worker could scarcely make twenty.
This philosophy of labor specialization led to highly efficient assembly lines and eventually to the early 20th century automobile industry’s success in mass-producing large, complex vehicles. Smith’s 1776 observation was not revolutionary, but it was nothing short of insightful; considered the “father of capitalism,” Smith realized over 130 years ahead of time that specialization of labor paired with free markets would lead to extraordinary success. In entry-level assembly line jobs, a specialized function might be, for example, tightening a single screw. In a broader sense, specialization of labor means different sectors of the economy staffed by very specifically trained people.
Modern agriculture, aided by constantly improving farm equipment, science, and seeds, is a far cry from the grueling autarky of the 1850s. Struggling to stay afloat themselves, farmers rarely produced much excess food to sell to other members of the community. (Even then, technology, transportation, and communication were not sufficient to distribute the food where it was in demand.)
Rather than spending their time on education or recreation, families would slave away at farm chores from sunrise to sunset. Perhaps, some would say, this resulted in healthier, stronger children. It did not: life expectancy was around 35-38 years, and many children died before reaching the age of 12. Rural agricultural autarky allowed for little to no medical progress. In survival situations, all science regresses while a single man or a single family attempts to eke out a meager living in the wild. The 1850s were nowhere near this stark an existence, but the point stands: life is hard when one is separated from market forces.
If one man is an excellent builder and another a skilled cook, in an autarkic situation the builder would be forced to cook rather than continue in his area of expertise, and the cook would be forced to build, meaning that both spend excess amounts of time on mediocre (maybe even inferior) products and services whose production would be best split between them. The 19th century was an era of discovery, and likely the most important of all advances in that hundred-year span was the widespread introduction of labor specialization, a convenient, beneficial, and life-changing free market development accelerated by agricultural technology. It allowed farmers talented in a particular trade to pursue other interests, and they accepted the opportunity.
By 1860, farmers made up 58% of the population; by 1870, 53%; by 1880, 49%; and in 1930, a mere 21%. Currently farmers make up less than 2% of the population in the United States.
The dramatic transformation is no mystery.
Specialization of labor is the reason that the average American was no longer confronted with the question of survival at every turn; it was the reason the economy blossomed into an even more diverse and thriving market; it was the reason that, even as the number of farmers decreased, burgeoning scientific knowledge allowed fewer farmers to feed hundreds, and even thousands, more people.
Farmers everywhere were offered the benefits of specialization of labor: soil scientists, irrigation experts, livestock specialists, veterinarians, railroad and later automobile transportation services, and better weather prediction combined to form a new world that had never been explored before.
Family farms, although quaint, perhaps nostalgic, still somewhat prevalent, and quite wonderful, are not what they used to be. Even “family farms” do not operate on their own knowledge and experience alone; specialists from every field combine in different ways, in a seamless system of free market harmony (revision: free market harmony stifled by the FDA) that brings food from all over the nation and around the world to your grocery store.
For over a hundred years, remote control technology has been developing and innovating at a remarkable rate.
The inspiration for remotely controlled devices has been around for a while: in 1898, Nikola Tesla demonstrated a radio-controlled boat, which he called a teleautomaton, during a show of electric technology at Madison Square Garden. Tesla’s patent (U.S. Patent 613,809) explains the equipment used.
Remote controls are taken for granted by most users in the United States. Readily available since the 1950s, remote controls became prevalent after Eugene Polley and the Zenith Radio Corporation sufficiently refined a wireless system.
The “Zenith Space Command,” one of the first wireless remotes, was mechanical and employed ultrasound. Each button on the remote produced an extremely high-pitched sound; dogs could hear it, but most humans could not. The technology was innovative, but it did cause some problems: music and household noises could affect the television.
By the 1970s, technology had improved by leaps and bounds: many companies found that remote controls could use infrared light. Number buttons from one to nine were added, making the devices easier to use. Thousands of other improvements soon gave the world remote controls compatible with different appliances. Modern remote controls include Bluetooth, overcoming the small ranges afforded by controls using sound or light. Video game consoles and other devices detect motion and interact with users through their movements.
Drones, airplanes, toys, phones, bombs (jihadists, too, have discovered remote controls), pumps, industrial mechanisms, garage doors, and gates all benefit from remote control technology, each in its own way.
Starting with pioneers such as Nikola Tesla, and with no end of them in sight, remote control innovators have changed the world, and all because nobody wanted to get up to change the television channel.
Because of a petty inconvenience, a new field of technology emerged. Big government expects citizens to suffer inconveniences every step of the way. Only a free market would attempt to fix what all other economic systems would not acknowledge as a problem in the first place.
Note: The authors realize that the American economic system has never been completely laissez-faire (no system in history has been quite that way), and for the past seventy years better fits the description of “interventionist.”