
Posts Tagged ‘ethics’

Rant of the Moment: War and Taxes

11/13/2010

I don’t always have an essay’s worth of thoughts on a particular topic. Even if I do, I don’t always have the time or energy to write long, thought-out, sometimes drawn-out essays. Sometimes this is due to laziness, and sometimes it is because I would rather write short soundbites about multiple topics than a long exposé on one. The rant of the moment serves that purpose. I call it a “rant” because I plan on focusing my attention on arguments or positions that annoy me or that I think are wrong or misguided in some way, and because the brevity of these comments will not usually allow for comprehensive analysis. Whether this will actually end up being a weekly thing, a monthly thing, or a one-time thing remains to be seen. So let me begin with:

1) The War Tax: why exactly is there no war tax in the United States? What is a war tax? It is what it sounds like: a tax placed upon the people of a particular society when they are at war. The purpose of a war tax is, I would argue, threefold. First, to pay for the war. Wars cost billions of dollars in today’s currency, and the money needs to come from somewhere. To be sure, some taxpayer money already goes to pay for military research, personnel, and so on. But that money is meant to cover the cost of a standing army in peacetime. The second purpose of a war tax is a matter of civic responsibility: a tax ensures that the entire society plays its part in the war. Wars are fought by countries–or so one would think–not by distant so-called “volunteer” armies. This leads to the third reason: complacency. It becomes too easy for the citizens of a nation not only to start a war but to let it drag on indefinitely if the war has no tangible influence on their lives (out of sight, out of mind). This is a recipe for mass injustice. I suspect that the current wars in Iraq and Afghanistan would either never have started or would have already seen their end if a war tax had been in place. At the very least, there would certainly be a much greater call, from both sides of the political spectrum, to end these wars and a much greater hesitancy when contemplating new ones. My proposal: a 10% increase to all income taxes during wartime (and perhaps an increase in, or implementation of, a federal sales tax).

2) While I’m on the topic, let me address one more issue relating to the military. As insinuated above, is there, and has there ever been, such a thing as a volunteer military? We certainly like to think so. It makes us feel good to think that those killed and injured in war were not coerced into serving but acted freely. But how free is it when the majority of those serving come from poor backgrounds? How free is it when someone sees military service as their only legitimate chance at a college education? Why aren’t the rich, preppy Harvard graduates signing up? If it were truly voluntary, shouldn’t we expect a roughly equal demographic distribution?

3) Why does it seem that when the public hears the word “philosophy” they think of either Socrates, Nietzsche, or Ayn Rand? WTF? I like Socrates, but guess what: he wrote nothing. Nietzsche seems to be talking nonsense most of the time. And while I don’t know much about Rand (other than her atheistic libertarianism and the mysterious linkage between her and Objectivism, which seems to me to break down to views accepted and made famous by thinkers long before her), I will bow to the satirical yet very insightful Philosophical Lexicon:

rand, n. An angry tirade occasioned by mistaking philosophical disagreement for a personal attack and/or evidence of unspeakable moral corruption. “When I questioned his second premise, he flew into a rand.” Also, to attack or stigmatise through a rand. “When I defended socialised medicine, I was randed as a communist.”

4) I recently learned that Sam Harris, famed atheist author, released a book entitled The Moral Landscape: How Science Can Determine Human Values. Without having read the book, but having read about it, I want to make a few tentative comments. First, I am a big fan of making philosophy accessible to the masses, so I praise Harris for that. Second, and less praiseworthy, the moment I read the subtitle, an uncontrollable “ugh,” or more like a “guh,” came out. Why do these atheistic thinkers have to be so scientistic, by which I mean committed to the idea that all phenomena are reducible to science? Two problems immediately stand out (and have always stood out since this type of project was first attempted): (1) science can’t answer why one ought to act morally in the first place, and, perhaps even more importantly, (2) it can’t, by itself, determine the moral worth of an action (it can’t determine the rightness or wrongness of any given action). From my understanding, Harris is a utilitarian, which helps him address (2). But even then it’s not science that determines human values but science + utilitarianism (an ethical theory that is itself not grounded in science).

Consider the following scenario: a young woman was kidnapped and tortured by some monstrous villain last week. Fortunately, she was saved recently and is now in the hospital, just about to undergo physical and psychological examination. Now, I think we can agree that the villain’s actions were evil, immoral, wrong, morally impermissible, and so on. But how can we tell? Well, let’s bring in the scientists. They examine her. They determine that while undergoing torture she was in immense physical pain. They also determine that she is likely to develop post-traumatic stress disorder. So was the villain’s action wrong? Yes, they say. So no need for the moralists to come in? Not at all. Bedazzled by their confidence, I pose to them the following question: you were able to determine, from knowledge of what happened, that she underwent immense physical and emotional pain, but where in your analysis was the wrongness? I then go on to explain to them the Is-Ought Problem–how empirical facts are not sufficient to determine the rightness or wrongness of an action. I then ask them one more question: why did you think the villain had committed a wrong action? Well, they say, because he harmed her and it’s wrong to harm someone in that way. Their response was rather vague, but I left it at that.

So what did these scientists mean by saying that it was wrong to harm someone in that way? Since they are scientists who enjoy quantification, I suspect they meant that the villainous actions were wrong because they caused the woman immense pain and suffering, and that when you act in such a way as to create more pain than pleasure (or happiness, or desire-satisfaction), you act wrongly. But they could have meant something else. They could have meant that the young woman’s rights were violated and that this, in itself, is what made the action wrong. Or perhaps they were focusing less on what happened to the woman and more on the negative character of the villain: his actions were wrong because they were produced by a malevolent character. Or perhaps they had more than one of these responses in mind. These responses are all derived from moral theories: utilitarianism, deontology, and virtue ethics, respectively. So while science could potentially help determine the moral worth of an action–by measuring pain states, for instance–determining which theory is ultimately correct, or most plausible, does not seem to be within science’s grasp. To think so is to make a category mistake. It is to make a false reduction. Ethics is no more reducible to science than history (think World War II) is to quantum mechanics.

There are other types of value as well, like epistemological values and aesthetic values, which certainly can’t be determined by science either. In fact, some 20th-century thinkers have long argued that science is guided by, and even grounded in, our values. Reason and observation are not sufficient to determine theory choice, so the argument goes. What ultimately determines theory choice once the data is in is our values–values like simplicity, coherence, and what some have called the elegance or beauty of a theory. It is these values that allow us to choose between two theories that are otherwise equally consistent with the data.

Ignorance, Knowledge, and Confidence

04/08/2010

Ignorance more frequently begets confidence than does knowledge.

This is one of my favorite quotes. It was said by Charles Darwin, but that doesn’t really matter: the essence of the quote is by no means original to him. The point is that the more you learn and the more knowledge you acquire, the less confident you become in making particular truth claims. But what sort of truth claims exactly? It cannot be just any truth claim: sometimes knowledge does beget more confidence than ignorance. For instance, the more I study a particular period in history–say, ancient Judaism–the more confident I become that certain events transpired on such-and-such a date rather than another (that the Second Temple was destroyed in 70 C.E., for example).

Darwin’s quote is less applicable to narrow factual claims such as these and more applicable to truth claims about broad theories. For instance, it is usually the ignorant and not the learned who think it obvious that socialism is an evil or inadequate political theory. It is usually the ignorant and not the learned who think that humans obviously have free will, or that “reality” just is “physical reality,” or that utilitarianism can account for all of our moral intuitions. It is usually the ignorant and not the learned who think that reconstructing particular events or individuals of the past is a straightforward matter. And although knowledge can often give us confidence that particular theories are false–six-day creationism, astrological theories–it rarely gives us confidence in asserting sweeping truth claims about particular theories or questions.

There may be more than one reason why ignorance tends to breed confidence. I will suggest the obvious one.

One explanation is that ignorant individuals are those who have only been exposed, intentionally or unintentionally, to one particular set of data or one particular interpretation of the data. These individuals have a narrow view of a particular topic: they have not been exposed to (or have failed to take seriously) all the relevant viewpoints, their arguments, and their counterarguments. In this sense we may substitute “narrow-minded” for “ignorant.” To be ignorant is just to be narrow-minded. To be ignorant is just to lack awareness of particular positions and/or the arguments in favor of them. As a consequence of their narrow-mindedness, these individuals see it as obvious that one side of the debate–the only side they have been exposed to or the only side they have considered fairly–is correct. It thus becomes easy to acquire confidence, misplaced though it may be, that one’s view of things is correct.

Although ignorance is usually not something to be desired, it is often unavoidable. Given that resources and time are limited, we can only know so much. The physicist may be ignorant about biology, and the biologist may be ignorant about ancient history. Although it wasn’t always the case, it is now impossible to be an expert in all fields of study. And it is for this reason that we ought to approach our intellectual endeavors with humility, or more specifically, epistemic humility (humility pertaining to our beliefs and our truth claims). And I do not think this is merely what Immanuel Kant called a hypothetical imperative. I am not saying we ought to have epistemic humility just because it would be rational to do so (i.e., because it would help us achieve some goal). My claim, I think, is a moral and universal one (Kant’s categorical imperative): we ought to have epistemic humility because it is the right thing to do, or in this case, to have. Misplaced confidence often transforms into arrogance, and arrogance isn’t bad just because it thwarts our goals; it is bad because it is a bad character trait to have. Whether or not that is correct, I think that on any ethical theory humility will be something we ought to seek and arrogance something we ought to avoid.

It is because the learned have a broader view of things that they have much less confidence in any one particular position. They are always weighing and reweighing different arguments, leaving themselves open to new considerations. And while they often end up landing on one side of a debate, there remains within them a full awareness that their own intuitions might be mistaken or that they may have overlooked a piece of data or a set of arguments. They understand full well that other individuals just as learned and just as intelligent have come to alternative conclusions. For these reasons they may even opt out of taking sides altogether, settling into a bemused agnosticism. Sometimes this agnosticism is temporary–slowly fading as one learns–and sometimes it remains indefinitely. However long the agnosticism remains, I suggest that it ought to be our starting point and ought, perhaps, to be our ending point more often than is customary. Perhaps just for rational reasons, but perhaps for moral ones too.