This week, I received notice that the Attorney General’s Office has randomly selected me as a potential juror. One of the enclosed documents listed several criminal code violations that may or may not disqualify one from jury duty. My personal favorite: “Pretending to practice witchcraft.” Presumably, actually practicing witchcraft is perfectly legal. We’ve made progress.
In The Work of Art in the Age of Mechanical Reproduction, Walter Benjamin wrote the following:
For centuries a small number of writers were confronted by many thousands of readers. This changed toward the end of the last century. With the increasing extension of the press, which kept placing new political, religious, scientific, professional, and local organs before the readers, an increasing number of readers became writers — at first, occasional ones. It began with the daily press opening to its readers space for “letters to the editor.” And there is today hardly a gainfully employed European who could not, in principle, find an opportunity to publish somewhere or other comments on his work, grievances, documentary reports, or that sort of thing. Thus, the distinction between author and public is about to lose its basic character. The difference becomes merely functional; it may vary from case to case. At any moment the reader is ready to become a writer.
Benjamin wrote this in the 1930s. I wonder what he would’ve thought about blogging and social media.
With relations between Russia and NATO deteriorating over the crisis in Ukraine — and the fact that I recently re-watched The Hunt for Red October — I’ve been thinking about nuclear war. Specifically, I’ve been thinking about the question: What’s the probability of nuclear war? Of course, it’s quite difficult to come up with probabilities for events that (thankfully, in this case) haven’t happened, but my initial intuition is that the probability is high relative to other extinction-level threats to humanity (e.g. asteroid impact, unfriendly AI). Indeed, it’s even a bit surprising that global nuclear war hasn’t already happened. Given the geopolitical climate in the latter half of the twentieth century, global nuclear war was a reasonable prediction to make (and many informed observers did, in fact, make it). Fortunately, it didn’t happen, but there were a few close calls, the Cuban Missile Crisis being the most obvious example.
However, many assume that since the Cold War is over, nuclear war is not a serious existential threat to humanity. This assumption is mistaken. Martin Hellman, who has written extensively on the subject, estimates the probability of nuclear war at 1% per year. That may sound low, but consider that the probability of an asteroid impact — like the one that wiped out the dinosaurs — is about one chance in a million per century. Despite those odds, many argue that the devastating cost of such an event justifies our taking steps to prevent it. This argument, if sound, applies a fortiori to nuclear war.
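Hellman’s figure compounds quickly. A quick back-of-the-envelope sketch makes the point, assuming (as a simplification not in Hellman’s own analysis) a constant, independent 1% risk each year:

```python
# Cumulative probability of at least one nuclear war, assuming a
# constant, independent annual risk (Hellman's 1%-per-year estimate).
# This independence assumption is a simplification for illustration.
def cumulative_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one occurrence over the given span."""
    return 1 - (1 - annual_prob) ** years

for span in (10, 50, 100):
    print(f"{span} years: {cumulative_risk(0.01, span):.0%}")
# 10 years: 10%
# 50 years: 39%
# 100 years: 63%
```

At 1% per year, the chance of avoiding nuclear war over a century is closer to a coin flip than to a sure thing — which is why a figure that “sounds low” shouldn’t be dismissed.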
In addition, the probability of nuclear war is not constant. For example, Hellman estimates that during the Cuban Missile Crisis the probability of nuclear war was between 10% and 50%. Some would argue that ‘mutually assured destruction’ acts as a deterrent against a first strike. However, the threat of retaliation almost failed as a deterrent on more than one occasion in the twentieth century, and it’s easy to imagine circumstances under which it could have failed. (Hellman relates one such scenario that actually occurred during the Cuban Missile Crisis.)
We might be tempted to think that because nuclear war hasn’t happened yet, it’s less likely to happen in the future. However, this is a textbook example of the gambler’s fallacy. Also, as Hellman points out, a significant percentage of the overall risk of nuclear war in the last half of the twentieth century occurred over just 13 days, a mere 0.07% of the total period. In other words, a crisis has a disproportionate effect on the risk of nuclear war. Since the cost of any nuclear exchange would be so high, it’s a risk we shouldn’t underestimate.
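The 0.07% figure is easy to verify, assuming “the last half of the twentieth century” means a 50-year span:

```python
# Sanity check: 13 days (the Cuban Missile Crisis) as a fraction of
# 50 years, the period Hellman's figure appears to assume.
days_in_period = 50 * 365.25
fraction = 13 / days_in_period
print(f"{fraction:.2%}")  # 0.07%
```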
As per the rules, all lines are taken from search terms in WordPress Stats – no editing allowed except for truncation at the beginning and the end of phrases.
grad school failure humanities
anti capitalist philosophers
dropping out of conventional life
v for vendetta mask
non-academic jobs for philosophy phds
sell mine kidney for money in facebook
skeptic: why have you forsaken me
anti theism better god didn’t exist
sympathy for judas
some unhealthy philosophies recommended by some self-help guides
existentialist who quoted what doesn’t kill me make
“it is better to be socrates dissatisfied than a pig satisfied.”
why do humans pursue philosophy
Police shot another man in Ferguson, MO.
Meanwhile, in Iceland…
I watched Jack Ryan: Shadow Recruit last night. I was a little disappointed. The actors did a competent job, but the story and the action were pretty generic. It also continues the Hollywood trend of badly written female characters. There’s one scene in particular that sums up this problem. Mild spoilers follow, but since I don’t recommend seeing this one, don’t be dissuaded from reading further.
Jack works for the CIA, which means lying to his girlfriend. She suspects that he’s cheating on her, so she confronts him. He finally tells her that he’s a spy, which makes her very happy since it means (supposedly) that he’s not cheating on her. (This inference doesn’t necessarily follow; he could be a spy and be cheating on her, but leave that aside.) What’s odd about this scene is that she’s so relieved that he hasn’t been cheating on her that she’s not at all upset that he works for the CIA. In other words, she would much rather he work for an organization that practices assassination, rendition, and torture than be having an affair. I don’t know what’s more disconcerting about this scene: that the writers think most women would realistically react that way or that they think working for the CIA absolves one of any wrongdoing.
I’ve defended an affirmative answer on this blog in the past. Of course, there’s a lot to unpack in the question. What does “useful” mean in this context? Does it mean “Will it help me get a job?” If so, what kind of job? An academic job? A non-academic job? When we ask if a philosophy degree specifically is useful, are we asking if the content of a philosophy degree is useful or if the skills acquired during its completion are useful? We should be clear about the difference.
I would still say that a philosophy degree is useful, but with a few caveats. The content of philosophy, for example, isn’t particularly useful with respect to finding a non-academic job. The fact is, nobody outside academia is going to pay you for knowing a lot about German philosophy (and even within academia, they probably won’t pay you much). However, several sectors outside academia will pay you for thinking critically, synthesizing information, and making abstract ideas concrete. In other words, the skills are useful; the content, not so much.
Of course, one can acquire those same skills majoring in subjects other than philosophy. In fact, depending on the person, other disciplines may be better at inculcating those skills. For example, if you’re bored by the content of philosophy, you’ll probably be less receptive to learning the skills. I learned from my teaching experience that it’s quite difficult to teach subject-independent critical thinking. I found it easier to teach critical thinking in my intro to philosophy class than in the generic critical thinking class, perhaps because the former used specifically philosophical examples whereas the latter tried to be too general. This leads me to question how much any subject fosters generic ‘critical thinking skills’ with broad applicability in the workplace. I don’t think philosophy enjoys any unique advantage in this regard.
Yes, philosophy majors do well on tests like the GRE, GMAT, and LSAT. But there’s not much evidence that studying philosophy causes students to do well on those tests as opposed to merely attracting students who are inclined to do well on those tests in the first place. It’s the old treatment effect versus selection effect. The evidence offered simply doesn’t support the causal claim. Also, my anecdotal evidence from teaching, for what it’s worth, indicates that students who do major in philosophy are smart to begin with and probably would be successful with or without a philosophy major.
All of this is to say that philosophy departments tend to overstate the usefulness of a philosophy degree. I understand that they need to sell philosophy to students, but if a student isn’t already smart, studious, and disciplined, then it’s not likely that studying philosophy is going to be very useful. I suspect that professional philosophers know this already — they’re too smart for the ‘correlation = causation’ fallacy — but they’re driven to justify their existence to the university administrators who control the purse strings. So they overstate their case.
These practices raise significant questions about the ethics of advertising philosophy as developing critical thinking skills or leading to higher wages. Marketing the degree this way may create false expectations. At the very least, more empirical research needs to be done, and the claims should be qualified. I don’t expect philosophy departments to be held responsible for every mistaken inference a student might make (caveat emptor, after all), but since philosophers value clarity, they should be especially wary of making claims that the general public could easily misinterpret.