The Illogic of Discrimination

Discrimination is a problem. It is a blight on society and a blemish on personal conduct. During the last one hundred or so years, the fight against discrimination has played an increasingly important role in political discourse, particularly on the left: against racism, sexism, homophobia, transphobia, and white privilege. Nowadays this discourse has its own name: identity politics. We both recognize and repudiate more kinds of discrimination than ever before.

This is as it should be. Undeniably many forms of discrimination exist; and discrimination—depriving people of rights and privileges without legitimate reason—is the enemy of equality and justice. If we are to create a more fair and open society, we must fight to reduce prejudice and privilege as much as we can. Many people are already doing this, of course; and identity politics is rightly here to stay.

And yet, admirable as the goals of identity politics are, I am often dissatisfied with its discourse. Specifically, I think we are often not clear about why certain statements or ideas are discriminatory. Often we treat certain statements as prejudiced because they offend people. I have frequently heard arguments of this form: “As a member of group X, I am offended by Y; therefore Y is discriminatory to group X.”

This argument—the Argument from Offended Feelings, as I’ll call it—is unsatisfactory. First, it is fallacious because it generalizes improperly. It is the same error someone commits when, after eating bad sushi once, they conclude that all sushi is bad: the argument takes one case and applies it to a whole class of things.

Even if many people, all belonging to the same group, find a certain remark offensive, it still is invalid to conclude that the remark is intrinsically discriminatory: this only shows that many people think it is. Even the majority may be wrong—such as the many people who believe that the word “niggardly” comes from the racial slur and is thus racist, while in reality the word has no etymological or historical connection with the racial slur (it comes from Middle English).

Subjective emotional responses should not be given an authoritative place in the question of prejudice. Emotions are not windows into the truth. They are of no epistemological value. Even if everybody in the world felt afraid of me, it would not make me dangerous. Likewise, emotional reactions are not enough to show that a remark is discriminatory. To do that, it must be shown how the remark incorrectly assumes, asserts, or implies something about a certain group.

In other words, we must keep constantly in mind the difference between a statement being discriminatory and a statement being merely offensive. Discrimination is wrong because it leads to unjust actions; offending people, on the other hand, is not intrinsically wrong. Brave activists, fighting for a good cause, often offend many.

Thus it is desirable to have logical tests, rather than just emotional responses, for identifying discriminatory remarks. I hope to provide a few tools in this direction. But before that, here are some practical reasons for preferring logical to emotional criteria.

Placing emotions, especially shared emotions, at the center of any moral judgment makes a community prone to fits of mob justice. If the shared feelings of outrage, horror, or disgust of a group are sufficient to condemn somebody, then we have the judicial equivalent of a witch-hunt: the evidence for the accusation is not properly examined, and the criteria that separate good evidence from bad are ignored.

Another practical disadvantage of giving emotional reactions a privileged place in judgments of discrimination is that it can easily backfire. If enough people say that they are not offended, or if emotional reactions vary from outrage to humor to ambivalence, then the community cannot come to a consensus about whether any remark or action is discriminatory. Insofar as collective action requires consensus, this is an obvious limitation.

What is more, accusations of discrimination are extremely easy to deny if emotional reactions are the ultimate test. The offended parties can simply be dismissed as “over-sensitive” (a “snowflake,” more recently), which is a common rhetorical strategy among the right (and is sometimes used on the left, too). The wisest response to this rhetorical strategy, I believe, is not to re-affirm the validity of emotions in making judgments of discrimination—this leads you into the same trap—but to choose more objective criteria. Some set of non-emotional, objective criteria for determining whether an action is discriminatory is highly desirable, I think, since there is no possibility of a lasting consensus without it.

So if these emotional tests can backfire, what less slippery test can we use?

To me, discriminatory ideas—and the actions predicated on these ideas—are discriminatory precisely because they are based on a false picture of reality: they presuppose differences that do not exist, and mischaracterize or misunderstand the differences that do exist. This is important, because morally effective action of any kind requires a basic knowledge of the facts. A politician cannot provide for her constituents’ needs if she does not know what they are. A lifeguard cannot save a drowning boy if he is not paying attention to the water. Likewise, social policies and individual actions, if they are based on a false picture of human difference, will be discriminatory, even with the best intentions in the world.

I am not arguing that discrimination is wrong purely because of this factual deficiency. Indeed, if I falsely think that all Hungarians love bowties, although this idea is incorrect and therefore discriminatory, this will likely not make me do anything immoral. Thus it is possible, in theory at least, to hold discriminatory views and yet be a perfectly ethical person. It is therefore necessary to distinguish between whether a statement is offensive (it upsets people), discriminatory (it is factually wrong about a group of people), and immoral (it harms people and causes injustice). The three categories do not necessarily overlap, in theory or in practice.

It is obvious that, in our society, discrimination is usually far more nefarious than believing that Hungarians love bowties. Discrimination harms people, sometimes kills people; and discrimination causes systematic injustice. My argument is that to prove any policy or idea is intrinsically discriminatory requires proving that it asserts something empirically false.

Examples are depressingly numerous. Legal segregation in the United States was based on the premise that there existed a fundamental difference between blacks and whites, a difference that justified different treatment and physical separation. Similarly, Aristotle argued that slavery was legitimate because some people were born slaves: they were intrinsically slavish. Now, both of these ideas are empirically false. They assert things about reality that are either meaningless, untestable, or contrary to the evidence; and so any actions predicated on these ideas will be discriminatory—and horrific.

These are not special cases. European antisemitism has always incorporated myths and lies about the Jewish people: tales of Jewish murders of Christian children, of widespread Jewish conspiracies, and so on. Laws barring women from voting and rules preventing women from attending universities were based on absurd notions about women’s intelligence and emotional stability. Name any group which has faced discrimination, and you can find a corresponding myth that attempts to justify the prejudice. Name any group which has dominated, and you can find an untruth to justify their “superiority.”

In our quest to determine whether a remark is discriminatory, it is worth taking a look, first of all, at the social categories themselves. Even superficial investigation will reveal that many of our social categories are close to useless, scientifically speaking. Our understanding of race in the United States, for example, gives an entirely warped picture of human difference. Specifically, the terms “white” and “black” have shifted in meaning and extent over time, and in any case were never based on empirical investigation.

Historically speaking, our notion of what it means to be “white” used to be far more exclusive than it is now, previously excluding Jews and Eastern Europeans. Likewise, as biological anthropologists never tire of telling us, there is more genetic variation within the continent of Africa than in the rest of the world. Our notions of “white” and “black” simply fail to do justice to the extent of genetic variation and intermixture that exists in the United States. We categorize people into a useless binary using crude notions of skin color. Any policy based on supposed innate, universal differences between “black” and “white” will therefore be based on a myth. Similar criticisms can be made of our common notions of gender and sexual orientation.

Putting aside these sloppy categories, discrimination may also be based on bad statistics and bad logic. Here are the three errors I think are most common in discriminatory remarks.

The first is to generalize improperly: to erroneously attribute a characteristic to a group. This type of error is exemplified by Randy Newman’s song “Short People,” when he says short people “go around tellin’ great big lies.” I strongly suspect that it is untrue that short people tell, on average, more lies than taller people, which makes this an improper generalization.

This is a silly example, of course. And it is worth pointing out that some generalizations about group differences are perfectly legitimate. It is true, for example, that Spanish people eat more paella than Japanese people. When done properly, generalizations about people are useful and often necessary. The problem is that we are often poor generalizers. We jump to conclusions—using the small sample of our experience to justify sweeping pronouncements—and we are apt to give disproportionate weight to conspicuous examples, thus skewing our judgments.

Our poor generalizations are, all too often, mixed up with more nefarious prejudices. Trump exemplified this when he tweeted a table of statistics of crime rates back in November of 2015. The statistics are ludicrously wrong in every respect. Notably, they claim that more whites are killed by blacks than by other whites, when in reality more whites are killed by other whites. (This shouldn’t be a surprise, since most murders take place within the same community; and since people of the same race tend to live in the same community, most murders are intra-racial.)
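
To make the arithmetic behind that parenthesis concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration—none of them are real crime statistics.

```python
# Toy calculation (all numbers invented) of why residential segregation alone
# tends to make most murders intra-racial.
p_within_community = 0.80     # hypothetical share of murders inside the victim's own community
p_same_race_local = 0.90      # hypothetical share of that community sharing the victim's race
p_same_race_elsewhere = 0.50  # hypothetical share for murders outside the community

p_intra_racial = (p_within_community * p_same_race_local
                  + (1 - p_within_community) * p_same_race_elsewhere)
print(f"Estimated share of intra-racial murders: {p_intra_racial:.0%}")  # prints 82%
```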

The second type of error involved in prejudice is to make conclusions about an individual based on their group. This is a mistake even when the generalizations about the group are accurate. Even if it were statistically true, for example, that short people lied more often than tall people, it would still be invalid to assume that any particular short person is a liar.

The logical mistake is obvious: even if a group has certain characteristics on average, that does not mean that every individual will have these characteristics. On average, Spaniards are shorter than me; but that does not mean that I can safely assume any Spaniard will be shorter than I am. On average, most drivers are looking out for pedestrians; but that doesn’t mean I can safely run into the road.

Of course, almost nobody, given a half-second to reflect, would make the mistake of believing that every single member of a given group has a certain quality. More often, people are just wildly mistaken about how likely a certain person is to have any given quality—most often, we greatly overestimate.

It is statistically true, for example, that Asian Americans tend to do well on standardized math and science exams. But this generalization, which is valid, does not mean you can safely ask any Asian American friend for help on your science homework. Even though Asian Americans do well in these subjects as a group, you should still expect to see many individuals who are average or below average. This is basic statistics—and yet this error accounts for a huge amount of racist and sexist remarks.
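
If the statistics here feel abstract, the following toy simulation may help. The score distributions are made up (they are not real exam data); the point is only that when one group’s mean is higher, a large share of that group still falls at or below the overall average.

```python
# Minimal simulation (invented numbers) of group averages vs. individuals:
# group A has a higher mean score, yet many of its members still score
# at or below the overall average.
import random

random.seed(0)
group_a = [random.gauss(105, 15) for _ in range(100_000)]  # hypothetical higher-mean group
group_b = [random.gauss(100, 15) for _ in range(100_000)]  # hypothetical reference group

overall_mean = sum(group_a + group_b) / (len(group_a) + len(group_b))
share_below = sum(score <= overall_mean for score in group_a) / len(group_a)
print(f"Share of the higher-scoring group at or below the overall average: {share_below:.0%}")
# Typically prints a value around 40% in this toy model.
```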

Aside from falsely assuming that every member of a group will be characterized by a generalization, the second error also results from forgetting intersectionality: the fact that any individual is inevitably a member of many, intersecting demographic groups. Income bracket, race, gender, sexual orientation, education, religion, and a host of other categories will apply to any individual. Predicting how the generalizations associated with these categories—which may often make contradictory predictions—will play out in any individual case, is close to impossible.

This is not even to mention all of the manifold influences on behavior that are not included in these demographic categories. Indeed, it is these irreducibly unique experiences, and our unique genetic makeup, that make us individuals in the first place. Humans are not just members of a group, nor even members of many different, overlapping groups: each person is sui generis.

In sum, humans are complicated—the most complicated things in the universe, so far as we know—and making predictions about individual people using statistical generalizations of broad, sometimes hazily defined categories, is hazardous at best, and often foolish. Moving from the specific to the general is fairly unproblematic; we can collect statistics and use averages and medians to analyze sets of data. But moving from the general to the specific is far more troublesome.

The third error is to assert a causal relationship where we only have evidence for correlation. Even if a generalization is valid, and even if an individual fits into this generalization, it is still not valid to conclude that an individual has a certain quality because they belong to a certain group.

Let me be more concrete. As we have seen, it is a valid generalization to say that Asian Americans do well on math and science exams. Now imagine that your friend John is Asian American, and also an excellent student in these subjects. Even in this case, to say that John is good at math “because he’s Asian” would still be illogical (and therefore racist). Correlation does not show causation.

First of all, it may not be known why Asian Americans tend to do better. And even if a general explanation is found—for example, that academic achievement is culturally prized and thus families put pressure on children to succeed—this explanation may not apply in your friend John’s case. Maybe John’s family does not pressure him to study and he just has a knack for science. (This would be the 2nd error again.)

Further, even if this general explanation did apply in your friend John’s case (his family pressures him to study for cultural reasons), the correct explanation for him being a good student still wouldn’t be “because he’s Asian,” but would be something more like “because academic achievement is culturally prized in many Asian communities.” In other words, the cause would be ultimately cultural, and not racial. (I mean that this causation would apply equally to somebody of European heritage being raised in an Asian culture, a person who would be considered “white” in the United States. The distinction between cultural and biological explanations is extremely important, since one posits only temporary, environmental differences while the other posits permanent, innate differences.)
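
The cultural-versus-racial point also lends itself to a small simulation. The sketch below uses purely invented parameters: test scores depend only on a hypothetical “family emphasis on academics” variable, which in this toy model happens to be more common in group X. Group X then scores higher on average even though group membership has no direct effect on the score, and the gap disappears once we compare families with the same level of emphasis.

```python
# Toy model (all parameters invented) of correlation without direct causation:
# group membership predicts scores only because it is correlated with a
# cultural variable that actually drives them.
import random

random.seed(1)

def simulate_person(group):
    # Emphasis on academics is more common in group X in this toy model,
    # and it is the only thing that feeds into the score.
    emphasis = random.random() < (0.7 if group == "X" else 0.4)
    score = random.gauss(100, 10) + (10 if emphasis else 0)
    return emphasis, score

people = [(g, *simulate_person(g)) for g in ("X", "Y") for _ in range(50_000)]

def mean(values):
    return sum(values) / len(values)

for g in ("X", "Y"):
    overall = mean([s for grp, e, s in people if grp == g])
    print(f"Group {g}, overall average score: {overall:.1f}")

# Comparing like with like: among families that emphasize academics,
# the group gap vanishes (and likewise among those that don't).
for g in ("X", "Y"):
    matched = mean([s for grp, e, s in people if grp == g and e])
    print(f"Group {g}, emphasis-on-academics families only: {matched:.1f}")
```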

In practice, these three errors are often run together. An excellent example of this is from Donald Trump’s notorious campaign announcement: “When Mexico sends its people, they’re not sending their best. … They’re sending people that have lots of problems, and they’re bringing those problems with us [sic.]. They’re bringing drugs. They’re bringing crime. They’re rapists.”

Putting aside the silly notion of Mexico “sending” its people (they come of their own accord), the statement is discriminatory because it generalizes falsely. Trump’s words give the impression that a huge portion, maybe even the majority, of Mexican immigrants are criminals of some kind—and this isn’t true. (In reality, the crime statistics for undocumented immigrants can put native citizens to shame.)

To be fair, Trump avoids the second error (concluding from the general to the particular), by admitting that he assumes some are “good people.” But he then falls into the third error by treating people as inherently criminal—the immigrants simply “are” criminals, as if they were born that way. Even if it were proven that Mexican immigrants had significantly higher crime rates, it would still be an open question why this was so. The explanation might have nothing to do with their cultural background or any previous history of criminality. It might be found, for example, that poverty and police harassment significantly increased criminality; and in this case the government would share some of the responsibility.

Donald Trump committed the second error in his infamous comments about Judge Gonzalo Curiel, who was overseeing a fraud lawsuit against Trump University. Trump attributed Curiel’s (perceived) hostility to his Mexican heritage. Trump committed a simple error of fact when he called Curiel “Mexican” (Curiel was born in Indiana), and then committed a logical fallacy when he concluded that the judge’s actions and attitudes were due to his being of Mexican heritage. Even if it were true (as I suspect it is) that Mexican-Americans, on the whole, don’t like Trump, it still doesn’t follow that any given individual Mexican-American doesn’t like him (2nd error); and even if Curiel did dislike Trump, it wouldn’t follow that it was because of his heritage (3rd error).

These errors and mistakes are just my attempt at an outline of how discrimination can be criticized on logical, empirical grounds. Certainly there is much more to be said in this direction. What I hoped to show in this piece was that this strategy is viable, and ultimately more desirable than using emotional reactions as a test for prejudice.

Discourse, agreement, and cooperation are impossible when people are guided by emotional reactions. We tend to react emotionally along the lines of factions—indeed, our emotional reactions are conditioned by our social circumstances—so privileging emotional reactions will only exacerbate disagreements, not help to bridge them. In any case, besides the practical disadvantages—which are debatable—I think emotional reactions are not reliable windows into the truth. Basing reactions, judgments, and criticisms on sound reasoning and dependable information is always a better long-term strategy.

For one, this view of discrimination provides an additional explanation for why prejudice is so widespread and difficult to eradicate. We humans have inherited brains that are constantly trying to understand our world in order to navigate it more efficiently. Sometimes our brains make mistakes because we generalize too eagerly from limited information (1st error), or because we hope to fit everything into the same familiar pattern (2nd error), or because we are searching for causes of the way things work (3rd error).

So the universality of prejudice can be partially explained, I think, by the need to explain the social world. And once certain ideas become ingrained in somebody’s worldview, it can be difficult to change their mind without undermining their sense of reality or even their sense of identity. This is one reason why prejudices can be so durable (not to mention that certain prejudices justify convenient, if morally questionable, behaviors, as well as signal a person’s allegiance to a certain group).

I should say that I do not think that discrimination is simply the result of observational or logical error. We absorb prejudices from our cultural environment; and these prejudices are often associated with divisive hatreds and social tension. But even these prejudices absorbed from the environment—that group x is lazy, that group y is violent, that group z is unreliable—inevitably incorporate some misconception of the social world. Discrimination is not just a behavior. Mistaken beliefs are involved—sometimes obliquely, to be sure—with any prejudice.

This view of prejudice—as caused, at least in part, by an incorrect picture of the world, rather than pure moral depravity—may also allow us to combat it more effectively. It is easy to imagine a person with an essentially sound sense of morality who nevertheless perpetrates harmful discrimination because of prejudices absorbed from her community. Treating such a person as a monster will likely produce no change of perspective; people are not liable to listen when they’re being condemned. Focusing on somebody’s misconceptions may allow for a less adversarial, and perhaps more effective, way of combating prejudice. And this is not to mention the obvious fact that somebody cannot be morally condemned for something they cannot help; and we cannot help being born into a community that instructs its members in discrimination.

Even if this view does not adequately explain discrimination, and even if it does not provide a more effective tool in eliminating it, this view does at least orient our gaze towards the substance rather than the symptoms of discrimination.

Because of their visibility, we tend to focus on the trappings of prejudice—racial slurs, the whitewashed casts of movies, the use of pronouns, and so on—instead of the real meat of it: the systematic discrimination—economic, political, judicial, and social—that is founded on an incorrect picture of the world. Signs and symptoms of prejudice are undeniably important; but eliminating them will not fix the essential problem: that we see differences that aren’t really there, we assume differences without having evidence to justify these assumptions, and we misunderstand the nature and extent of the differences that really do exist.

On the Quarter-Life Crisis

From College to Chaos

In the modern world, there is a certain existential dread that comes with being in your twenties. Certainly this is true in my case.

This dread creeps up on you in the years of struggle, confusion, and setbacks that many encounter after graduating university. There are many reasons for this.

One is that college simply does not prepare you for the so-called “real world.” In college, you know what you have to do, more or less. Every class has a syllabus. Every major has a list of required courses. You know your GPA and how many credits you need to graduate.

College lacks some of that uncertainty and ambiguity that life—particularly life as a young adult—so abundantly possesses. There is a clear direction forward and it’s already been charted out for you. You know where you’re going and what you have to do to get there.

Another big difference is that college life is fairly egalitarian. Somebody might have a cuter boyfriend, a higher GPA, a richer dad, or whatever, but in the end you’re all just students. As a consequence, envy doesn’t have very much scope. Not that college students don’t get envious, but there are far fewer things, and less serious things, to get envious about. You don’t scroll through your newsfeed and see friends bragging about promotions, proposals, babies, and paid vacations.

There’s one more big difference: nothing you do in college is potentially a big commitment. The biggest commitment you have to make is what to major in; and even that is only a commitment for four years or less. Your classes only last a few months, so you don’t have to care much about professors. You are constantly surrounded by people your age, so friendships and relationships are easy to come by.

Then you graduate, and you’re thrown into something else entirely. Big words like Career and Marriage and Adulthood start looming large. You start asking yourself questions. When you take a job, you ask yourself “Can I imagine doing this for the rest of my life?” When you date somebody, you say to yourself “Can I imagine living with this person the rest of my life?” If you move to another city, you wonder “Could I make a home here?”

You don’t see adults as strange, foreign creatures anymore, but as samples of what you might become. You are expected, explicitly and implicitly, to become an adult yourself. But how? And what type of adult? You ask yourself, “What do I really want?” Yet the more you think about what you want, the less certain it becomes. It’s easy to like something for a day, a week, a month. But for the rest of your life? How are you supposed to commit yourself for such an indefinitely long amount of time?

Suddenly your life is not just potential anymore. Very soon, it will become actual. Instead of having a future identity, you will have a present identity. This is really frightening. When your identity is only potential, it can take on many different forms in your imagination. But when your identity is present and actual, you lose the deliciousness of endless possibility. You are narrowed down to one thing. Now you have to choose what that thing will be. But it’s such a hard choice, and the clock keeps ticking. You feel like you’re running out of time. What will you become?

The American Dream

A few weeks ago I was taking a long walk, and my route took me through a wealthy suburban neighborhood. Big, stately houses with spacious driveways, filled with expensive cars, surrounded me on all sides. The gardens were immaculate; the houses had big lawns with plenty of trees, giving them privacy from their neighbors. And they had a wonderful view, too, since the neighborhood was right on the Hudson River.

I was walking along, and I suddenly realized that this is what I’m supposed to want. This is the American Dream, right? A suburban house, a big lawn, a few cars and a few kids.

For years I’d been torturing myself with the idea that I would never achieve success. Now that I was looking at success, what did it make me feel? Not much. In fact, I didn’t envy the people in those houses. It’s not that I pitied them or despised them. I just couldn’t imagine that their houses and cars and their view of the river, wonderful as it all was, made them appreciably happier than people without those things.

So I asked myself, “Do I really want all these things? A house? A wife? Kids?” In that moment, the answer seemed to be “No, I don’t want any of that stuff. I want my freedom.”

Yet nearly everybody wants this stuff—eventually. And I have a natural inclination to give people some credit. I don’t think folks are mindless cultural automatons who simply aspire to things because that’s how they’ve been taught. I don’t think everybody who wants conventional success is a phony or a sell-out.

Overwhelmingly, people genuinely want these things when they reach a certain point in their lives. I’m pretty certain I will want them, too, and maybe soon. The thing that feels uncomfortable is that, in the meantime, since I expect to want these things, I feel an obligation to work towards them, even though they don’t interest me now. Isn’t that funny?

Equations of Happiness

One of the reasons that these questions can fill us with dread is that we absorb messages from society about the definition of happiness.

One of these messages is about our career. Ever since I was young, I’d been told “Follow your passion!” or “Follow your dreams!” The general idea is that, if you make your passion into your career, you will be supremely happy, since you’ll get paid for what you like doing. Indeed, the phrase “Get paid for what you like doing” sometimes seems like a pretty decent definition of happiness.

Careers aren’t the only thing we learn to identify with happiness. How many stories, novels, and movies end with the boy getting the girl, and the couple living happily ever after? In our culture, we have a veritable mythology of love. Finding “the one,” finding your “perfect match,” and in the process finding the solution to life—this is a story told over and over again, until we subconsciously believe that romantic love is the essential ingredient of life.

Work and Love are two of the biggest, but there are so many other things that we learn to identify with happiness. Having a perfect body, being beautiful and fit. Beating others in competitions, winning contests, achieving things. Being cool and popular, getting accepted into a group. Avoiding conflict, pleasing others. Having the right opinions, knowing the truth. This list only scratches the surface.

In so many big and little ways, in person and in our media, we equate these things with happiness and self-worth. And when we even suspect that we don’t have them—that we might not be successful, popular, right, loved, or whatever—then we feel a sickening sense of groundlessness, and we struggle to put that old familiar ground beneath our feet.

Think of all the ways that you measure yourself against certain, self-imposed standards. Think of all the times you chastise yourself for falling short, judge yourself harshly for failing to fit this self-image you’ve built up, or fall into a dark hole when something doesn’t go right. Think about all the things you equate with happiness.

Now, think about how you judge your good friends. Do you look down on them if they aren’t successful? Do you think they’re worthless if they didn’t find “the one”? Do you spend much time judging them for their attractiveness, popularity, or coolness? Do you like them less if they lose or fail? If someone else rejects them, do you feel more prone to reject them too?

I’d wager the answer to all these questions is “No.” So why do we treat ourselves this way?

Is it the Money?

There’s no question that the quarter-life crisis is partly a product of privilege. It takes a certain amount of affluence to agonize over what your “calling” will be or who will turn out to be “the one.” Lots of people have to pay the rent; and their work and romantic options are shaped by that necessity. When you’re struggling to keep your head above water, your anxiety is more practical than existential. This thought makes me feel guilty for complaining.

But affluence is only part of it. The other part is expectation. Many of us graduated full of hope and optimism, and found ourselves in a limping economy, dragging behind us a big weight of college debt. Just when we were supposed to be hitting the ground running, we were struggling to find jobs and worrying how to pay for the degrees we had just earned. And since many of us had been encouraged—follow your dreams!—to study interesting but financially impractical things, our expensive degrees seemed to hurt us more than help us.

This led to a lot of bitterness. My generation had been told that we could be anything we wanted. Just do the thing you’re passionate about, and everything will follow. That was the advice. But when we graduated, it seemed that we’d been conned into paying thousands of dollars for a worthless piece of paper. This led to a lot of anger and disenchantment among twenty-somethings, which is why, I think, so many of us gravitated towards Bernie Sanders. Our parents had a car, a house, and raised a family, while we were living at home, working at Starbucks, and using our paychecks to pay for our anthropology degree.

For a long while I used my sense of injustice to justify my angst. I had the persistent feeling that it wasn’t fair, and I went back and forth between being angry at myself and being angry at the world.

Nevertheless, I think that, for most middle class people, financial factors don’t really explain the widespread phenomenon of the quarter-life crisis.

I realized this when I started my first decent-paying job. I wasn’t making a lot of money, you understand, but I was making more than enough for everything I wanted. The result? I felt even worse. When I took care of the money problem, the full weight of the existential crisis hit me. I kept asking myself, “Can I really imagine doing this forever?” I thought about my job, and felt empty. And this feeling of emptiness really distressed me, because I thought my job was supposed to be exciting and fulfilling.

This was a valuable lesson for me. I expected the money to calm me and make me happy, and yet I only felt worse and worse. Clearly, the problem was with my mindset and not my circumstances. How to fix it?

From Crisis to Contentment

Well, I’m not out of it yet. But I have made some progress.

First, I think it’s important to take it easy on ourselves. We are so prone to hold ourselves up to certain self-imposed standards, or some fixed idea of who we are. We also like to compare ourselves with others, feeling superior when we’re doing “better,” and worthless when we’re doing “worse.” Take it easy with all that. All of these standards are unreal. You tell yourself you’re “supposed” to be doing such and such, making this much money, and engaged by whatever age. All this is baloney. You aren’t “supposed” to be or to do anything.

Bertrand Russell said: “At twenty men think that life will be over at thirty. I, at the age of fifty-eight, can no longer take that view.” He’s right: There is nothing magical about the age of thirty. There is no age you pass when you don’t have to worry about money, about your boss, about your partner, about your health. There will always be something to worry about. There will always be unexpected curveballs that upset your plans. Don’t struggle to escape the post-college chaos; try to accept it as normal.

Don’t equate your happiness or your self-worth with something external. You are not your job, your hobby, your paycheck, your body, your friend group, or your relationship. You aren’t a collection of accomplishments or a Facebook profile. You’re a person, and you have worth just because you’re a person, pure and simple. Everything else is incidental.

If you want to be rich, famous, loved, successful—that’s fine, but that won’t make you any better than other people. It might not even make you happier. Don’t worry so much about putting ground under your feet. Don’t fret about establishing your identity. You will always be changing. Life will always be throwing problems at you, and sometimes things will go wrong. Try to get comfortable with the impermanence of things.

Don’t look for the “meaning” of life. Don’t look for “the answer.” Look for meaningful experiences of being alive. Appreciate those moments when you feel totally connected with life, and try to seek those moments out. Realize that life is just a collection of moments, and not a novel with a beginning, middle, and end.

These moments are what bring you happiness, not the story you tell about yourself. So you don’t have to feel existential dread about these big Adult Questions of Love and Work. It’s important to find a good partner and a good job. These things are very nice, but they’re not what give your life value or define you or make life worth living. Treat them as practical problems, not existential ones. Like any practical problem, they might not have a perfect solution, and you might fail—which is frustrating. But failure won’t make you worthless, just like success won’t legitimize your life.

One last thing. Stop caring about what other people think. Who cares? What do they know? Be a friend to yourself, be loyal to yourself. Every time you judge yourself, you betray yourself. In a thousand little ways throughout the day, we reject our experiences and our world. Don’t reject. Accept. Stand steadfastly by yourself as you ride down the steady stream of thoughts, feelings, flavors, colors, sounds, mistakes, accidents, failures, successes, and petty frustrations that make up life as we know it.

On Egotism and Education

A while ago a friend asked me an interesting question.

As usual, I was engrossed in some rambling rant about a book I was reading—no doubt enlarging upon the author’s marvelous intellect (and, by association, my own). My poor friend, who is by now used to this sort of thing, suddenly asked me:

“Do you really think reading all these books has made you a better person?”

“Well, yeah…” I stuttered. “I think so…”

An awkward silence took over. I could truthfully say that reading had improved my mind, but that wasn’t the question. Was I better? Was I more wise, more moral, calmer, braver, kinder? Had reading made me a more sympathetic friend, a more caring partner? I didn’t want to admit it, but the answer seemed to be no.

This wasn’t an easy thing to face up to. My reading was a big part of my ego. I was immensely proud, indeed even arrogant, about all the big books I’d gotten through. Self-study had strengthened a sense of superiority.

But now I was confronted with the fact that, however much more knowledgeable and clever I had become, I had no claim to superiority. In fact—although I hated even to consider the possibility—reading could have made me worse in some ways, by giving me a justification for being arrogant.

This phenomenon is by no means confined to myself. Arrogance, condescension, and pretentiousness are ubiquitous qualities in intellectual circles. I know this both at first- and second-hand. While lip-service is often given to humility, the intellectual world is rife with egotism. And often I find that the more well-educated someone is, the more likely they are to assume a condescending tone.

This is the same condescending tone that I sometimes found myself using in conversations with friends. But condescension is of course more than a tone; it is an attitude towards oneself and the world. And this attitude can be fostered and reinforced by habits you pick up through intellectual activity.

One of these habits is argumentativeness, which for me is most closely connected with reading philosophy. Philosophy is, among other things, the art of argument; and good philosophers are able to bring to their arguments a level of rigor, clarity, and precision that is truly impressive. The irony here is that there is far more disagreement in philosophy than in any other discipline. To be fair, this is largely due to the abstract, mysterious, and often paradoxical nature of the questions philosophers investigate—questions that resist even the most thorough analysis.

Nevertheless, given that their professional success depends upon putting forward the strongest argument to a given problem, philosophers devote a lot of time to picking apart the theories and ideas of their competitors. Indeed, the demolition of a rival point of view can assume supreme importance. A good example of this is Gilbert Ryle’s Concept of Mind—a brilliant and valuable book, but one that is mainly devoted to debunking an old theory rather than putting forward a new one.

This sort of thing isn’t confined to philosophy, of course. I have met academics in many disciplines whose explicit goal is to quash another theory rather than to provide a new one. I can sympathize with this, since proving an opponent wrong can feel immensely powerful. To find a logical fallacy, an unwarranted assumption, an ambiguous term, an incorrect generalization in a competitor’s work, and then to focus all your firepower on this structural weakness until the entire argument comes tumbling down—it’s really satisfying. Intellectual arguments can have all the thrill of combat, with none of the safety hazards.

But to steal a phrase from the historian Richard Fletcher, disputes of this kind usually generate more heat than light. Disproving a rival claim is not the same thing as proving your own claim. And when priority is given to finding the weaknesses rather than the strengths of competing theories, the result is bickering rather than the pursuit of truth.

To speak from my own experience, in the past I’ve gotten to the point where I considered it a sign of weakness to agree with somebody. Endorsing someone else’s conclusions without reservations or qualifications was just spineless. And to fail to find the flaws in another thinker’s argument—or, worse yet, to put forward your own flawed argument—was simply mortifying for me, a personal failing. Needless to say this mentality is not desirable or productive, either personally or intellectually.

Besides being argumentative, another condescending attitude that intellectual work can reinforce is name-dropping.

In any intellectual field, certain thinkers reign supreme. Their theories, books, and even their names carry a certain amount of authority; and this authority can be commandeered by secondary figures through name-dropping. This is more than simply repeating a famous person’s name (although that’s common); it involves positioning oneself as an authority on that person’s work.

Two books I read recently—Mortimer Adler’s How to Read a Book, and Harold Bloom’s The Western Canon—are prime examples of this. Both authors wield the names of famous authors like weapons. Shakespeare, Plato, and Newton are bandied about, used to cudgel enemies and to cow readers into submission. References to famous thinkers and writers can even be used as substitutes for real argument. This is the infamous argument from authority, a fallacy easy to spot when explicit, but much harder when used in the hands of a skilled name-dropper.

I have certainly been guilty of this. Even while I was still an undergraduate, I realized that big names have big power. If I even mentioned the names of Dante or Milton, Galileo or Darwin, Hume or Kant, I instantly gained intellectual clout. And if I found a way to connect the topic under discussion to any famous thinker’s ideas—even if that connection was tenuous and forced—it gave my opinions weight and made me seem more “serious.” Of course I wasn’t doing this intentionally to be condescending or lazy. At the time, I thought that name-dropping was the mark of a dedicated student, and perhaps to a certain extent it is. But there is a difference between appropriately citing an authority’s work and using their work to intimidate people.

There is a third way that intellectual work can lead to condescending attitudes, and that is, for lack of a better term, political posturing. This particular attitude isn’t very tempting for me, since I am by nature not very political, but this habit of mind is extremely common nowadays.

By political posturing I mean several related things. Most broadly, I mean when someone feels that people (himself included) must hold certain beliefs in order to be acceptable. These can be political or social beliefs, but they can also be more abstract, theoretical beliefs. In any group—be it a university department, a political party, or just a bunch of friends—a certain amount of groupthink is always a risk. Certain attitudes and opinions become associated with the group, and they become a marker of identity. In intellectual life this is a special hazard because proclaiming fashionable and admirable opinions can replace the pursuit of truth as the criterion of acceptability.

At its most extreme, this kind of political posturing can lead to a kind of gang mentality, wherein disagreement is seen as evil and all dissent must be punished with ostracism and mob justice. This can be observed in the Twitter shame campaigns of recent years, but a similar thing happens in intellectual circles.

During my brief time in graduate school, I felt an intense and ceaseless pressure to espouse leftist opinions. This pressure seemed to be ubiquitous: students and professors sparred with one another, in person and in print, by trying to prove that their rivals were not genuinely right-thinking (or “left-thinking,” as the case may be). Certain thinkers could not be seriously discussed, much less endorsed, because their works had intolerable political ramifications. Contrariwise, questioning the conclusions of properly left-thinking people could leave you vulnerable to accusations about your fidelity to social justice or economic equality.

But political posturing has a milder form: know-betterism. Know-betterism is political posturing without the moral outrage, and its victims are smug rather than indignant.

The book Language, Truth, and Logic by A.J. Ayer comes to mind, wherein the young philosopher, still in his mid-twenties, simply dismisses the work of Plato, Aristotle, Spinoza, Kant and others as hogwash, because it doesn’t fit into his logical positivist framework.

Indeed, logical positivism is an excellent example of the pernicious effects of know-betterism. In retrospect, it seems incredible that so many brilliant people endorsed it, because logical positivism has crippling and obvious flaws. Yet not only did people believe it; they thought it was “The Answer”—the solution to every philosophical problem—and considered anyone who thought otherwise a crank or a fool, somebody who couldn’t see the obvious. This is the danger of groupthink: when everyone “in the know” believes something, it can seem obviously right, regardless of the strength of the ideas.

The last condescending attitude I want to mention is rightness—the obsession with being right. Now of course there’s nothing wrong with being right. Getting nearer to the truth is the goal of all honest intellectual work. But to be overly preoccupied with being right is, I think, both an intellectual and a personal shortcoming.

As far as I know, the only area of knowledge in which real certainty is possible is mathematics. The rest of life is riddled with uncertainty. Every scientific theory might, and probably will, be overturned by a better theory. Every historical treatise is open to revision when new evidence, priorities, and perspectives arise. Philosophical positions are notoriously difficult to prove, and new refinements are always around the corner. And despite the best efforts of the social sciences, the human animal remains a perpetually surprising mystery.

To me, this uncertainty in our knowledge means that you must always be open to the possibility that you are wrong. The feeling of certainty is just that—a feeling. Our most unshakeable beliefs are always open to refutation. But when you have read widely on a topic, studied it deeply, thought it through thoroughly, it gets more and more difficult to believe that you are possibly in error. Because so much effort, thought, and time has gone into a conclusion, it can be personally devastating to think that you are mistaken.

This is human, and understandable, but it can also clearly lead to egotism. For many thinkers, it becomes their goal in life to impose their conclusions upon the world. They struggle valiantly for the acceptance of their opinions, and grow resentful and bitter when people disagree with or, worse, ignore them. Every exchange thus becomes a struggle to push your views down another person’s throat.

This is not only an intellectual shortcoming—since it is highly unlikely that your views represent the whole truth—but it is also a personal shortcoming, since it makes you deaf to other people’s perspectives. When you are sure you’re right, you can’t listen to others. But everyone has their own truth. I don’t mean that every opinion is equally valid (since there are such things as uninformed opinions), but that every opinion is an expression, not only of thoughts, but of emotions, and emotions can’t be false.

If you want to have a conversation with somebody instead of giving them a lecture, you need to believe that they have something valuable to contribute, even if they are disagreeing with you. In my experience it is always better, personally and intellectually, to try to find some truth in what someone is saying than to search for what is untrue.

Lastly, being overly concerned with being right can make you intellectually timid. Going out on a limb, disagreeing with the crowd, putting forward your own idea—all this puts you at risk of being publicly wrong, and thus will be avoided out of fear. This is a shame. The greatest adventure you can take in life and thought is to be extravagantly wrong. Name any famous thinker, and you will be naming one of the most gloriously incorrect thinkers in history. Newton, Darwin, Einstein—every one of them has been wrong about something.

For a long time I have been the victim of all of these mentalities—argumentativeness, name-dropping, political posturing, know-betterism, and rightness—and to a certain extent, probably I always will. What makes them so easy to fall into is that they are positive attitudes taken to excess. It is admirable and good to subject claims to logical scrutiny, to read and cite major authorities, to advocate for causes you think are right, to respect the opinions of your peers and colleagues, and to prioritize getting to the truth.

But taken to excess, these habits can lead to egotism. They certainly have with me. This is not a matter of simple vanity. Not only can egotism cut you off from real intimacy with other people, but it can lead to real unhappiness, too.

When you base your self-worth on beating other people in argument, being more well read than your peers, being on the morally right side, being in the know, being right and proving others wrong, then you put yourself at risk of having your self-worth undermined. To be refuted will be mortifying, to be questioned will be infuriating, to be contradicted will be intolerable. Simply put, such an attitude will put you at war with others, making you defensive and quick-tempered.

An image that springs to mind is of a giant castle with towering walls, a moat, and a drawbridge. On the inside of this castle, in the deepest chambers of the inner citadel, is your ego. The fortifications around your ego are your intellectual defenses—your skill in rhetoric, logic, argument, debate, and your impressive knowledge. All of these defenses are necessary because your sense of self-worth depends on certain conditions: being perceived, and perceiving oneself, as clever, correct, well-educated, and morally admirable.

Intimacy is difficult in these circumstances. You let down the drawbridge for people you trust, and let them inside the walls. But you test people for a long time before you get to this point—making sure they appreciate your mind and respect your opinions—and even then, you don’t let them come into the inner citadel. You don’t let yourself be totally vulnerable, because even a passing remark can lead to crippling self-doubt when you equate your worth with your intellect.

Thus the fundamental mindset that leads to all of the bad habits described above is the belief that being smart, right, or knowledgeable is the source of your worth as a human being. This is dangerous, because it means that you constantly have to reinforce the idea that you have all of these qualities in abundance. Life then becomes a constant performance, an act for others and for yourself. And because a part of you knows that it’s an act—a voice you try to ignore—it also leads to considerable bad faith.

As for the solution, I can only speak from my own experience. The trick, I’ve found, is to let down my guard. Every time you defend yourself you make yourself more fragile, because you tell yourself that there is a part of you that needs to be defended. When you let go of your anxieties about being wrong, being ignorant, or being rejected, your intellectual life will be enriched. You will find it easier to learn from others, to consider issues from multiple points of view, and to propose original solutions.

Thus I can say that reading has made me a better person, not because I think intellectual people are worth more than non-intellectuals, but because I realized that they aren’t.

On Justice

A question that I often ask myself, especially now during the election season, is this: What makes a society just? Specifically, what are the criteria that determine whether a law is legitimate or a government is principled? I do not intend this question to be a legal question; whether something is constitutional is for me a secondary matter. The primary matter is this: What are the values on which a constitution is based that make it a worthy document?

Justice consists of the standards by which we determine whether a society is fair.

Justice is always a meta-standard—a standard applied to other standards that allows us to determine whether these other standards are worthy. For example, our standards of justice can be applied to our standards of crime and punishment, to determine whether they need changing. But justice deals with other things besides crime and punishment. Economic justice deals with the fairness of the distribution of economic means; and social justice deals with the fairness of the treatment of different demographic groups in the society. All of these, however, deal with the fairness of certain standards, whether they are the standards for determining whether someone should go to jail, make a lot of money, or be treated differently.

The crux of the matter, of course, is what we mean by fair. This is what philosophers, politicians, and virtually everyone else disagree about. The problem is that fairness often seems like a self-evident concept, when in reality it is far from that.

To start, what seems fair or unfair can depend very much on your situation. Let us say a lion managed to grab a gazelle and is about to eat it. From the gazelle’s point of view, this situation is monstrously unfair. The gazelle didn’t do anything to the lion, nor anything to anyone else, so why should it be the one being eaten? From the lion’s point of view, on the other hand, the situation is absolutely fair. The lion was born with certain dietary needs; it has to hunt and kill to survive; it picked the gazelle it found easiest to catch. What’s unfair about that? Should the lion starve itself?

Of course, lions and gazelles have no concept of fairness, so they lack this particular problem. But we have to deal with it. Finding a standard that can satisfy everyone in a given community is, I think, impossible. Every standard of fairness is bound to disappoint and embitter some. This is the basic tragedy of life. We can mourn this, but also learn from it. Since disappointment is unavoidable, and since perspective colors our notions of fair and unfair, it is clear that emotion alone cannot be the basis of a consistent standard of justice. We need something more objective, a clear set of principles that can be applied to any situation.

Let us start, as so many philosophers have before, with people in a so-called State of Nature. By this, I only mean people living without community of any sort—without rules, laws, or government, each person looking out for themself.

In this hypothetical (and wholly imaginary) situation, every person is maximally free. The only restrictions on people’s actions exist through the necessities of life. If they want to survive, the natural people must devote time and energy to finding food and building shelter for themselves. If they choose not to kill another person, it might be because they want that person’s help or because they are afraid of vengeance; but not because of moral scruples or fear of legal persecution. If a natural person finds a loner in the forest and decides to kill him and take his stuff, there might not be consequences. It is up to each individual what to do. Their every action is thus a calculated risk.

There are clearly some advantages to this hypothetical state of affairs. Most conspicuously, each person is a master of themself and does not have to listen to anybody. They can live where they like, how they like; they can eat, sleep, and play whenever they wish. But the disadvantages are also considerable. The main problem is lack of security. Without laws or police, you would always need to fear your neighbor; without a social safety net, you would always live at the mercy of the elements. It would be a life of maximal freedom and constant danger.

To repeat, I am not saying that this ‘Natural State’ ever existed; to the contrary, I do not think humans ever existed without communities, and I am only calling it ‘natural’ in keeping with the philosophical tradition. I am merely using this scenario to illustrate what a situation of maximal freedom would look like—wherein the only checks on a person’s actions are due to natural, and not social, constraints; wherein bare necessity, and not rules, custom, or law, are what guide life.

Now let us imagine what will happen if the people decide to get together and form a little community. This will clearly entail some changes. Most relevantly for my purposes, the people will have to start developing ways of organizing their actions. This is because, as they will soon discover, their unbridled desires will inevitably come into conflict.

If, for example, there are 10 apples and 10 people, it might be the case that each person wants all ten for themself. But when each of them tries to take all the apples, they will of course start arguing. If they are going to continue living together, they need to develop a solution.

Perhaps three of them fashion spears and shields, and use their weapons to impose their will on the other seven. Thus an oligarchy emerges, in which the three masters make the seven slaves gather apples for them, leaving the slaves only the cores for meager sustenance. The masters punish disobedience, hunt down deserters, and grow fat while the others wither away.

This is the classic Might Makes Right solution to the problem of human society. Thinkers since Plato have been grappling with it, and as long as humans live together it will be a constant temptation. Nietzsche would say that a society wherein the strong dominate the weak is the fairest society of all—fairness itself, he might say, since people are being divided due to the natural law of strength and not the artificial law of custom. The devotees of Realpolitik—Thucydides and Machiavelli, to name just two—find this dominance of the strong over the weak inevitable; and the Social Darwinians go further and find it desirable.

Admittedly, the use of force does solve the problem of conflict, albeit brutally. A powerful few, by violent means, can indeed reduce infighting enough to produce a stable society. But I think most people instinctively recoil from the solution as unjust. After all, being born strong, violent, and domineering does not make you any more deserving of power than being born weak, meek, and kindhearted.

But let us take a closer look. In my society—namely, the modern West—we have attempted (in theory) to create a meritocracy, wherein the most intelligent and innovative people are able to become wealthy. But is a meritocracy of mind any more fair than a meritocracy of muscle? Is it any better to reward the clever than the cruel? Perhaps both systems are unfair, since they reward people based on an attribute that is not within their control. After all, you can’t choose whether you’re born a genius any more than whether you’re born a warrior. Yes, rewarding the bright involves less bruising and bloodshed than rewarding the belligerent; but is it, in the strict sense, any more fair?

I think so, for the following reason. In a meritocracy of intelligence (in theory, at least) everybody possesses the same rights; whereas in a society governed by Might Makes Right, the rulers have different rights than the ruled. A simple example will suffice. If you agree to play chess with your friend, probably you won’t complain of injustice if your friend easily defeats you. Both of you are playing by the same rules, and your friend, either through practice or natural talent, simply operated within these rules more effectively than you did. But if your friend took out a knife, held it to your throat, and declared himself the winner, this would be clearly unfair, because your friend gave himself an extra dimension of power that you lacked.

Admittedly, a true advocate of Might Makes Right can, with total consistency, insist that the situation is still fair, since you could have thought of using a knife, too. Your friend had an idea you didn’t; what’s unfair about that? Using this logic, any rule-breaking can be regarded as fair, since anybody could have thought of any breach of the rules. To repeat, ‘fairness’ is a slippery concept; and some purists would insist that the only real fairness exists in the law of survival of the fittest. After all, aren’t all the rules of society just artificial contrivances used by the weak to entrap the strong? Many have thought so.

All I can say is that the advocates of Nietzsche’s Will to Power and Social Darwinism do indeed have a self-consistent worldview that cannot be refuted without begging the question. Personally, I find a world governed by Might Makes Right immoral. All moral systems, in my view, must exist between equals and benefit each individual who takes part in them. Thus a society based on violent coercion cannot be moral—at least for me—since the members abide by the rules out of fear and not self-interest. Granted, a Social Darwinian or a Nietzschean would have a very different concept of morality, so again my criticism is still begging the question. All I can plead, therefore, is that I find Might Makes Right distasteful; so while acknowledging its logical appeal I will focus on other solutions to the problems of human society.

Now let us return to the problem of the ten people and their ten apples. We have considered and rejected the possibility of violent coercion, though my rejection was personal rather than philosophical. (The thorny problem with ‘justice’ is that it deals in fairness; and how do you decide if your standard of fairness is fair? Obviously you cannot without using circular logic, and thus your personal preferences come into play. As you will see shortly, we’re about to encounter this same problem again.)

We shall consider another solution: The community comes together and decides that the apple supply must always be divided equally between its members. Thus with ten apples each person gets one apple; with five apples each person gets a half, and so on. This is communism, of course, and represents another classic response to the problem of human society. Instead of the brutal law of strength, we get the perfect law of equality.

There is a certain elegance and undeniable appeal to communism. After all, what could be more fair than everyone getting the same thing? But upon closer inspection, it is easy to see how a communist system can also be considered unjust.

An obvious consideration is that not every person has the same needs. If nine people were healthy but one person had a medical problem, would it be fair for every person to get the same amount of medical attention? Obviously that would be absurd; and even a hardline communist would admit that perfect equality should be abandoned with regard to medical care, since different people clearly have different needs.

But if individuals differ in their needs for medical care, how else might their needs differ? Perhaps one person only feels good after nine hours of sleep, while the others feel fine after seven. Is it then fair to ask all of them to sleep eight? Perhaps not. We can give the needy sleeper a special dispensation to sleep nine hours. But then won’t this person be doing less work than the rest? Isn’t that unfair too?

A trickier problem is distinguishing a need from a desire. We distinguish between the two quite strictly in our language, but in reality the difference is not so clear. To pick a silly example, if nine of the community prefer apples but one abhors apples and loves pears, this pear-lover will be doomed to constant gustatory dissatisfaction if all decisions with regard to the food supply are taken collectively. This sounds quite trivial, but the point is that different things make different people happy; thus giving every person the same thing, while fair with regard to supply, is possibly quite unfair in terms of satisfaction.

An additional possibility of unfairness is differential contribution. In a communist community, some people may work harder, innovate more, and keep scrupulously to the rules; others may not carry their weight, or may otherwise take advantage of the system. In sum, different people will contribute different amounts to the community. Some of this difference will be due to ability, and some to personality. In any case, it is arguably quite unfair that, whatever you put into the collective, you take out the same amount.

The above criticisms are not meant to discredit communism; rather, they are only meant to show that, even in ostensibly the most perfectly fair system, unfairness still exists. (Unfortunately, unfairness of some sort always exists.) As an individualist, I am not attracted by communism because I think people have different needs, desires, and abilities, and that society should reflect these differences; but this preference of mine is obviously of emotional and not philosophical character. In any case, I do not know of any successful large-scale, long-term societies that had a truly communist character (most ‘communist’ countries being so in name only); so I feel justified in moving on from communism as a possibility.

Let us return, therefore, to our ten people with their ten apples. They tried a military oligarchy, and there was a rebellion; then they tried communism, but they grew resentful and dissatisfied. Then somebody has a bright idea: Whoever picks the apple owns it. The picker can choose to eat it, store it, or give it away; but under no circumstances can another person take it without permission; and if anyone is caught stealing, the thief will have to pay a three-apple penalty. Our society has just invented the right to private property. Thus we see the birth of rights as a tool for organizing society.

There is nothing natural or God-given about a right. Rights are privileges agreed upon by the community, and exist by consent of the community. Rights are ways of organizing what people can and cannot do, to ensure that each person has a clearly delineated sphere of free action that does not impinge upon those of others. In other words, rights restrict people’s freedom at the point at which their freedom interferes with the freedom of their neighbors. A right to kill would thus be logically absurd, since if you killed me you would have deprived me of my right to kill. In other words, exercising your right extinguished my ability to exercise mine. This clearly will not do. This is why murder, larceny, and rape cannot be made into rights: They cannot be made universal, since they are actions that by definition involve the violation of other people’s autonomy.

Limitations on people’s actions are only justified insofar as these limitations protect the freedom of others. Anything beyond this is unnecessary and therefore unjust. Thus a law against homicide is valid, but a law forbidding the eating of sesame seeds cannot be justified, since that action does not deprive anybody else of their liberty. The aim is to secure for each individual the biggest allowable range of mutually consistent actions. To accomplish this, it is more suitable to define rights negatively rather than positively. Rights, in other words, ought to be defined as freedoms from rather than freedoms to, in order to secure the maximum amount of available action. This is consistent with the principle that freedoms should only be limited at the points at which they interfere with the freedoms of others, since the rights are defined as freedom from this interference.

We return, now, to our apple community. Things are going along quite well in this new system. Then something happens: a man breaks his leg, and thus cannot pick apples any more. He begins to starve, while his neighbors continue happily along. So one night he proposes a new rule: When a member of the community is hurt, the healthy members must donate a certain fraction of their food to support the injured person during their convalescence. Since anybody can get injured, the man argues, this rule could potentially benefit any one of them. The healthy members disagree with the proposal, arguing that contributing their own food to another person is an infringement of their rights.

Which party is correct? More broadly, I want to ask how disputes like these should be resolved, when members of the community differ in their preferences of rights.

To answer this, I will introduce a Hierarchy of Rights.

Rights can, I believe, be ordered into a hierarchy from more to less fundamental. The measure of a right’s importance is the degree of autonomy that the right entails. Thus the most fundamental right is the right to life, since without life no other rights can be enjoyed. The right to be free from taxation is, by comparison, less important, since the loss of autonomy suffered through starvation is greater than that suffered through taxation.

In the above case, therefore, I think the just thing to do is to impose a tax to keep the injured man alive. Contrarily, if somebody wanted to tax the population to build a gigantic statue of himself, this should be rejected, since the freedom to use one’s own money is more fundamental a right than the freedom to build giant statues. Having money appreciably increases your autonomy, while having a giant statue does not, and autonomy is the measure of a right’s importance.

Let us apply this line of thinking to a contemporary problem: Gun Control. Constitutional problems aside, I think it is clear that gun regulation is justifiable within this system. If the freedom to buy an assault rifle is interfering with another person’s freedom from violent death, obviously the first must be curtailed in some way, since it is the less fundamental right. Regulating firearms is thus justifiable in the same way as instituting taxes for welfare programs.

This same line of thinking applies to many other areas of life. We regulate car speed because the right to drive as quickly as you like is superficial in comparison with the right to life; and we regulate the finance industry because the right to speculate on the markets is less important than the right to our own money. In short, some rights are more important than others, since they entail a greater degree of autonomy; and to protect these fundamental rights it is justifiable to limit other rights of less importance.

Failing to distinguish between the importance of different rights is a mistake that I have often encountered. Once, for example, I spoke with a libertarian who argued that everybody should be able to own nuclear weapons. He argued this because, being a libertarian, he thought everybody should have as much freedom as possible. But this fails to take into account that, without limits, your autonomy will at some point interfere with mine. Maximal freedom is simply impossible in a society. The idea of allowing citizens to buy nuclear weapons is an obvious example: If one person used a nuclear weapon, in a flash they would deprive millions of people of their lives, and thus all of their rights. Thus for the sake of protecting personal liberty—not to mention human life—it is necessary to prevent individuals from possessing weapons of this kind. In other words, libertarians should be in favor of limiting access to weapons, since weapons deprive people of their liberty.

Similarly absurd was the argument that gay people should not marry because it offended people’s religious sensibilities. The right to marry is quite fundamental, being of great social, personal, and financial importance; while the right not to be offended is not a right at all. (Anybody can potentially get offended at anything, since being offended is an emotional reaction; thus it would lead to absurdities to try to ban everything that offends.) While I am at it, polyamorous marriages should also be legal, I think, so long as all the parties consent. In general, I do not see why the love lives of consenting adults should be regulated at all.

The only justification for regulating or banning something is that it could potentially deprive somebody of their autonomy. The highly addictive and dangerous nature of drugs like cocaine and heroin gives a compelling case for regulation, since it is possible that these substances compromise people’s ability to choose freely. And if you influence me to try cocaine, and I get addicted, you will have compromised my autonomy just the same as if you’d stolen from me. (This is philosophically interesting territory: Should you have a right to choose to do something that might compromise your ability to choose? It’s a tricky problem, but I think there are good grounds for banning certain substances, both because they cause people to act in ways they regret and because, through their repercussions for people’s health, they create a strain on the public health system.)

Likewise, I think it is the right choice to regulate, but not to ban, cigarettes and alcohol, since the addictive nature of the first and the intoxicating effects of the second can compromise a person’s autonomy. In the case of marijuana, on the other hand, I think that it is absolutely unjust that it has been made illegal and that people have been jailed for its possession. It is not a powerful drug, and does not limit people’s autonomy to the degree that an absolute ban is justified. More generally, I think many of the laws surrounding drug possession in the United States are good examples of unjust laws—some of them banning substances with insufficient justification, others imposing unduly harsh penalties for crimes of a non-violent nature. (As I wrote elsewhere, punishments are only justifiable insofar as they act as effective deterrents.) But let me return to the main subject.

This hierarchy of rights conception is obviously quite abstract, and without deliberate care will not be put into practice. In the above case of the ten people, I doubt that the one injured party would be able to prevail upon the nine healthy ones to give up a fraction of their food. The poor fellow might starve.

This is a constant danger in any community: the tyranny of the powerful. The powerful might be a majority, a race, a sex, or a class. This is one reason why I think government is a necessary institution in any large community. I do not see, in other words, how justice could be enforced in an anarchic system; and for this reason I am generally hostile to anarchism. Without a government, what would prevent the strong from preying on the weak? An anarchist will easily retort that the government is far from an ethically perfect entity, and indeed the state has often become the very thing we need protection against. This is true, and to prevent this careful measures must be taken.

The strategy used in the United States is a model example: divide up the government’s powers between different branches, with checks and balances between them. A division of powers between different levels of government and regions of the country—in other words, Federalism—is also an excellent practical measure against state tyranny. All the powers of each branch of government must be made explicit in a constitution, thus making any breaches easy to detect. Periodic elections also help to hold the government accountable to the people, as well as to prevent any one individual from accumulating too much power. Sad to say, no government and no constitution will ever be immune to totalitarian impulses, which is why a free press and an active, vigilant citizenry are necessary for a healthy state. But this is an essay on justice, not a plan of government.

The most just societies are those that keep the hierarchy of rights most clearly in view. When a just government balances the right of one person to buy thirteen private jets against the rights of a beggar to have food and shelter, it always sides with the latter. In general, the more resources, power, and privilege you have, the more justifiable it becomes to curtail your rights to your own property with the aim of redistribution. This is the justification behind welfare, food stamps, and Medicare; this is the reason why we have a graduated income tax. If you have one billion dollars, it does not appreciably affect your autonomy to be deprived of a large percentage of your income. On the other hand, government welfare programs allow worse-off people to stay alive and to find work, which are fundamental to their autonomy.

The above sketch is my preferred solution to the problem of creating a standard of justice. A system of rights, ordered into a hierarchy, allows each citizen a definite sphere of autonomy. This is important, because I think every person should be an authority over themself. Nobody knows your needs and desires better than you do; thus you are the person who best knows how to secure your own livelihood and attain your own happiness. Allowing people to order their own lives is not only good for each person individually, but is also good for the society as a whole. When people can think for themselves and reap the benefits of their own innovations, it provides both the means and motivations for a thriving society.

The Musée D’Orsay and a Theory of Aesthetics

Since moving to Europe, I have spent more time looking at art than in probably all my life before. The continent is simply stuffed with art—in cathedrals, churches, palaces, castles, or sometimes just sitting in the street. And of course I cannot neglect to mention Europe’s many art museums, several of them among the best in the world.


Perhaps my favorite of these (the only competition is the Prado) is the Musée d’Orsay in Paris. It is spectacular. Rarely can you find so many masterpieces in one place, arranged with such exquisite taste. In the middle runs a corridor, filled with statues—of human forms, mostly. They run, reach, dance, strain, twist, lounge, smile, laugh, gasp, grimace. Near the entrance is a bust of Goethe, his hair swept back, his enormous forehead exposed; and nearby is a model of the Statue of Liberty, standing serene and majestic on her pedestal.


But for me, the real treat was the paintings. Every gallery was a feast for the eyes. There were naturalistic paintings, with vanishing-point perspective, careful shadowing, precise brushstrokes, scientifically accurate anatomy, symmetrical compositions. There were the impressionists, a blur of color and light, creamy clouds of paint, glimpses of everyday life. There was Cézanne, whose simplifications of shape and shade lend his painting of Mont Sainte-Victoire a calm, detached beauty. Then there were the pointillists, Seurat and Signac, who attempted to break the world into pieces and then to build it back up using only dabs of color, arranged with a mixture of science and art.


Greatest of all was van Gogh, whose violent, wavy lines, bright, simple colors, and oil paint smeared in thick daubs onto the canvas make his paintings slither and dance. It is simply amazing to me that something as static as a painting can be made so energetic. Van Gogh’s paintings don’t stand still under your gaze, but move, vibrate, even breathe. It is uncanny.

[Image: Pointillism]

His self-portrait is the most emotionally affecting painting I have ever seen. Wearing a blue suit, he sits in a neutral blue space; the air itself seems to be curling around him, as if in a torrent. The only colors that break the blur of blue are his flaming red beard and his piercing green eyes. He looks directly at the viewer, with an expression impossible to define. At first glance he appears anxious, perhaps shy; but the more you look, the more he appears calm and confident. You get absolutely lost in his eyes, falling into them, as you are absorbed into ever more complicated subtleties of emotion concealed therein. Suddenly you realize that the curling waves of air around him are not mere background, but represent his inner turmoil. Yet is it turmoil? Perhaps it is a serenity too complicated for us to understand?


I looked and looked, and soon the experience became overwhelming. I felt as if he was looking right through me, while I pathetically tried to understand the depths of his mind. But the more I probed, the more lost I felt, the more I felt myself being subsumed into his world. The experience was so overpowering that my knees began to shake.

I left and sat down on a bench nearby. I was exhausted. By this time, I’d been walking for six hours; my feet were blistering, my legs were sore. Yet even though I was ragged, I felt magnificently alive. My every sense was on edge. My skin tingled, my ears twitched, my eyes took in every subtlety of line, color, and texture. I felt acutely sensitive to all my surroundings. And although my body was worn out, a kind of spiritual craving drove me onward. I had to see more.

I pushed myself to my feet, and then looked around. The museum was full of every kind of person. A man went by, pushing a stroller with a sleeping child, as a guide spoke to him in English; his wife trailed behind, snapping photo after photo with an old camera. To my right, two German girls were on their phones; to my left, an elderly French couple was having a rest on the bench. Everyone spoke in a hushed, respectful tone. They crowded around paintings, elbowing one another for space; they bent over to get a closer look, stroking their chins. And the thought dawned on me that museums are really bizarre places.

Often I wonder what an alien visitor would think if she (or it?) observed us looking at art in a museum. I think the alien would find it totally incomprehensible. We pay good money in order to gain entrance to a big building, so we can spend time crowding around brightly colored squares that are not obviously more interesting than any other object. Indeed, I suspect the alien would find almost anything on earth—our plant and animal life, our minerals, our technology—more interesting than a painting. Perhaps the alien would conclude that this was a kind of religious ritual; considering how much respect we give to these objects, it might suspect that they represent gods. Or perhaps the alien would conclude that it is simply a form of mass psychosis.

I think the alien would be confused because human art caters to a human need—specifically, an adult human need. This is the need to cure ennui.

§

Boredom hangs over modern life like a specter, so pernicious because it cannot be grasped or seen. I often think of something Dostoyevsky said: “Man grows used to everything, the scoundrel!” This same sentiment was expressed, years later, by the French anthropologist Levi-Strauss, in his book Tristes Tropiques. He used to enjoy mountain scenes, because “instead of submitting passively to my gaze” the mountains “invited me into a conversation, as it were, in which we both had to give our best.” But as he got older, his pleasure in mountain scenery left him.

And yet I have to admit that, although I do not feel that I myself have changed, my love for the mountains is draining away from me like a wave running backward down the sand. My thoughts are unchanged, but the mountains have taken leave of me. Their unchanging joys mean less and less to me, so long and so intently have I sought them out. Surprise itself has become familiar to me as I follow my oft-trodden routes. When I climb, it is not among bracken and rock-face, but among the phantoms of my memories.

These two literary snippets have stuck with me because they encapsulate the same thing: the ceaseless struggle against the deadening weight of routine. Nothing is new twice. Walk through a park you found charming at first, and the second time around it will be simply nice, and the third time just normal.

The problem is human adaptability. Unlike most animals, we humans are generalists, able to adapt our behavior to many different environments. Instead of possessing instincts, we form habits. By habits I do not only refer to things like biting your nails or eating pancakes for breakfast; rather, I mean all of the routine actions performed by every person in a society. Culture itself can, at least in part, be thought of as a collection of shared habits. These routines and customs are what allow us to live in harmony with our environments and one another. Our habits form a second nature, a learned instinct, that allows us to focus our attention on more pressing matters. If, for whatever reason, we were incapable of forming habits, we would be in a sorry state indeed, as William James pointed out in his book on psychology:

There is no more miserable human being than one in whom nothing is habitual but indecision, and for whom the lighting of every cigar, the drinking of every cup, the time of rising and going to bed every day, and the beginning of every bit of work, are subjects of express volitional deliberation. Full half the time of such a man goes to the deciding, or regretting, of matters which ought to be so ingrained in him as practically not to exist for his consciousness at all.

But there is a danger in this. Making the same commute, passing the same streets and alleys, spending time with the same friends, watching the same shows, doing the same work, living in the same house, day after day after day, can ingrain a routine in us so deeply that we become listless, even depressed. A habit is supposed to free our mind for more interesting matters; but we can also form habits of seeing, feeling, tasting, even of thinking, that are stultifying rather than freeing. The creeping power of routine, pervading our lives, can be difficult to detect, precisely because its essence is familiarity.

One of the most pernicious effects of routine is to dissociate us from our senses. Let me give a concrete example. A walk through New York City will inevitably present you with a chaos of sensory data. You can overhear conversations, many of them fantastically strange; you can see an entire zoo of people, from every corner of the globe, dressed in every fashion; you can look at the ways that the sunlight moves across the skyscrapers, the play of light and shadow; you can hear dog barks, car horns, construction, alarms, sirens, kids crying, adults arguing; you can smell bread baking, chicken frying, hot garbage, stale urine, and other scents too that are more safely left uninvestigated. This list only scratches the surface.

And yet, after working in NYC for a few months, making the same commute every day, I was able to block it out completely. I walked through the city without noticing or savoring anything. And any stray sound, sight, or smell that did float into my awareness was soon enough banished. I had stopped really looking at the city, and was only glancing at it. I was paying attention to my senses only insofar as they provided me with useful information: the location of a pedestrian, an oncoming car, an unsanitary area. My lunch went unappreciated; and my coffee was not enjoyed. The changing seasons went unremarked; the fashion choices of my fellow commuters went unnoticed. It isn’t that I stopped seeing, feeling, hearing, tasting, but that my attitude to this information had changed.

This exemplifies what I mean by ennui. It is not boredom of the temporary sort, such as you experience when waiting in line at the DMV; it is boredom as a spiritual malady. It is when we are not bored by a particular situation, but by any situation. It is caused by a certain attitude toward our senses. When afflicted by ennui, we stop treating our sensations as things in themselves, worthy of attention and appreciation, and treat them merely as signs and symbols of other things.

To a certain extent, we all do this, often for good reason. When you are reading this, for example, you are probably not paying attention to the details of the font, but are simply glancing at the words to understand their meaning. Theoretically, I could use any font or formatting, and it wouldn’t really affect my message, since you are treating the words as signs and not as things in themselves.

This is our normal, day to day attitude towards language, but it can also blind us to what is right in front of us. For example, an English teacher I knew once expressed surprise when I pointed out that ‘deodorant’ consists of the word ‘odor’ with the prefix ‘de-’. She had never even thought of it. And just recently I finally had the realization that the word ‘freelance’ must come from mercenary soldiers (a free lance). These examples are trivial enough, but I think they well illustrate how estranged we can be from our day to day realities, and how treating things as symbols prevents us from giving them their proper scrutiny.

I think this attitude of ennui can extend even to our senses. We see the subtle shades of green and red on an apple’s surface, and only think “I’m seeing an apple.” We feel the waxy skin, and only think “I’m touching an apple.” We take a bite, munching on the crunchy fruit, tasting the tart juices, and only think “I’m eating an apple.” In short, the whole quality of the experience is ignored or at least underappreciated. The apple has become part of our routine and has thus been moved to the background of our consciousness.

Now, imagine treating everything this way; imagine if all the sights, sounds, tastes, textures, and smells were treated as routine. This is an adequate description of my mentality when I was working in New York, and perhaps of many people all over the world. The final effect is a feeling of emptiness and dissatisfaction. Nothing fulfills or satisfies because nothing is really being experienced.

This is where art comes in. Good art has the power to, quite literally, bring us back to our senses. It encourages us not only to glance, but to see; not only to hear, but to listen. It reconnects us with what is right in front of us, but is so often ignored. To quote the art critic Robert Hughes, the purpose of art is “to make the world whole and comprehensible, to restore it to us in all its glory and occasional nastiness, not through argument but through feeling, and then to close the gap between you and everything that is not you.” (As I’ll explain later, I don’t quite agree that art doesn’t use argument.)

Last summer, while I was still working at my job in NYC, I experienced the power of art during a visit to the Metropolitan. By then, I had already visited the Met dozens of times in my life. My dad used to take me there as a kid, to see the medieval arms and armor; and ever since I have visited at least once a year. The samurai swords, the Egyptian sarcophagi, the Greek statues—it has tantalized my imagination for decades.

In my most recent visits, however, the museum had lost much of its power. It had become routine for me. I had seen everything so many times that, like Levi-Strauss, I was visiting my memories rather than the museum itself. But this changed during my last visit.

It was the summer right before I came to Spain. I had just completed my visa application and was about to leave my job. This would be my last visit to the Met for at least a year, possibly longer. In short, I felt emotional. I was saying goodbye to something intimately familiar in order to embrace the unknown. This heightened emotional state made me experience the museum in an entirely new way.

[Image: Statue of Marcus Aurelius from the Metropolitan Museum of Art]

Somehow, the patina of familiarity had been peeled away, leaving every artwork fresh and exciting. No longer did the exhibitions on the ancient world represent merely archaeological artifacts; the objects were now powerful works of art. I began noticing things I hadn’t before: I observed the grains in the stone used in Egyptian statues. I tried to imagine the amount of skill, patience, and time it would have taken to sculpt the folds on a Roman toga. I mentally compared the styles used in Greek and Hindu sculptures of gods, wondering what it said about their cultures. In short, I stopped treating the artworks as icons—as mere symbols of a lost age—and began seeing them as genuine works of art.

This experience was so intense that for several days I felt rejuvenated. I stopped feeling so deeply dissociated from my world at work, and began to take pleasure again in little things. While waiting for the elevator, for example, I looked at a nearby wall; and I realized, to my astonishment, that it wasn’t merely a flat, plain surface, as I had thought, but was covered in little bumps and shapes. It was stucco. I grew entranced by the shifting patterns of forms on the surface. I leaned closer, and began to see tiny cracks and little places where the paint had chipped off. The slight variations on the surface, a stain here, a splotch there, the way the shapes seemed to melt into one another, made it seem as though I were looking at a painting by Jackson Pollock or the surface of the moon.

I had glanced at this wall a hundred times before, but it took a visit to an art museum to let me really see it. Routine had severed me from the world, and art had brought me back to it.

§

Reality is always experienced through a medium—the medium of senses, concepts, language, and thought. Sensory information is detected, broken down, analyzed, and then reconfigured in the brain. While a microphone might simply detect tones, rhythms, and volume, we hear cars, birds, and speech; and while a camera might detect shapes, colors, and movement, we see houses and street signs. The difference between these machines and us is not the information we take in, but what we do with it.

In order to deal efficiently with the large amount of information we encounter every day, we develop habits of perceiving and thinking. These habits are partly expectations of the kinds of things we will meet (people, cars, language), as well as the ways we have learned to analyze and respond to these things. These habits thus lie at the crossroads between the external world of our senses and the internal world of our experience, forming another medium through which we experience (or don’t experience) reality.

Good art forces us to break these habits, at least temporarily. It does so by breaking down reality and then reconstructing it with a different principle—or perhaps I should say a different taste—than the one we habitually use.

The material of art—what artists deconstruct and re-imagine—can be taken from either the natural or the cultural world. By ‘natural world’ I mean the world as we experience it through our senses; and by ‘cultural world’ I mean the world of ideas, customs, values, religion, language, tradition. No art is wholly emancipated from tradition, just as no tradition is wholly unmoored from the reality of our senses. But very often one is greatly emphasized at the expense of the other.

A good example of an artform concerned with the natural world is landscape painting. A landscape artist is quite clearly breaking down what she sees into shapes and colors, and putting it together on her canvas, making whatever tasteful alteration she sees fit. Of course, no landscape painter lives in isolation. Inevitably our painter is familiar with a tradition of landscape paintings, and is thus simultaneously engaged in a dialogue with contemporary and former artists. She is, therefore, simultaneously breaking down the landscape and her tradition of landscape painting, deciding what to change, discard, or keep. The final product emerges as an artifact of an exchange between the artist, the landscape, and the tradition.

[Image: From the Thyssen-Bornemisza in Madrid]

The fact remains, however, that the final product can be effectively judged by its fidelity to its subject—the landscape itself. Thus I would say that landscape paintings are primarily oriented towards the natural world. By contrast, many religious paintings are much more oriented towards a tradition. It is clear, even from a glance, that the artists of the Middle Ages were not concerned with the accurate portrayal of individual humans, but with evoking religious figures through idealization. Such paintings thus cannot be evaluated by their fidelity to sensory reality, but by their fidelity to a religious aesthetic.

[Image: From the Museo de Bellas Artes in Valencia]

Parenthetically, it is worth noting that artworks oriented towards the natural world tend to be individualistic, while artworks oriented towards the cultural world tend to be communal. The reason is clear: art oriented towards the natural world reconnects us with our senses, and our senses are necessarily personal. By contrast, culture is necessarily impersonal and shared. The rise of perspective, realistic anatomy, individualized portraits, and landscape painting at the time of the Italian Renaissance can, I think, persuasively be interpreted as a break from the communalism of the medieval period and an embrace of individualism.

And where does literature fit into all this? To answer that question, let us stand in front of the portrait of Marcel Proust by Jacques-Émile Blanche, which hangs in the Musée d’Orsay. Proust stands before a black background, dressed in an equally black tuxedo. Amid this darkness, his pale face seems to shine like the moon. He has round and soft features, and appears somewhat delicate and frail, perhaps sickly. He looks rather like a boy impersonating a man, with a small, thin mustache feebly clinging to his upper lip. In truth, he isn’t much to look at. His face is neither handsome nor compelling. But once you have read some of his fiction, you will realize that beneath this unremarkable exterior is an extraordinary mind.


Proust’s great novel, In Search of Lost Time, exemplifies everything literature is supposed to be. First, it reconnects us with our own language. Proust’s long, twisting, and exquisite sentences require patience to unravel. They can often be frustrating, since by the time you’ve reached the end you have entirely forgotten the beginning. More than that, they are just strange. Nobody but Proust ever wrote like Proust: 

Even the simple act which we describe as “seeing some one we know” is, to some extent, an intellectual process. We pack the physical outline of the creature we see with all the ideas we have already formed about him, and in the complete picture of him which we compose in our minds those ideas have certainly the principal place. In the end they come to fill out so completely the curve of his cheeks, to follow so exactly the line of his nose, they blend so harmoniously in the sound of his voice that these seem to be no more than a transparent envelope, so that each time we see the face or hear the voice it is our own ideas of him which we recognize and to which we listen.

Proust’s strangeness extends beyond the realm of language, of course. He was an astonishingly sensitive and delicate person. He spent hours analyzing his emotions, noting all the tiny, fleeting feelings we normally ignore. He investigated the role memory plays in perception, and how the flow of time is experienced. Added to that, Proust spent much time studying the manners and customs of his milieu. The interplay of personalities at soirees, the complex social cues, the jockeying for power among members of high society—all this formed the material for his great novel. In the course of this, he described for us many memorable characters. One of my favorites is the Baron de Charlus, a whimsical dandy prone to fits of rage. This character was, apparently, modeled on Robert de Montesquiou, whose portrait fittingly hangs next to Proust’s own.


So literature not only reconnects us with language, but with human life. For example, many have said that people wouldn’t fall in love if it weren’t for love stories. Not that writers invented love, but that without writers love might be simply ignored. Good love stories get us to pay special attention to romantic feelings, to attach to them a heightened importance, and to savor the exhilaration of a love affair. Love is a good example, because it is both a psychological and a social phenomenon; it is the experience of a feeling, but also the exchange of personalities, with all the difficulties and subtleties involved therein.

Of course literature does not confine itself to romantic love, but deals with every human feeling. Good novels bring us back to the basic stuff of human relationships, to envy, resentment, tension, desire, and all the other emotions too complex to name. My favorite novels explore the fraught relationship between these feelings and our social environments, how the inner world pushes against the outer and vice versa.

But what about music? In visual art and fiction, wherein real things are depicted, the relationship between a work of art and the real world seems relatively clear. In music, however, the artwork can seem totally disconnected from the reality of our senses. Sonatas and ballads don’t rearrange or represent our soundscape in any obvious way. A symphony is not normally made in a dialogue with the sounds of cars sputtering and people shouting. You cannot play a portrait on the saxophone.

Musicians (at least western musicians) take their material from the cultural rather than the natural world, from the world of tradition rather than the world of our senses. This is because sound is just too abstract. With only a pencil and some paper, most people could make a rough sketch of an everyday object; but without rigorous training—and even then, maybe not—most people could not transcribe an everyday sound, like a bird’s chirping.

To deal with this problem, rigorous and formal ways of creating and classifying sounds were invented. A tradition develops with its own laws and rules; and it is these laws and rules that the composer manipulates. Just as you’ve seen many trees and human faces, and thus can appreciate how painters re-imagine their appearances, so have you heard hours and hours of music in your life, most of it following the same or similar conventions. Thus you can tell (unconsciously, perhaps) when a tune does something unusual. Not many people, for example, can define a plagal cadence (a cadence from the IV to the I chord), but almost everyone responds to it in Paul McCartney’s “Yesterday.”

If music is primarily oriented towards tradition and not nature, does listening to music help reconnect us with our senses? I think so. Although the main variables under a composer’s control are cultural products like melody, harmony, and rhythm—qualities that are not so apparent in non-musical noise—good musicians must also pay attention to pitch and intensity, qualities that can be found in any sound. Speaking from my own experience, after I became very interested in music in high school, I did become more acutely sensitive to the everyday sounds around me, since listening intensely to music trained me to focus my attention on my ears.

Nevertheless, it may well be true that music helps more to connect us with the social than the natural world. After all, music is an inherently social art form. We seldom experience music in solitary contemplation, like we do with paintings or books, but more often with friends and family, or with perfect strangers at a concert. Groups of musicians are much more common than solo acts, and music is still quite oriented around live performances. Added to that, music is an integral part of many social rituals—political, religious, or otherwise. Whether we are graduating from high school, winning an Oscar, or swearing in a president, music will certainly be heard. And I don’t think it’s a coincidence that energetic music is necessary for any good party; as much as alcohol, music can lower inhibitions by creating a sense of shared community. Music thus plays a different role than visual art, connecting us to our social environment rather than to the often neglected sights and sounds of everyday life.

[Image: Monument to Beethoven in the Tiergarten in Berlin]

While I’m at it, I want to make the case for philosophy as a kind of art. Philosophy is art when it uses argument to undermine our everyday assumptions, forcing us, often against our will, to confront the thin foundations of our mental worlds. Good philosophy forces us to pay attention to arguments, logic, inferences, deductions and inductions, premises, conclusions, explanations, assumptions. Philosophy is art when it makes us stop taking ideas for granted. Instead of a reconnection to our senses, we get a reconnection to our concepts. I am, of course, not suggesting that this is the only or even the primary function of philosophy. But this artistic function of philosophy is nowadays, especially in analytic circles, often overlooked.


The above descriptions are offered only as illustrations of my more general point: art occupies the same space as our habits, the gap between the external and the internal world. Painters, composers, and writers begin by breaking down something familiar from our daily reality. The material can be shapes, colors, ceramic vases, window panes, the play of shadow across a crumpled robe; it can be melodies, harmonies, timbre, volume, chord progressions, stylistic tropes; it can be adjectives, verbs, nouns, situations, gestures, personality traits.

Whatever the starting material, it is the artist’s job to recombine it into something different, something that thwarts our habits. Van Gogh’s thick daubs of paint thwart our expectation of neat brushstrokes; McCartney’s plagal cadence thwarts our expectation of a perfect cadence; and Proust’s long, gnarly sentences and philosophic ideas thwart our expectations of how a novelist will write. And once we stop seeing, listening, feeling, sensing, thinking, expecting, reacting, and behaving out of habit, and once more turn our full attention to the world, naked of any preconceptions, we are in the right mood to appreciate art.

§

But it is not enough to be simply challenging. If this were true, art would be anything that was simply strange, confusing, or difficult. Good art can, of course, be all of those things, but need not be. Many artists nowadays, however, seem to disagree about this. I have listened to several works by contemporary composers which simply made no sense to my ears, and have seen many works of modern art that were completely uninteresting to look at. I even have trouble appreciating many parts of Joyce’s Ulysses, but doubtless this will mark me out as a philistine to many.

To a large extent, to be sure, this is just a matter of taste. But this does not mean that any and all statements about aesthetics are valueless. Deciding who to be friends with is also a matter of taste; but whether somebody is kind or cruel, polite or rude, pleasant or unpleasant, can often be agreed on by most people. Certain qualities in people are almost universally admired: courage, intelligence, virtue, commitment, originality, sincerity. Aristotle made a list of these qualities about 2,500 years ago, and they remain largely unchanged today. Similarly, I think that the qualities of good art can be roughly described. These qualities will not be formal. I think it is folly to say that all good paintings must look like such and such, and all well written sentences must be written like so and so. What formal qualities are embraced or shunned is largely a matter of fashion. But the measure of success or failure is, I think, far more general.

The experience of good art can be compared with a deep conversation with a friend—a friend perhaps older and wiser, but a friend nonetheless. You empathize with the friend, and they empathize with you. You see one another’s points of view, and disagree without malice when your views don’t coincide. You teach each other and learn from each other. You hold each other to a high standard. Just as a friend is not silent when you do something wrong, an artist is not silent when he sees something wrong with his society. You are honest, but not blunt. You are, in short, equal partners.

Pretentious art, art that merely wants to challenge, confuse, or frustrate you, is quite a different story. It can be most accurately compared to the relationship between an arrogant schoolmaster and a pupil. The artist is talking down to you from a position of heightened knowledge. The implication is that your perspective, your assumptions, your way of looking at the world are flawed and wrong, and the artist must help you to get out of your lowly state. Multiple perspectives are discouraged; only the artist’s is valid. Or perhaps some art can be better compared to a kind of know-it-all at a party, who flaunts his knowledge rather than shares it, who talks for himself rather than for others.

And then we come to simple entertainment. Entertainment, such as blockbusters, most pop music, and in general the majority of what’s on the internet, can be compared to the class clown. He does nothing but tell jokes and play pranks; he is quick witted and clever; he can get the entire room crying with laughter. Now, there isn’t anything wrong with this clown—indeed, he can be extremely pleasant—but he doesn’t add anything permanent to your life. You laugh in his presence, and forget him in his absence. You cannot learn from him, since he is not concerned with knowledge but only with hilarity. You cannot empathize with him, since you don’t know much about him and he knows nothing about you. By the way, I don’t mean to imply that all comedy is lowly or that art can’t be funny; to the contrary, one of my favorite living artists is Louis C.K. I only mean that pure entertainment is quite different from genuine art.

Perhaps the most emblematic form of pure entertainment is advertising. However well made an advertisement may be, it can never be art; for its goal is not to reconnect us with the world, but to seduce us with fantasy. Advertisements tell us we are incomplete. Instead of showing us how we can be happy now, they tell us what we still need. When you see an ad in a magazine, for example, you are not meant to scan it carefully, paying attention to its purely visual qualities. Rather, you are forced to view it as an image. By ‘image’ I mean a picture that serves to represent something else. Images are not meant to be looked at, but glanced at; images are not meant to be analyzed, but understood. Ads use images because they are not trying to bring you back to your senses, but to lure you into a fantasy.

Don’t misunderstand me: There is nothing inherently wrong with fantasy; indeed, I think fantasy is almost indispensable to a healthy life. However, the fantasies of advertisements are somewhat nefarious, because ads are never pure escapism. Rather, the ad forces you to negatively compare your actual life with the fantasy, conclude that you are lacking something, and then of course seek to remedy the situation by buying their product.

Most entertainment is, however, quite innocent, or so at least it seems to me. For example, I treat almost all blockbusters as pure entertainment. I will gladly go see the new Marvel movie, not in order to have an artistic experience, but because it’s fun. The movie provides two hours of relief from the normal laws of physics and probability, from the dreary regularities of reality as I know it. Superhero movies are escapism at its most innocent. The movies make no pretenses of being realistic, and thus you can hardly feel the envy caused by advertisements. You are free to participate vicariously and then to come back to reality, refreshed from the diversion, but otherwise unchanged.

The prime indication of entertainment is that it is meant to be effortless. The viewer is not there to be challenged, but to be diverted. Thus most bestselling novels are written with short words, simple sentences, and stereotypical plotlines stuffed full of clichés—because this is easy to understand. The books aren’t meant to be analyzed, but to be read quickly and then forgotten. Likewise, popular music uses common chord progressions and trite lyrics to make hits—music to dance to, to play in the background, to sing along to, but not to think about. This is entertainment: it does not reconnect us with our senses, our language, or our ideas, but draws us into fantasy worlds, worlds with spies, pirates, vampires, worlds where everyone is attractive and cool, where you can be anything you want, for at least a few hours.

Some thinkers, most notably Theodor Adorno, have considered this quality of popular culture to be nefarious. They abhor the way that people lull their intellects to sleep, tranquilized with popular garbage that deactivates their minds rather than challenges them. And this point cannot be wholly dismissed. But I tend to see escapism in a more positive light; people are tired, people are stressed, people are bored—they need some release. As long as fantasy does not get out of hand, becoming a goal in itself instead of only a diversion, I see no problem with it.

This is the difference between art and entertainment. And what about craft? Craft is a dedication to the techniques of art, rather than its goals. Of course, there is hardly such a thing as a pure craft or a pure art; no artist completely lacks a technique, and no craftsman totally lacks aesthetic originality. But there are certainly cases of artists whose technique stands at a bare minimum (think punk rock), as well as craftsmen who are almost exclusively concerned with the perfection of technique.

Here I must clarify that, by technique, I do not mean simply manual things like brush strokes or breath control. I mean, more generally, the mastery of a convention. Artistic conventions consist of fossilized aesthetics. All living aesthetics represent the individual visions of artists—original, fresh, and personal. All artistic conventions are the visions of successful artists, usually dead, which have ceased to be fresh and have instead become charmingly familiar. Put another way, conventional aesthetics are the exceptions that have been made the rule. In the Renaissance, the use of perspective, the depiction of Greco-Roman figures (as opposed to Christian ones), and the use of realistic anatomy were wonderfully new. But by the mid-nineteenth century, these conventions had grown stale and tired.

This can be exemplified if we go and examine the paintings of William-Adolphe Bouguereau in the Musée d’Orsay. Even from a glance, we can tell that he was a masterful painter. Every detail is perfect. The arrangement of the figures, the depiction of light and shadow, the musculature, the perspective—everything has been executed with exquisite mastery. My favorite painting of his is Dante and Virgil in Hell, a dramatic rendering of a scene from Dante’s Inferno. Dante and his guide stand to one side, looking on in horror as one naked man attacks another, biting him in the throat. In the distance, a flying demon smiles, while a mound of tormented bodies writhes behind. The sky is a fiery red and the landscape is bleak.


It’s a wonderful painting, I think, but it seems to exist more as a demonstration than as art. For the main thing that makes a painting art, and the main thing this painting lacks, is an original vision. The content has been adopted straightforwardly from Dante. The technique, although perfectly executed, shows no innovations of Bouguereau’s own. All the tools he used had been used before; he merely learned them. Thus the painting, however impressive, ultimately seems like a technical exercise.

Bouguereau represents the culmination of what is called the ‘academic style’—a style that many, even in Bouguereau’s day, found to be totally exhausted. It was against this kind of technical mastery and artistic sterility that the impressionists were rebelling. They sensed, correctly I think, that they could no longer create genuine art through refinements of the convention, but had to break with it more radically.

And how did the impressionists respond? By going back to their senses. They realized that perspective, although created to add realism to paintings, now served to separate paintings from everyday life. For, to quote Robert Hughes again:

Essentially: perspective is a form of abstraction. It simplifies the relationship between eye, brain and object. It is an ideal view, imagined as being seen by a one-eyed, motionless person who is clearly detached from what he sees. It makes a God of the spectator, who becomes the person on whom the whole world converges, the Unmoved Onlooker.

The impressionists tried something new. They tried to represent, not what it feels like to be an Unmoved Onlooker, but what it feels like to be part of a scene: your eyes adjusting to the light, blinking from the wind, your head turning this way and that. They stopped portraying imaginary, mythological figures and began painting pedestrians, city streets, train stations, country picnics. In the process, they developed not only a new way of painting, but a new way of seeing.

[Image: from the Thyssen-Bornemisza in Madrid]

§

I fear I have said more about what art isn’t than what it is. That’s because it is admittedly much easier to define art negatively than positively. Just as mystics convey the incomprehensibility of God by listing all the things He is not, maybe we can do the same with art?

Here is my list so far. Art is not entertainment, meant to distract us with fantasy. Art is not craft, meant to display technique and obey rules. Art is not simply an intellectual challenge, meant to shock and frustrate your habitual ways of being. I should say art is not necessarily any of these things, though it can be, and often is, all of them. Indeed, I would contend that the greatest art entertains, challenges, and displays technical mastery, yet cannot be reduced to any or all of these things.

Here I wish to take an idea from the literary critic Harold Bloom and divide artworks into period pieces and great works. Period pieces are works that are highly effective in their day but quickly become dated. They are too closely targeted at one specific cultural atmosphere to last. In other words, they may be so preoccupied with the habits prevalent in one place and time that they become irrelevant when time passes. To pick just one example, Sinclair Lewis’s Babbitt, which I sincerely loved, may be too engrossed in the foibles of twentieth-century American culture to be relevant to generations to come. Its power comes from its total evisceration of American ways, and luckily for Lewis that culture has changed surprisingly little in its essentials since his day. The book’s continuing appeal therefore depends largely on how much the culture does or doesn’t change.

Thus period pieces largely concern themselves with getting us to question certain habits or assumptions. The greatest works of art, by contrast, are great precisely because they reconnect us with the mystery of the world. They don’t just get us to question certain assumptions, but all assumptions. They bring us face to face with the incomprehensibility of life, the great and frightening chasm that we try to bridge over with habit and convention. No matter how many times we watch Hamlet, we can never totally understand Hamlet’s motives, the mysterious inner workings of his mind. No matter how long we stare into van Gogh’s eyes, we can never penetrate the machinations of that elusive mind. No matter how many times we listen to Bach’s Art of Fugue, we can never wrap our minds around the dancing, weaving melodies, the baffling mixture of mathematical elegance and artistic sensitivity.

Why are these works so continually fresh? Why do they never seem to grow old? I cannot say. It is as if they are infinitely subtle, allowing you to discover new shades of meaning every time they are experienced anew. You can fall into them, just as I felt myself falling into van Gogh’s eyes as he stared at me across space and time.

When I listen to Bach, read Shakespeare, or look into van Gogh’s eyes, I feel as I do when I stare into the starry sky: absolutely small in the presence of something immense and immensely beautiful. Listening to Bach is like listening to the universe itself, and reading Shakespeare is like reading the script of the human soul. These works do not only reconnect me to my senses, helping me rid myself of boredom. They do not only remind me that the world is an interesting place. Rather, they remind me that I myself am small and insignificant, and should be thankful for every second of life, for it is a privilege to be alive somewhere so beautiful and mysterious.

On Morality

On Morality

What does it mean to do the right thing? What does it mean to be good or evil?

These questions have perplexed people since people began to be perplexed about things. They are the central questions of one of the longest lines of intellectual inquiry in history: ethics. Great thinkers have tackled them; whole religions have been built around them. But confusion still remains.

Well, perhaps I should be humble before attempting to solve such a momentous question, seeing who has come before me. And indeed, I don’t claim any originality or finality in these answers. I’m sure they have been thought of before, and articulated more clearly and convincingly by others (though I don’t know by whom). Nevertheless, if only for my own sake, I think it’s worthwhile to set down how I tend to think about morality—what it is, what it’s for, and how it works.

I am much less concerned in this essay with asserting how I think morality should work than with describing how it does work—although I think understanding the second is essential to understanding the first. But to begin, I want to examine some of the assumptions that have characterized earlier conceptualizations, particularly with regard to freedom.

Most thinkers begin with a free individual contemplating multiple options. Kantians think that the individual should abide by the categorical imperative and act with consistency; Utilitarians think that the individual should attempt to promote happiness with her actions. What these systems disagree about is the appropriate criterion. But they do both assume that morality is concerned with free individuals and the choices they make. They disagree about the nature of Goodness, but agree that Goodness is a property of people’s actions, making the individual in question worthy of blame or praise, reward or punishment.

The Kantian and Utilitarian perspectives both have a lot to recommend them. But they do tend to produce an interesting tension: the first focuses exclusively on intentions while the second focuses exclusively on consequences. Yet surely both intentions and consequences matter. Most people, I suspect, wouldn’t call somebody moral if they were always intending to do the right thing and yet always failing. Neither would we call somebody moral if they always did the right thing accidentally. Individually, neither of these systems captures our intuitive feeling that both intentions and consequences are important; and yet I don’t see how they can be combined, because the systems have incompatible intellectual justifications.

But there’s another feature of both Kantian and Utilitarian ethics that I do not like, and it is this: free will. Both systems presuppose individuals with free will, who are culpable for their actions because they are responsible for them. Thus it is morally justifiable to punish criminals because they have willingly chosen something wrong. They “deserve” the punishment, since they are free and therefore responsible for their actions.

I’d like to focus on this issue of deserving punishment, because for me it is the key to understanding morality. By this I mean the notion that doing ill to a criminal helps to restore moral order to the universe, so to speak. But before I discuss punishment I must take a detour into free will, since free will, as traditionally conceived, provides the intellectual foundation for this worldview.

What is free will? In previous ages, humans were conceived of as a composite of body and soul. The soul sent directions to the body through the “will.” The body was material and earthly, while the soul was spiritual and holy. Impulses from the body—for example, anger, lust, gluttony—were bad, in part because they destroyed your freedom. To give in to lust, for example, was to yield to your animal nature; and since animals aren’t free, neither was the lustful individual. By contrast, impulses from the soul (or mind) were free because they were unconstrained by the animal instincts that compromise your ability to choose.

Thus free will, as it was originally conceived, was the ability to make choices unconstrained by one’s animal nature and by the material world. The soul was something apart and distinct from one’s body; the mind was its own place, and could make decisions independently of one’s impulses or one’s surroundings. It was even debated whether God Himself could predict the behavior of free individuals. Some people held that even God couldn’t, while others maintained that God did know what people would or wouldn’t do, but God’s knowledge wasn’t the cause of their doing it. (And of course, some people believed in predestination.)

It is important to note that, in this view, free will is an uncaused cause. That is, when somebody makes a decision, this decision is not caused by anything in the material world as we know it. The choice comes straight from the soul, bursting into our world of matter and electricity. The decision would therefore be impossible to predict by any scientific means. No amount of brain imaging or neurological study could explain why a person made a certain decision. Nor could the decision be explained by cultural or social factors, since individuals, not groups, were responsible for it. All decisions were therefore caused by individuals, and that was the essence of freedom.

It strikes me that this is still how we tend to think about free will, more or less. And yet, this view is based on an outdated understanding of human behavior. We now know that human behavior can be explained by a combination of biological and cultural influences. Our major academic debate—nature vs. nurture—presupposes that people don’t have free will. Behavior is the result of the way your genes are influenced by your environment. There is no evidence for the existence of the soul, and there is no evidence that the mind cannot be explained through understanding the brain.

Furthermore, even without the advancements of the biological and social sciences, the old way of viewing things was not philosophically viable, since it left unexplained how the soul affects the body and vice versa. If the soul and the body were metaphysically distinct, how could the immaterial soul cause the material body to move? And how could a pinch in your leg cause a pain in your mind? What’s more, if there really was an immaterial soul that was causing your body to move, and if these bodily movements truly didn’t have any physical cause, then it’s obvious that your mind would be breaking the laws of physics. How else could the mind produce changes in matter that didn’t have any physical cause?

I think this old way of viewing the body and the soul must be abandoned. Humans do not have free will as originally conceived. Humans do not perform actions that cannot be scientifically predicted or explained. Human behavior, just like cat behavior, is not above scientific explanation. The human mind cannot generate uncaused causes, and does not break the laws of physics. We are intelligent apes, not entrapped gods.

Now you must ask me: But if human behavior can be explained in the same way that squirrel behavior can, how do we have ethics at all? We don’t think squirrels are capable of ethical or unethical behavior because they don’t have minds. We can’t hold a squirrel to any ethical standard, and we therefore can’t justifiably praise or censure a squirrel’s actions. If humans aren’t categorically different from squirrels, then don’t we have to give up on ethics altogether?

This conclusion is not justified. Even though I think it is wrong to say that certain people “deserve” punishment (in the Biblical sense), I do think that certain kinds of consequences can be justified as deterrents. The difference between humans and squirrels is not that humans are free, but that humans are capable of thinking about the long-term consequences of an action before committing it. Individuals should be held accountable, not because they have free will, but because humans have a great deal of behavioral flexibility, which allows their behavior to be influenced by the threat of prison.

This is why it is justifiable to lock away murderers. If it is widely known among the populace that murderers get caught and thrown into prison, this reduces the number of murders. Imprisoning squirrels for stealing peaches, on the other hand, wouldn’t do anything at all, since the squirrel community wouldn’t understand what was going on. With humans, the threat of punishment acts as a deterrent. Prison becomes part of the social environment, and therefore will influence decision-making. But in order for this threat to act as an effective deterrent, it cannot be simply a threat; real murderers must actually face consequences or the threat won’t be taken seriously and thus won’t influence behavior.

To understand how our conception of free will affects the way we organize our society, consider the case of drug addiction. In the past, addicts were seen as morally depraved. This was a direct consequence of the way people thought about free will. If people’s decisions were made independently of their environment or biology, then there were no excuses or mitigating circumstances for drug addicts. Addicts were simply weak, depraved people who mysteriously kept choosing self-destructive behavior. What resulted was the disastrous war on drugs, a complete fiasco. Now we know that it is simply absurd to throw people into jail for being addicted, because addicts are not capable of acting otherwise. This is the very definition of addiction: that one’s decision-making abilities have been impaired.

As we’ve grown more enlightened about drug addiction, we’ve realized that throwing people in jail doesn’t solve anything. Punishment does not act as an effective deterrent when normal decision-making is compromised. By transitioning to a system in which addiction is met with treatment and support, we have effectively moved from the old view of free will to the new view that human behavior is the result of biology, environment, and culture. We don’t hold addicts “responsible” because we know it would be like holding a squirrel responsible for burying nuts. This is a step forward, and it has been taken by abandoning the old views of free will.

I think we should apply this new view of human behavior to other areas of criminal activity. We need to get rid of the old notions of free will and punishment. We must abandon the idea of punishing people because they “deserve” it. Murderers should be punished, not because they deserve to suffer, but for two reasons: first, because they have shown themselves to be dangerous and should be isolated; and second, because their punishment acts as a deterrent to future murderers. Punishment is just only insofar as these two criteria are met. Once a murderer is made to suffer more than is necessary to deter future crimes, and is isolated more than is necessary to protect others, then I think it is unjustifiable and wrong to punish him further.

In short, we have to give up on the idea that inflicting pain and discomfort on a murderer helps to restore moral balance to the universe. Vengeance in all its forms should be removed from our justice system. It is not the job of us or anyone else to seek retribution for wrongs committed. Punishments are only justifiable insofar as they help to protect the community. The aim of punishing murderers is neither to hurt nor to help them, but to prevent other people from becoming murderers. And this, I think, is why barbarous methods of torture and execution are wrong: I very much doubt that such brutality adds anything to their efficacy as deterrents (though I’m sure there is interesting research on this somewhere).

Seen in this way, morality can be understood in the same way we understand language—as a social adaptation that benefits the community as a whole as well as its individual members. Morality is a code of conduct imposed by the community on its members, and deviations from this code of conduct are justifiably punished for the safety of the other members of the community. When the code is broken, a person forfeits its protection, and is dealt with in such a way that future deviations from the moral code are discouraged.

Just as Wittgenstein said that a private language is impossible, so I’d argue that a private morality is impossible. A single, isolated individual can be neither moral nor immoral. People are born with a multitude of desires; and every desire is morally neutral. A moral code comes into play when two individuals begin to cooperate. This is because the individuals will almost inevitably have some desires that conflict. A system of behavior is therefore necessary if the two are to live together harmoniously. This system of behavior is their moral code. In just the same way that language results when two people both use the same sounds to communicate the same messages, morality results when two people’s desires and actions are in harmony. Immorality arises when the harmonious arrangement breaks down, and one member of the community satisfies their desire at the expense of the others. Deviations of this kind must have consequences if the system is to maintain itself, and this is the justification for punishment.

One thing to note about this account of moral systems is that they arise for the well-being of their participants. When people are working together, when their habits and opinions are more or less in harmony, when they can walk around their neighborhood without fearing every person they meet, both the individual and the group benefit. This point is worth stressing, since we now know that the human brain is the product of evolution, and therefore we must surmise that universal features of human behavior, such as morality, are adaptive. The fundamental basis for morality is self-interest. What distinguishes moral from immoral behavior is not that the first is unselfish while the second is selfish, but that the first is more intelligently selfish than the second.

It isn’t hard to see how morality is adaptive. One need only consider the basic tenets of game theory. In the short term, cooperating with others may not be as advantageous as simply exploiting them. Robbery is a quicker way to make money than farming. And indeed, the potentially huge advantages of purely selfish behavior explain why unethical behavior occurs: sometimes it benefits individuals more to exploit than to help one another. Either that, or certain individuals—whether from ignorance or desperation—are willing to risk long-term security for short-term gains. Nevertheless, moral behavior is in general more advantageous, if only because selfish behavior is riskier. All unethical behavior, even if carried on in secret, carries the risk of making enemies; and in the long run, enemies are less useful than friends. The funny thing about altruism is that it is often more gainful than selfishness, as the simple simulation below illustrates.
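To make this concrete, here is a minimal sketch of the idea in Python (my own illustration, not part of the original argument): an iterated Prisoner’s Dilemma with standard textbook payoffs, pitting a pure exploiter against a simple reciprocating strategy. The particular payoff numbers and strategy names are assumptions chosen only for illustration.

```python
# A minimal sketch of the game-theoretic point above (illustrative assumptions only):
# in a single round, defection pays best, but over many repeated encounters
# a reciprocating (cooperative) strategy ends up better off.

def payoff(a, b):
    """Return (payoff_a, payoff_b) for one round. 'C' = cooperate, 'D' = defect."""
    table = {
        ('C', 'C'): (3, 3),   # mutual cooperation
        ('C', 'D'): (0, 5),   # the exploiter gains most in a single round
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),   # mutual defection
    }
    return table[(a, b)]

def always_defect(history_self, history_other):
    return 'D'

def tit_for_tat(history_self, history_other):
    # Cooperate first, then copy the other player's previous move.
    return history_other[-1] if history_other else 'C'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a, score_b = 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = payoff(move_a, move_b)
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(always_defect, tit_for_tat))  # (204, 199): exploitation wins only narrowly
    print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation does far better
```

Run head-to-head over 200 rounds, the defector edges out the reciprocator only slightly (204 points to 199), because after the first betrayal both sides sink into mutual punishment; two reciprocators, by contrast, each earn 600. Cooperation turns out to be the more intelligently selfish strategy over the long run, which is essentially the point being drawn from game theory here.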

Thus this account of morality can be harmonized with an evolutionary account of human behavior. But what I find most satisfying about this view of morality is that it allows us to see why we care both about intentions and consequences. Intentions are important in deciding how to punish misconduct because they help determine how an individual is likely to behave in the future. A person who stole something intentionally has demonstrated a willingness to break the code, while a person who took something by accident has only demonstrated absent-mindedness. The first person is therefore more of a risk to the community. Nevertheless, it is seldom possible to prove what somebody intended beyond the shadow of a doubt, which is why it is also necessary to consider the consequences of an action. What is more, carelessness as regards the moral code must be forcibly discouraged, otherwise the code will not function properly. This is why, in certain cases, breaches of conduct must be punished even if they were demonstrably unintentional—to discourage other people in the future from being careless.

Let me pause here to sketch out some more philosophical objections to the Utilitarian and Kantian systems, besides the fact that they don’t adequately explain how we tend to think about morality. Utilitarianism does capture something important when it proclaims that actions should be judged insofar as they further the “greatest possible happiness.” Yet taken by itself this doctrine has some problems. The first is that you never know how something is going to turn out, and even the most concerted efforts to help people sometimes backfire. Should these efforts, made in good faith, be condemned as evil if they don’t succeed? What’s more, Utilitarian ethics can lead to disturbing moral questions. For example, is it morally right to kill somebody if you can use his organs to save five other people? Besides this, if the moral injunction is to work constantly towards the “greatest possible happiness,” then we might even have to condemn simple things like a game of tennis, since two people playing tennis certainly could be doing something more humanitarian with their time and energy.

The Kantian system has the opposite problem in that it stresses good intentions and consistency to an absurd degree. If the essence of immorality is to make an exception of oneself—which covers lying, stealing, and murder—then telling a fib is morally equivalent to murdering somebody in cold blood, since both of those actions equally make exceptions of the perpetrator. This is what results when you overemphasize consistency and utterly disregard consequences. What’s more, intentions are, as I said above, basically impossible to prove—and not only to other people, but also to yourself. Can you prove, beyond a shadow of a doubt, that your intentions were pure yesterday when you accidentally said something rude? How do you know your memory and your introspection can be trusted? But let me leave off these objections, because I think entirely too much time in philosophy is given over to picking apart your enemies’ ideas and not enough to building your own.

Thus, to repeat myself, both consequences and intentions, both happiness and consistency, must be part of any moral theory if it is to capture how we do and must think about ethics. Morality is an adaptation. The capacity for morality has evolved because moral systems benefit both groups and individuals. Morality is rooted in self-interest, but it is an intelligent form of self-interest, one that recognizes that other people are more useful as allies than as enemies. Morality is neither consistency nor pleasure. Morality is consistency for the sake of pleasure. This is why moral strictures that demand that people devote their every waking hour to helping others, or never make exceptions of themselves, are self-defeating: when a moral system is onerous, it isn’t performing its proper function.

But now I must deal with that fateful question: Is morality absolute or relative? At first glance it would seem that my account puts me squarely in the relativist camp, seeing that I point to a community’s code of conduct. Nevertheless, when it comes to violence I am decidedly a moral absolutist. This is because I think that physical violence can only ever be justified by citing defense. First, to use violence to defend yourself from violent attack is neither moral nor immoral, because at that point the moral code has already broken down. The metaphorical contract has been broken, and you are now in a situation where you must either fight, run, or be killed. The operative rule is now survival, not morality. For the same reason a whole community may justifiably protect itself from invasion by an enemy force (although capitulating is equally defensible). And lastly, violence (in the form of imprisonment) is justified in the case of criminals, for the reasons I discussed above.

What if there are two communities, community A and community B, living next to one another? Both of these communities have their own moral codes, which their people abide by. What if a person from community A encounters a person from community B? Is it justifiable for either of them to use violence against the other? After all, each of them is outside the purview of the other’s moral code, since moral codes develop within communities. Well, in practice, situations like this commonly do result in violence. Whenever Europeans encountered a new community—whether in the Americas or in Africa—the result was typically disastrous for that community. This isn’t simply due to the wickedness of Europeans; it has been a constant throughout history: when different human communities interact, violence is very often the result. And this, by the way, is one of the benefits of globalization. The more people come to think of humanity as one community, the less violence we will experience.

Nevertheless, I think that violence between people from different communities is ultimately immoral, and this is why. To feel it is permissible to kill somebody just because they are not in your group is to consider that person subhuman—as fundamentally different. This is what we now call “Othering,” and it is what underpins racism, sexism, religious bigotry, homophobia, and xenophobia. But of course we now know it is untrue that other communities, other religions, other races, women, men, homosexuals, or anyone else are “fundamentally” different or in any way subhuman. It is simply incorrect. And I think the recognition that we all belong to one species—with only fairly superficial differences in opinions, customs, rituals, and so on—is the key to moral progress. Moral systems can be said to be comparatively advanced or backward to the extent that they recognize that all humans belong to the same species. In other words, moral systems can be evaluated by looking at how many types of people they include.

This is the reason why it is my firm belief that the world as it exists today—full as it still is of all sorts of violence and prejudice—is morally superior to the world of any earlier age. Most of us have realized that racism was wrong because it was based on a lie; and the same goes for sexism, homophobia, religious bigotry, and xenophobia. These forms of bias were based on misconceptions; they were not only morally wrong, but factually wrong.

Thus we ought to be tolerant of immorality in the past, for the same reason that we excuse people in the past for being wrong about physics or chemistry. Morality cannot be isolated from knowledge. For a long time, the nature of racial and sexual differences was unknown. Europeans had no experience and thus no understanding of non-Western cultures. All sorts of superstitions and religious injunctions were believed in, to an extent most of us can’t even appreciate now. Before widespread education and the scientific revolution, people based their opinions on tradition rather than evidence. And in just the same way that it is impossible to justly put someone in prison without evidence of their guilt, it is impossible to be morally developed if your beliefs are based on misinformation. Africans and women used to be believed to be mentally inferior; homosexuals used to be believed to be possessed by evil spirits. Now we know that there is no evidence for these views, and in fact evidence to the contrary, so we can cast them aside; but earlier generations were not so lucky.

To the extent, therefore, that backward moral systems are based on a lack of knowledge, they must be tolerated. This is why we ought to be tolerant of other cultures and of the past. But to the extent that facts are willfully disregarded in a moral system, that system can be said to be corrupt. Thus the real missionaries are not the ones who spread religion, but the ones who spread knowledge, for an increased understanding of the world allows us to develop our morals.

These are my ideas in their essentials. But for the sake of honesty I have to add that the ideas I put forward above have been influenced by my studies in cultural anthropology, as well as my reading of Locke, Hobbes, Hume, Spinoza, Santayana, Ryle, Wittgenstein, and of course by Mill and Kant. I was also influenced by Richard Dawkins’s discussion of Game Theory in his book, The Selfish Gene. Like most third-rate intellectual work, this essay is, for the most part, a muddled hodgepodge of other people’s ideas.

On the Meaning of Life

On the Meaning of Life

What is the meaning of it all? What is the purpose of life, the universe, and everything?

Most thinking people, I suspect, ask themselves this at least once in their lives. Some get rather obsessed by it, becoming existentialists or religious enthusiasts. But most of us deal with this question in a more foolproof way: by ignoring it. Indeed, when you’re enjoying yourself, this question—“What is the meaning of life?”—seems rather silly. It is usually when we feel depressed, anxious, frightened, nervous, or vulnerable that it arises in our minds, often with tremendous force.

I do not wish to delve too deeply into dubious psychoanalyzing about the motivation for asking this question. But it is worth noting why we so persistently ask it—or at least, the reasons why I have asked it. Most obviously, it is a response to the awareness of our own mortality. We are all going to die someday; our whole existence will come to an end; and this is terrifying. We can attempt to comfort ourselves with the thought that we will be remembered or that our children (if we have any) will perpetuate our line. Yet this is an empty form of immortality, not only because we aren’t around to appreciate it, but also because, however long our memory or our descendants last, they too will come to an end. All of humanity will end one day; that’s certain.

The famous “Death of God” (the decline of religion) in Western history caused a similar crisis. If there was no God directing the universe and ordaining what is right and what is wrong; if there was no afterlife but only a black emptiness waiting for us—what was the point? Nihilism seemed to many to be inescapable. Existentialism grew up in this environment; it inherited many of the assumptions of Christianity while (for the most part) rejecting God Himself, which led to not a few tortured, tangled systems of thought that attempted to reconcile atheism with some of our more traditional assumptions about right and wrong and what it means to live a meaningful life.

I had fallen into this same trap by asking myself the question: “If everything will end someday and humans are only a small part of the universe, what is the point?” This question is very revealing, for it exposes some of the assumptions that, upon further reflection, don’t hold water. First, why is something more worthwhile if it lasts longer? Why do we need to imagine an eternal God and an eternal afterlife to feel secure in our meaningfulness? Do people who live to eighty have more of a meaningful life than those who make it to thirty? Put this way, it seems to be a rather dubious assumption. For my part, I can’t figure out what permanence has to do with meaning. And by the way, I also don’t think that the opposite idea—that life is meaningful because it is temporary—is more useful, even though it is a poetic sentiment.

I think all this talk about permanence and impermanence does not get to the essence of the word “meaning.” What is more, it is my opinion that, once we properly analyze this word “meaning,” we will see that this fateful question—“What is the meaning of life?”—will vanish before our eyes. And this is not because life has no meaning, but because the question is based on a false premise.

To begin, let us figure out what the word “meaningful” actually means. To do this, take something that we can all agree has meaning: language. Language is in fact the paragon of meaningfulness; it is a symbolic system by which we communicate. If words and sentences had no meaning, you would have no idea what I’m saying right now. But where does the meaning of a sentence lie? This is the question.

To answer this question, let me ask another: If every human perished in a cataclysmic event, would any of the writing that we left behind have meaning? Would the libraries and book stores, the shop signs and magazines, the instruction manuals and wine labels—would they have meaning? I think they would not.

We don’t even have to engage in a hypothetical here. Consider the Indus Script, a form of writing developed in ancient India that has yet to be deciphered. Researchers are now in the process of figuring out how to read the stone tablets. How should they go about doing so? They can weigh each of the tablets to figure out their mass; they can measure the average height and thickness of the lines; they can perform a chemical analysis. Would that help? Of course not. And this is for the obvious reason that the meaning of a tablet is not a physical property of an object. Rather, the meaning of the script lies wholly in our ability to respond appropriately to it. The meaning of the words exists in our experience of the tablets and our behavior related to the tablets, and is not a property of the tablets themselves.

I must pause here to address a philosophical pickle. It is an interesting debate whether the meaning of language exists in the minds of language-users (i.e., meaning is psychological) or in the behavior of language-users (i.e., meaning is social). This dichotomy might also be expressed by asking whether meaning is private or public. For my part, I think that there is a continuum of meaningfulness from private thoughts to public behavior, and in any case the question is immaterial to the argument of this essay. What matters is that meaning is a property of human experience. Meaning is not a property of objects, but of how humans experience, think about, and behave toward objects. That’s the important point.

The reason the Indus Script is meaningless to us is therefore because it doesn’t elicit from us any consistent pattern of thoughts or actions. (Okay, well that’s not entirely true, since we do consistently think about and treat the tablets as if they were ancient artifacts bearing a mysterious script, but you get my point.)

By contrast, many things besides language do elicit from us a consistent pattern of thoughts and actions. Most people, for example, tend to respond to and think about chairs in a characteristic way. This is why we say that we know what chairs are for. The social purpose of chairs is what defines them—not their height, weight, design, material, or any other property of the chairs themselves—and this social purpose exists in us, in our behaviors and thoughts. If everyone on earth were brainwashed into thinking of these same objects as weapons rather than as things to sit on, then chairs would have a different meaning for us.

Ultimately, I think that meaning is just an interpretation of our senses. A camera pointed at a chair will record the same light waves reflected from the chair as I will; but only I will interpret this data to mean chair. You might even say that meaning is what a camera or any other recording device fails to record, since such devices can only record physical properties. Thus meaning, in the sense that I’m using the word, depends on an interpreting mind. Meaning exists for us.

I hope I’m not belaboring this point, but it seems worth a little belaboring, since it is precisely this point people forget when they ask “What is the meaning of life?” Assuming that most people mean “human life” when they ask this question, then we are led to the conclusion that the question is unanswerable. Human life itself—as a biological fact—has no meaning, since no fact in itself has meaning. In itself, “human life” has no point, in the same way that the moon or sawdust has no point. But our experience of human life certainly does. In fact, by definition the human experience comprises every conceivable meaning. All experience is one endless tapestry of significance.

I see this keyboard below my fingers and understand what it is for; I see a chair to my right and I understand its purpose. I see a candle flickering in front of me and I find it pretty and I like its smell. Every single one of these little experiences is brimming with meaning. In fact, I would go further. I think it is simply impossible for an intelligent creature to have a single experience that doesn’t have meaning. Every time you look at something and you understand what it is, the experience is shot through with meaning. Every time you find something interesting, pretty, repulsive, curious, frightening, attractive, these judgments are the very stuff of meaning. Every time you hear a sentence or a musical phrase, every time you enjoy a sunset or find something tasty—the whole fabric of your life, every second you experience, is inevitably meaningful.

This brings me to an important moral point. Humans are the locus of meaning. Our conscious experience is where meaning resides. Consciousness is not simply a reflection of the world, but an interpretation of the world; and interpretations are not the sorts of things that can be right or wrong. Interpretations can only be popular and unpopular.

For example, if you “misunderstand” a sentence, this only means that most people would tend to disagree with you about it. In the case of language, which is a necessarily strict system, we tend to say that you are “wrong” if your interpretation is unpopular, because unless people respond to words and sentences very consistently, language can’t perform its proper function. “Proper” meaning is therefore enforced by language users; but the meaning is not inherent in the words and sentences themselves. In the case of a very abstract painting, by contrast, we tend not to care so much whether people interpret it the same way, since the painting is meant to elicit aesthetic sensations rather than transmit specific information. (In practice, this is all we mean by the terms “objective” and “subjective”—namely, that the former is used for things most people agree on while the latter is used for things that many people disagree on. Phrased another way, objective meanings are those to which people respond consistently, while subjective meanings are those to which people respond inconsistently.)

This is why meaning is inescapably personal, since experience is personal. Nobody can interpret your experience but yourself. It’s simply impossible. Thus conscious individuals cannot be given a purpose from the “outside” in the same way that, for example, a chair can. The purpose of chairs is simply how we behave toward and think about chairs; it is a meaning imposed by us onto a certain class of objects. But this process does not work if we try to impose a meaning onto a conscious being, since that being experiences their own meaning. If, for example, everybody in the world treated a man as if his purpose were to be a comedian, and he thought his purpose was to be a painter, he wouldn’t be wrong. His interpretation of his own life might be unpopular, but it can never be incorrect.

Human life, either individually or in general, cannot be given a value. You cannot measure the worth of a life in money, friends, fame, goodness, or anything else. Valuations are only valid in a community of individuals who treat them as such. Money, for example, is only effective currency because that’s how we behave towards it. Money has value, not in itself, but for us. But a person does not only have value in the eyes of their community, but in their own eyes, and this value cannot be overridden or delegitimized. And since your experience is, by definition, the only thing you experience, if you experience yourself as valuable nobody else’s opinion can contradict that. A person despised by all the world is not worthless if she still respects herself.

In principle (though not in practice) meaning is not democratic. If everybody in the world but one thought that the point of life was to be good, and a single person thought that the point of life was to be happy, there would be no way to prove that this person was wrong. It is true that, in practice, people whose interpretations of the world differ from those of their community are usually brought into line by an exercise of power. An Inquisition might, for example, prosecute and torture everyone who disagrees with it. Or else a particular interpretation imposes itself because an individual who chooses to think differently cannot function in the community. Thus if I behaved towards money as if it were tissue paper, my resultant poverty would make me question this interpretation pretty quickly. But it’s important to remember that a king’s opinion of coleslaw isn’t worth any more than a cook’s, and even though everyone thinks dollar bills are valuable, it doesn’t change the fact that they’re made of cotton. Power and practicality do not equal truth.

Thus we find that human life doesn’t have meaning, but human experience does; and this meaning changes from individual to individual, from moment to moment. This meaning has nothing to do with whether life is permanent or impermanent. It exists now. It has nothing to do with whether humans are the center of the universe or only a small part of it. The meaning exists for us. We don’t need to be the center of a divine plan to have meaningful lives. Nor is nihilism justified, since the fact that we are small and temporary creatures does not undermine our experience. Consider: every chair will eventually be destroyed. Yet we don’t agonize about the point of making chairs, since it isn’t important whether the chairs are part of a divine plan or will remain forever; the chairs are part of our plan and are useful now. Replace “chairs” with “our lives” and you’ve hit the truth.

You might say now that I’m missing the point entirely. I am interpreting the word “meaning” too generally, in the sense that I am including any kind of conscious interpretation or significance, explicit or implicit, public or private. When most people ask about the meaning of life, they mean something “higher,” something more profound, more noble, more deep. Fair enough.

Of course I can’t hope to solve this problem for you. But I will say that, since meaning resides in experience, and since all experience is personal, you cannot hope to discover the meaning of all human life. The best you can hope for is to find meaning—“higher” meaning—in your own experience. In fact, it is simply presumptuous and absurd to say “This is the meaning of human life,” since you can’t very well crawl into another person’s head and interpret their experiences for them, much less crawl into the heads of all of humanity. And in fact you should be happy about this, I think, because it means that your value can never be adequately measured by another person, and that any exterior criterion that someone attempts to apply to you cannot delegitimize your own experience. But remember that the same applies to your attempts to measure others.

I will also add, just as my personal advice, that when you realize that meaning exists only in the present moment, since meaning exists only in your experience, much of the existential angst will disappear. Find the significance and beauty of what’s in front of your eyes. Life is only a succession of moments, and the more moments you appreciate, the more you’ll get out of life. Don’t worry about how you measure up against any external standard, whether it be wealth, fame, respectability, love, or anything else; the meaning of your own life resides in you. And the meaning of your life is not one thing, but the ever-changing flux of experience that comprises your reality.