A dad’s eye view of a daughter’s senior prom

She bought a $400 dress, or more accurately, her mom and I bought her a $400 dress, and before she even had a date. But there was never really any doubt she’d get a date. In her clique of friends, everyone always gets a date, whether it’s to the prom or to the homecoming dance or to the Sadie Hawkins dance or to any other big and official event that makes up their social calendar. The identity of the date means little. Dates are arranged through friends and are between casual friends or acquaintances. Romance hardly matters. It is the rarest of things when a couple attends an event on the social calendar because they actually feel some romantic attraction for each other. So not only was there little danger she wouldn’t get a date, there was practically no chance the $400 dress would end up washed away in a tidal wave of teenage hormones. So at least she might be able to wear it again, even if I know that’s not likely to ever happen. What else are social calendar events for except buying new dresses?

The dress was poppy-colored, like her Mom’s. She’d seen the old pictures. Maybe she was trying to emulate, and one-up, her Mom. But she doesn’t know anything of her Mom’s experiences at the prom except what the official prom photos tell, and they tell nothing. I know something of her experiences (I was her date), but I’m not telling. But I don’t know my daughter’s motivations. I don’t, or didn’t, even know that “poppy” is another way to say bright pink. Or, why my daughter knows the color of a poppy bloom. Isn’t that the bloom that provides the world with heroin? The drug Keith Richards called the most aptly named in the world, because she is a real bitch? Does my daughter have a fucking poppy garden tucked away somewhere?

The kids at my daughter’s high school generally pair off for their prom dates with people who are their opposite-sex friends, people for whom they think they feel some emotional, rather than sexual, attraction, a category of friendship which, as Billy Crystal’s character explained in When Harry Met Sally, does not actually exist. The kids’ confusion is a function of their not knowing their own minds, which is understandable; after all, they are just baby adults, the enfants terribles of the reproducing caste. It’s not that two people can’t be emotionally attracted to each other, but that they can’t be of the opposite sex and be solely attracted emotionally as friends—opposite sex relationships of any significance always have a sexual component. Or, perhaps I should say, opposite gender relationships with two heterosexual people, one of whom identifies as a male and the other as a female, cannot be purely platonic. The attraction always resolves to something sexual. From where would the emotional attraction otherwise arise? Out of the platonic blue? When it comes to (heterosexual) men and women, there is no such place. Most (heterosexual) men think (heterosexual) women are batshit crazy and only begrudgingly ever engage them in conversation. Most women think men are assholes, perhaps because of what they correctly believe about what men think of them. Being from what almost seem like two different species, neither trusts the other. All that can ever be hoped when the two nonetheless come together because of the desire for sex—the most powerful of attractants for H. sapiens, because it must be, else the species would surely and quickly die out—is that some sort of uneasy truce arises, with skirmishes relegated to only the periphery—the periphery of the sex, not the friendship. The friendship really doesn’t matter, and never did. It was always about the sex.

The kids my daughter hangs with save their rendezvous with those they find sexually attractive for casual hookups, not events on the official social calendar. These casual affairs usually involve alcohol, or some other drug (perhaps a poppy derivative?) because shortly after kids become self-conscious about their bodies with the arrival of puberty and the development of mature sexual organs (after the fall, if you will, when they begin wearing fig leaves to cover their nakedness), they figure out that drugs like booze, weed, coke, meth, dope, etc., have the marvelous capacity to lower their inhibitions. Tequila, as the song goes, makes her clothes fall off. Events on the official social calendar are all about inhibitions remaining firmly stuck in place (or so it goes for these children of affluence who are ever mindful of status—the official social calendar wasn’t so stuffy for me back in the day). So the only way they can scratch the itch is in clandestine meetings in illicit places while under the influence. One of the favorite hangouts of my daughter’s group is an old landfill where they build a bonfire and pass the bottle around like a bunch of hobos to screw up their courage, and then wander off to the cars two-by-two.

I know all this because my daughter isn’t afraid of telling me most anything. I won’t say she tells me everything, but she tells me a lot—more than I really want to know. I think she feels safe doing so because I’ve never tried to pretend I was an angel at her age. I’ve told her that I drank, smoked weed occasionally, and got as much sex as a seventeen- or eighteen-year-old boy could rustle up. I had as good a time as I could get away with. And isn’t that the whole point of life, after all?

The prom festivities began on a Saturday afternoon at 4:30 pm, when the daughter’s prom group was to meet at the idyllic campus (ivy-covered walls and all) of a local university where they would present themselves to their parents for pictures, because what’s the point of having kids if you can’t photograph their every breath? (The prom-goers had split themselves into groups of ten or fifteen couples whose membership was usually established along female clique lines, not male pack lines, presumably because females are more catty and bigoted towards non-clique members, particularly if the woman not belonging to the clique is a stranger and is pretty. Males are more readily accepting of other males, at least when they aren’t seen as rivals for females, and since the males would only belong to the group if they were going with a female of the clique, there would be less tension among them. Besides, if there were ever a day when male packs got to overtly decide much of anything in H. sapiens society, that day is long past.) It turns out that all the prom groups in the high school were doing the same thing. So by the time they had all arrived, around 5:00 pm, there were about 250 kids dressed to the nines in formal evening wear milling around the campus (which had not given permission to be invaded in such a manner, but apparently had no objections, probably considering it good public relations with potential future customers), with their parents following them around with cell phone cameras, and even a few camera-only devices (the mono-function things that are just called cameras, if I correctly recall), having them pose so they could take pictures. Thus was the whole reason for the prom fulfilled.

Or, at least that’s what the mother of one of the girls in my daughter’s clique said. While her husband followed (or stalked, your preference) the kids around, clicking photos with his real camera that had a big, zoomy lens and a strap by which it might occasionally just dangle from his neck instead of being grasped and at the ready, as it was the whole time I was there, she confided in me that she hated all these picture-taking ceremonies—that she thought it ridiculous that nothing ever happens anymore without someone taking a picture of it and posting it to the internet. She said that the picture taking had become the event; that no one actually ever did anything. Nobody had the time to actually live life; they were too busy photographing it. All people did anymore was pose at doing things so they could record it digitally, the ultimate end of which is as yet to be ascertained. Basically she said the world was full of phonies and posers. Holden Caulfield couldn’t have said it better.

And I agreed. From the fake motorcycle gangsters hilariously parodied in the movie Wild Hogs starring Tim Allen, et al.; to the guys riding bicycles in a contrived peloton on a busy thoroughfare through my neighborhood, each pretending to be Lance Armstrong, who was himself a fraud; to the very existence of a thing called a selfie stick; to the whole life and times of the Kardashians, and especially the Bruce Jenner subplot of the pointless tale; to every other faux reality show out there, each proving in its own way, like an unhappy family, that people are like electrons—the act of observing them changes their behavior; to the faux outrage generated by racialist hustlers over black men being gunned down in the streets when cops are doing the killing while ignoring the multitudes more who are killed by their own; to the petit-bourgeois bureaucrat and technocrat parents of these prom children who thought that a picture imbues an event with meaning, getting things exactly bass ackward for the umpteenth time in their lives; etc., etc., ad nauseam: She was right. It is a nation of posers and phonies. There aren’t any real events or real people anymore; there are only events contrived to make good pictures, “good” meaning anything that casts the subject in a favorable light. Authenticity died with Facebook and iPhones and Instagram. Lives are contrived to fit this or that perceptual slot in the public psyche, meaning reality-distorting media like Facebook and iPhones and Instagram deform lives to the shapes that will fit them. Social media taught people to lie, and to lie continuously, and hammered home its necessity. So yes, I agreed, the whole thing was a vapid, vacuous fraud. But while doing so, I sort of wondered how well she got along with her husband, as he strode around the premises, clicking that camera’s shutter at everything he could coax to stillness for a moment or two. He was all in. But neither she nor I was in at all. We were revolutionaries, kindred spirits. I maybe woulda found a friend, had she not been of the opposite sex. And married. Even if her reproductive years were well behind her. That’s the irony of the When Harry Met Sally rule. It has everything to do with sex but almost nothing to do with reproduction, so it doesn’t matter whether a couple would be reproductively viable. There still can’t be any deep emotional attachment between heterosexual men and women that doesn’t turn sexual, at least in the imagination of one of them.

Actually, the festivities didn’t really start with the picture taking. They were officially kicked off the prior Friday afternoon, when all the girls in my daughter’s clique checked out of school early to get their nails done. Yes, you read that right. They checked out of school to get their nails done. There is a place in my brain that makes it somehow hard for me to imagine, even though I know it happened and have now written about it, that I let my daughter check out of school so she could get her nails done for a prom that wasn’t even until the evening of the next day, a day on which she had nothing better to do than her nails. What will be left for her to look forward to if she ever gets married? How much more pampered could an entitled American daughter possibly be? Extravagances like these make me long for hard times, for something more severe even than the Great Depression. I want these pampered little princesses my daughter runs with, and her included, to experience times so hard that maybe they have to one day choose, like women in Greece during the Nazi occupation did, which child to feed and which to let die. When I see the contented confidence with which these American brats regard the future, I want times so bad that anarchy reigns, that there is rape, pillage and plunder on a scale to exceed that of the collapse of the Western Roman Empire fifteen hundred years ago, when Roman women didn’t contemplate whether their being drunk prevented their ability to consent to sex, but instead contemplated whether having been gang-raped while they were stone-cold sober by bands of roving barbarians meant they should commit suicide. In short, I want these narcissistic, selfish, entitled little bitches to learn that though theirs might be a rarified and exalted existence today, everything can change on a dime, so they ought to appreciate it.

But I know it’s terribly gauche to think such thoughts. And probably pointless, too. It’s highly doubtful they’ll get their comeuppance in my lifetime. As outrageously easy as they have things now, I doubt they have quite reached the pinnacle of pamperedness. But their rarified status is not a birthright except in the imagination of the majority culture in which they were born. Other cultures beg to differ, sometimes violently. And the universe doesn’t care.

Yet, hope is the antidote for my despair. I hope for my daughter and her friends that one day American women won’t be so unfortunate as to live existences so banal and purposeless that they think checking out of school to get someone else to paint their fingernails is their birthright and obligation.

I must admit, my apocalyptic vision of what would benefit these women didn’t arise fully formed on the day I let my daughter check out of school to get her nails done. It has been simmering in the kettle of my subconscious for some time. Nor did it first gurgle into consciousness a couple of weeks before the prom, when my daughter came home with the news that we owed her a new car and a dog to compensate her for our having so audaciously sold the house where she grew up. Really, who does she think she is? That’s silly. She knows who she is. She’s an American Woman. I never had any doubt that the song of the same name by the Guess Who, a band from Canada who presumably could see things a bit more objectively than others from the States, had things about right. “American Woman…stay away from me.” Indeed. Thankfully, the daughter’s going off to college later this year. Now I know why college is so expensive. The college administrators have come to the realization that people will pay almost anything for someone to take their teenaged kids, especially their teenaged daughters, off their hands.

And no, I don’t feel even a twinge of guilt for feeling this way. For one, it’s perfectly natural to develop a healthy loathing for kids as they approach some semblance of adulthood. Parental/offspring violence at the time when the offspring are ready for independence is one of the most common sorts of violence among mammals (if generally not the most deadly). Besides, I learned a long time ago what neuroscientists and philosophers and theologians are only coming around to now—the mind can’t control what the heart feels. It can only, at best, explain why the heart is feeling as it is. The mind need not be controlled by the feelings—the ability to have an impulse and not act upon it is a uniquely human attribute. I figure pretty much everyone feels this way to some extent or another, but that only the rare few are brave enough to admit or act upon it. Perhaps some don’t mention such feelings because they want to make them go away by ignoring them. It never works. The way to make a feeling go away is to accept it into the heart and mind and experience it. The requisite understanding when doing so is that it is perfectly natural for parents to feel this way about their young adult children when it’s time for them to leave. There’s a reason God makes teenagers generally so unlovable to parents. And for me personally, what lies at the confluence of my feelings for my daughter about now is how utterly loathsome I find the American culture she is more or less being forced to embrace as she prepares to leave.

My new kindred spirit and I (the woman who can’t be a friend because of the When Harry Met Sally rule) also discussed how the logic of every event on a child’s social calendar must now be followed to its illogical, extravagant end. If a night at a sit-down restaurant was a nice thing for prom kids to have done fifty years ago, why not a night at a dining club atop the highest building in town, at well over $200 a head for dinner? If a limo to drive the kids to the prom and afterwards was nice twenty-five years ago, why not a bus to ferry whole groups of them so that the journey becomes a part of the party? And if we get them a bus, then why not rent them a lake house where they can continue with an all-night after-party that lasts well into the next day (duly chaperoned, of course)? The prom is no longer just a night on the official social calendar. It is a weekend gala celebration.

The absurdity of showering all these resources upon a bunch of dumbass teenage girls would not have been complete without some drama, which was supplied by the refusal of the boys’ parents who paid for the bus and the lake house to allow any of the parents of the females to come along as chaperones to the after-party. A tizzy developed, oddly enough led by the parents of one of the boys, the boy who happened to be my daughter’s date (probably some sort of behind-the-scenes power struggle to which I was not privy), over whether thirteen virginal (maybe) young women should be sent to a party solely hosted by a group of young men and their dads (the moms were to be at a separate lake house). It did seem kind of creepy. And I did experience pangs of what it meant to have the responsibility, as dads once did, of controlling sexual access to a daughter.

It is said that with great power comes great responsibility. I say that the cliché has causation backward—great responsibility requires great power. If I had lived in a time when I was responsible for delimiting sexual access to my daughter (which was more or less the case in most cultures until the modern era), it would have been necessary for me to have had great power of control over her, because without it, I could not have met my responsibilities as a dad. No dad so burdened would have allowed his daughter to go spend the night at a lake house with a bunch of boys and their dads, not even if she were in the company of twelve other girls. I saw in the brief kerfuffle why, in cultures where dads still suffer the burden of controlling access to their daughters’ sexuality, the daughters are kept covered up and away from boys and men as much as possible. It’s just easier to control things that way. The penalty for a dad failing in his responsibilities is family shame and the likely obligation of having another mouth to permanently feed because the daughter can’t be married off. I can also see where dads would want to blame it on the daughter if she has illicit sex, whether forcibly or not (yielding the honor killings that Western sensibilities so abhor). The daughter’s sexuality is not nearly as valuable as a marriage commodity when it is known she has had sex, and the dad is the one who has to deal with the consequences of her besmirched reputation, whatever its source.

I am a laissez-faire, and fairly lazy, sort of guy, so I’m glad that I don’t suffer the responsibility of controlling access to my daughter’s vagina. Besides, in American culture it would be a responsibility impossible to fulfill. It would just make me a blame sink more than I already am. The same cultural forces in the West that effectively relegated the family to irrelevancy over the last hundred years would make controlling sexual access to a daughter well-nigh impossible for a dad. Even as the imagery of a father with a shotgun sitting on a porch waiting on his daughter’s suitor still resonates, especially among country music fans (if the lyrics of the songs are any indication), that world, like most of the world that country music sings about, is long past. Now the responsibility for a young woman’s sexuality is delegated to the young woman herself. With the responsibility comes the power. And there is no power and responsibility greater than that of the womb. So the world is basically now the charge of teenaged women, which probably goes a long way to explaining why young American women are so contumaciously impossible to live with. It’s not clear that any of these developments should be construed as progress, except perhaps in a selfish way, for me. Since my daughter has initial and final say on her sexual life, I don’t have the burden of culling suitors for suitability, or really even offering an opinion on the matter. Society goes to hell, but I get off scot-free.

I observed to my platonic new kindred spirit (though I must admit that she started looking prettier and prettier somehow while we talked—unintentional proof that the When Harry Met Sally rule is robust) that the utter banality of American society, along with the emasculation of its men, particularly in their roles as husband and father, might explain why young men and women in this country and other similarly benighted countries are romanticizing, and even volunteering for, ISIS and other fundamentalist sects that are fighting against what they see as a morally bankrupt West. She disagreed. Or, perhaps, she had never considered it. It is perhaps something of a radical thought to imagine that a bit of moral bankruptcy in mainstream society might yield a romantic fervor in youth to fight against it, the costs be damned.

But I think it perfectly plausible. From where might a life of meaning be derived when the sum total of social developments over the last ten years can be summarized in three proper names: Apple, Facebook and Google? Especially since all that the three (and, of course, there are others) have done is make the culture phonier and more banal, magnifying the very existential angst they are in some measure in the market to relieve?

Hell, if I weren’t fifty-two years old, I might volunteer to fight myself (in fact, maybe because I’m fifty-two I ought to volunteer to fight for one side or the other—I’ve never understood why we waste our youth on the battlefields while old men sit idly by, directing the effort and waving the flag. Ants are smarter. The oldest members of the ant colony end their days defending it). Even if I wouldn’t go so far as to enlist in ISIS, I certainly would never have agreed in the Cold War era to serve in the US Army protecting and defending the Constitution of the United States had I known that the next thing we’d do after we ripped down the Iron Curtain and supposedly made the world safe for democracy would be to fight a tin-pot dictator for his oil, in the process getting us embroiled in a never-ending war in a place we’ve got no business being.

I can understand, even if I can’t condone, Westerners who volunteer to fight for ISIS, or for that matter, against it. There really is no better way to banish existential angst than the prospect of death. And the spirit of H. sapiens, particularly the male portion of it, needs a good fight because it is built for a good fight. And I can see where women follow along because they have always been romantically inclined to favor fighting men. They don’t want emasculated men any more than the men want to be emasculated.

Alas, none of those kids going promming last weekend are likely to ever fight for anything. They’re so immersed in American culture they don’t even know it’s a toilet bowl full of shit in which they’re swimming. They don’t need to keep repeating to themselves “This is water” like the fish in David Foster Wallace’s famous commencement speech; instead it should be, “This is sewage…this is sewage.” They’ve grown up as heirs to the great American Imperial Fortune, in a worse way even than the Baby Boomers, and believe it their birthright to leave school to get their nails done, just because. They sneer at the Vietnamese women who primp and pamper them in the nail salon, thinking themselves superior to lowly immigrants who can’t even speak English. They just don’t get (or simply refuse to consider) that only a blink of the historical eye has passed since their ancestors were also lowly migrants, scraping to eke out an existence in this sometimes harsh but always potentially bountiful land. (Neither do they get that the manicurists can speak the language well enough to understand their disparaging comments and attitudes—it is because the customers don’t understand Vietnamese that the manicurists speak it among themselves.) It’s doubtful any of these American brats will realize and appreciate their good fortune unless it is taken from them. I’d say that day couldn’t come too soon.

Should the State of Iowa send Henry Rayhons to prison for life for having had sex with his dying wife?

Nearly four hundred years ago, René Descartes observed that we know we exist because we are aware of the fact that we are pondering the question. But he took pains to point out that this is about all we can know for sure. We can’t be certain the whole rest of the world, including all the people in it, isn’t just a phantasmagorically grand illusion (even as he didn’t think that was actually the case, because why would God be so cruel?). I think, therefore I am; that is all I can know for sure.

Modern science has partially proved and extended this Cartesian reductionism, empirically showing that we are only ever aware of a small fraction of what our minds are consumed with thinking. Consciousness comprises only a small part of our neural activity. We know we exist because we can think about existing, but we know almost nothing of anything else we might be thinking outside of our conscious thoughts, and our subconscious thoughts comprise the vast majority of our thinking. It hardly needs remarking that if we know almost nothing of what we ourselves are thinking, we can scarcely be expected to know what others think, or whether in fact others actually exist. Yet an assumption that we have the ability to ascertain the thoughts of others forms the foundation of sexual assault law and jurisprudence as it has developed over the last half century in the West.

Sexual activity is okay, or not, depending on the consent of the people engaging in it. And each party must guess at the other party’s consent. If either party is deemed to have ignored the lack of willingness of the other and to have proceeded to engage them sexually, then today’s jurisprudence provides that a sexual assault of some sort occurred. Failing to successfully read another person’s mind in the boudoir or bedroom can yield a life sentence. And let’s not be naïve. The mind-reading which must take place is that which a man must do of a woman. Men who can’t read the female mind (the vast majority of men, one can reasonably presume) risk life and liberty when engaging a woman sexually.

And it is not just casual affairs in which men risk all just for the temporary bliss of intercourse. Even married men who seek relations with their wives—women who had presumably consented to sex with them and only them when they agreed to the marriage—must gain their assent if they are to protect themselves from charges of rape or sexual assault, as Henry Rayhons found out when he was charged with sexual assault for allegedly having sex with his wife.

Henry Rayhons is a 78-year-old Iowa farmer and state legislator. After his longtime first wife died, he took up with a widow, Donna Lou. They were each in their early seventies when they married. After a few years together that were by all accounts blissful and loving, Donna Lou was diagnosed with dementia/Alzheimer’s/senility (or whatever the name du jour is for the mental deficiencies that arise with old age).

Eventually Mrs. Rayhons had to be admitted to a nursing home for full-time care. Her daughters from her first husband made the arrangements. Mr. Rayhons did not like the meddling of the daughters, but went along with them. (I would speculate that one or both of the daughters had Mrs. Rayhons’ power of attorney, which is not unusual for the adult children of people who remarry late in life; otherwise he could have ignored their suggestions.) A part of the care plan devised by the staff at the nursing home and the daughters (but not Mr. Rayhons) provided that Donna Lou was incapable of consenting to sex. (Why exactly the staff needed to worry over whether a nearly 78-year-old woman had sex or not doesn’t seem to confound anyone but me. My guess is that it wasn’t the nursing home staff but the daughters who demanded the acknowledgement of her incapacity for consent.) Mr. Rayhons knew of the treatment plan, and maybe (the evidence is sketchy) had sex with his wife anyway. There is no evidence that she objected, or for that matter, even knew it happened. That’s sort of what being mentally incapacitated is all about. But there is no question that she had consented to sex with Mr. Rayhons many times before. They were, after all, married. For all anyone knows, what little part of Mrs. Rayhons’ conscious mind remained was eager for the physical closeness that accompanies sex (which, really, is all that sex for a woman of her age is good for anyway). It could just as well be imagined that Mrs. Rayhons didn’t recognize her husband, and for all she knew was having sex with a stranger. There is no way to know exactly what she was thinking, and not just because such things are philosophically unknowable. At some point a dementia patient won’t even meet Descartes’ reductionist view that thinking is all we know for sure of being.

Two weeks after the alleged incident, Mrs. Rayhons died. A week later, Mr. Rayhons was indicted for the ordinary litany of nonconsensual sexual contact crimes. The State of Iowa was Johnny-on-the-spot, moving quickly to protect its citizens from this sexually ravenous beast.

Less than two hundred years ago, wives were considered the husband’s property, and not just in the Antebellum South, but also in supposed havens of progressivity like New York. The notion that a husband could rape his wife was so absurd as to not even be within the realm of contemplation. When does property have the right to object to its use? And hadn’t the wife anyway pledged her consent to having sex with her husband as part of the marital transaction?

And ‘transaction’ aptly describes what a marriage was all about. A father owned his daughter until he gave her to her husband. Her sexuality was a valuable commodity, her womb an asset of the family to be bartered and bargained away for advantage (there are historically some cultural differences as to whether a daughter was overall perceived as a liability or an asset, but all cultures, until recently, have recognized the value of sexual access to fertile females, and rarely left it to a young woman to decide for herself how to use it). Until well into the nineteenth century in the US, wives were practically treated as the husband’s chattel, as his personal property, to do with as he pleased. (As an aside, it is quite remarkable how women romanticize the Antebellum South—their status then was roughly tantamount to that of a moderately well-regarded plantation slave—but such is the nature of the female heart. The popularity of Gone with the Wind and the popularity of Fifty Shades of Grey among women are neither unrelated to each other nor anomalous.)

The Talmudic Hebrew culture was among the first to afford wives and women a significant, if mainly unofficial, status as partners, not property, in marriage. While men could divorce their wives without their assent and wives did not have a reciprocal right, according to Everyman’s Talmud, Hebrew society ensured that the inequities that might inhere in such rules were otherwise ameliorated. That it took so long for Western societies to come around to the idea of affording women and wives an equitable footing with men speaks to how overblown our sense of progress is, if nothing else. And to the reality that in the US, the state was initially very weak while the family, customarily headed by the husband, was quite strong. Wives were property when the state wasn’t strong enough to make them full-fledged citizens.

By the fin de siècle, women in the US were rapidly becoming the Talmudic equivalent of men. By World War Two, women had been exercising the right to vote for two decades. And a few more decades after the war’s end, Roe v. Wade came along, representing the idea that a woman’s womb was hers to do with as she wished. This was progress of a sort, even as it destroyed in less than a century thousands of years of the means by which culture had settled sexual relationships between the males and females of H. sapiens.

Except in the Soviet Union and a few other Eastern bloc countries, it wasn’t until well after the War, roughly about the time of Roe v. Wade, that the West finally recognized the crime of marital rape (Communist Eastern Europe, having stronger and more focused state apparatuses, recognized the crime much earlier—as early as the 1920s in the Soviet Union). It stood to reason that if a woman had final say over her sexuality as an unmarried woman, she didn’t give up the right to refuse sex just because she had voluntarily entered a covenant to love, honor and respect, even one whose main purpose was the establishment of an exclusive sexual relationship.

But the right to refuse sex in the marital relationship was of only dubious value. If the husband forced himself on the wife, an accusation of rape was tantamount to a suit for divorce. The recognition of the crime of marital rape by the state was only relevant to crumbling, abusive relationships that the wife wanted to dispense with anyway. Even in Talmudic Hebrew culture women had ways to escape abusive, unhappy marriages. Where we might see progress, others with a more objective perspective of history would see a cyclical rediscovery of well-tempered wisdom.

But poor Henry Rayhons. The new rights afforded to women, and extended to them as wives, were never meant to destroy a life and the memory of a marriage well lived. His wife could not have consented to sex in the way it is generally assumed a woman might consent to sex. But she wasn’t a drunk college girl getting ravished by horny frat boys either. She had consented to sex with Henry when she married him. She couldn’t have withdrawn or affirmed her consent. She was mentally incapable of doing either.

Hard cases make bad law. Rayhons’ is a hard case. He is not the sort of rapist the law has in mind when it provides a cause of action against forced sex, not even in the context of marital rape. He wasn’t an abusive husband. He just did (maybe) what he and his dying wife had done before as an expression of their love. It certainly had no reproductive repercussions, which have always been the main concern of established rape law and jurisprudence.

The truth of the matter is that Henry wouldn’t be in the dock except that his new wife had existing daughters who did not care what the law’s proper intent was, but were intent on punishing someone they saw as an interloper in their relationship with their mother. So Henry now has to face the utterly ridiculous prospect that his life will end in ignominy, a convicted rapist for having maybe made love to his dying wife.

Progress, even in the expansion of women’s rights, is not an unmitigated good. Read more about the details here.

 

Will an Apple Watch make you happy?

“Here we go again,” I thought, when my college junior son announced he “needed” a new guitar because of his unpaid gig playing in a praise band at the church he attends while at school. (Yes, there are college kids who don’t drink or do drugs or have casual sex, and who regularly attend church and are involved in campus religious organizations. While I wasn’t that college kid when I was in school, that’s my boy, the one who’s had two bone marrow transplants. It’d seem to me he might be mad at God, for all the hell God’s put him through. But he’s sanguine in his faith. I think he figures God picked him because He knew he was one of the few who could handle it. I sure as hell couldn’t have.) The boy could not possibly need a new guitar. He has five guitars, a banjo, a ukulele, and a trombone (from his marching band days). Of the guitars, he has two straight acoustics, one straight electric, and two acoustic/electrics. One of the latter two is a Yamaha that is more acoustic than electric and the other is a Gibson Les Paul that is more electric than acoustic.

I got the Gibson for him shortly after his being diagnosed a second time with leukemia. At $700, it was a pretty penny to pay for a guitar for an amateur guitarist who would probably be dead within the year. I did it out of guilt, feeling bad for having been so audacious as to think that having a kid would be the one thing I did that didn’t turn out disastrously, as if this would be the one instance when I didn’t have the reverse Midas touch (no, I don’t generally feel this way, but you have a kid who is stricken with leukemia the second time, and see how that makes you feel about whatever else in life you may have accomplished). I knew the boy loved the process of acquisition, and offered to let him pick out the new guitar to give him something better to do than mull over his fate. His acquisitive soul settled on the most expensive guitar he figured he could get away with wanting (of course). He knew how bad I felt about the leukemia coming back. His calculus was spot on. I bought the expensive guitar for him as penance for my sins, though I doubt it worked, as I still felt as guilty as ever, and for what exactly, I didn’t really know. Five years later, and the guitar has practically never been taken out of its also-quite-expensive case. Yet it is practically identical to the one he says he now wants for playing with the praise band.

I know my son. It’s the getting that keeps him going, not the having. At least while he was shopping for the guitar, it managed to focus his mind on things other than his leukemia. And with this latest acquisition obsession, it is the getting, not the prospect of having, that is driving him forward. He’s always happiest when he’s figuring a way to get something. Cleaning out his room lately in preparation for a pending move, I had the reality of his acquisitive impulses hammered home by the outsized garbage bags overflowing with the products of his successful forays, which had to be carted out, first for a garage sale and later for the dump. He feverishly acquired according to whatever fad caught his fancy, so there are collections of Pokemon cards, Beanie Babies, Legos and much more.

My son is not unique in what might seem a quirk of human nature—that happiness is not to be found in the achievement or the acquisition, but in the yearning. Most people have an idea in their mind of some idyllic future place where they will finally find peaceful bliss. But the human psyche is not designed for peaceful bliss. It is designed to impel us forward, to keep us continually striving, no matter how much is achieved or acquired. If it needs to dangle the carrot of peaceful bliss in front of our noses to keep us striving, that’s exactly what it will do. If it needs to inculcate the idea of an eternal life of happiness in heaven as our reward for continually fighting a battle against entropy that we know we will ultimately lose here on earth, then it will do that, too. The human mind was created to keep us alive long enough that the genes we carry make it into the next generation. It will allow brief interludes of pleasant feelings we sometimes call happiness, as the reward for achievement of some goal or objective, but it has no interest in allowing any particular achievement to quell the ceaseless striving that it evolved as a forager to believe was necessary for survival.

There is no off switch to the yearning impulse, just a pause button. Apple Computer figured this out long ago, and cynically exploits it with each new product launch or upgrade. And to be fair to Apple, the whole of capitalist endeavors directed at the consuming public depend on this principle of continuous striving. In every consumer market, from car companies, to homebuilders, to that fancy new restaurant down the street that serves hardly identifiable yet lyrically-described food, capitalists exploit the yearning impulse.

What humans in fully-developed, wealthy societies yearn for the most these days is status (survival being already more or less secure). The yearning for status sells cars, boats, motorcycles, cell phones, kitchen appliances, paintings, drink (wine and beer and vodka and tequila and bourbon and scotch), televisions, vacations, jewelry, clothes (especially women’s clothes and shoes), the services of manicurists and tanning salons, etc., ad nauseam. Two-thirds of the economy is consumer spending. Probably two-thirds or more of consumer spending nowadays goes towards achieving or protecting status. Even when a consumer good is purchased for its utility, more, sometimes vastly more, is paid for the product if it has a status-enhancing quality (e.g., Apple’s cell phones versus Nokia’s). The only economic sectors not overly concerned with selling status are producers of commoditized goods and services that are not sold in end markets. Big agriculture, mining, transportation and delivery—these are not goods and services which can be directly employed to enhance a consumer’s status among other consumers. They therefore sport some of the lowest profit margins in the economy. A bushel of corn or a barrel of oil doesn’t fetch a premium for how its ornamental display around the neck or on the feet might enhance the wearer’s status. It’s just a bushel of corn or a barrel of oil.

There are really two (and perhaps more) sides to the question of whether it is possible to buy some happiness with the purchase of an Apple Watch. Buying the watch to satisfy the yearning impulse as I’ve been describing might buy a transitory period of contentedness that will fade something like gravity does, by the inverse square of the distance, the further away in time the purchase recedes. So from the perspective of the yearning impulse, the answer is “yes,” qualified by the notion that the happiness will be quite fleeting, after which another yearning will arise in roughly the same mental space to take the place the Apple Watch previously occupied. Don’t worry though, Apple is well aware of the inverse square rule of contentedness, and will time its improved Apple Watch for release at about the time the contentedness from buying the first one will have just fizzled out.

But is there a deeper, less transitory happiness that might be achieved through owning an Apple Watch? It seems unlikely. Several reviewers of the watch pointed out that it is intended to operate as something of an extension of the iPhone, so that the user finds liberation from their cell phone by strapping an Apple Watch around their wrist. This is a dubious claim. First, the watch has much too small a screen for anyone who isn’t an avian predator to be able to read and manipulate it on a routine basis. Second, the watch will likely do exactly the opposite of liberating its owners, tethering them ever more tightly to their electronic devices (surely Apple’s intended outcome), as the Apple Watch doesn’t do much except in tandem with the iPhone. Where before there was one electronic device, now there will be two.

And it is a readily observable truth of human nature that people who aren’t interested in changing don’t change. The Apple Watch won’t simplify the life of anyone who doesn’t want their life simplified; for people who are “addicted to their iPhone,” as one reviewer described himself while raving over the watch’s simplifying potential, the watch will most likely simply substitute one addiction for another.

In short, except for a very brief period just after purchase that quickly fades away, buying an Apple Watch won’t buy happiness.

Which raises the question: Is there any material thing whose purchase or acquisition can bring happiness? Economists know that people get less unhappy as their income increases past that required for subsistence, but only to a point (which was, a few years ago, deemed to be about $40,000 per year for a family of four). Achieve subsistence and just a bit more, and you will have eliminated as much unhappiness from your life as is possible. Eliminating unhappiness is requisite to enjoying happiness, and there is some happiness to be achieved just by eliminating things that make us unhappy, but again, the happiness thus achieved is transitory. More income (past subsistence and then some) can yield even a bit more happiness, but the law of diminishing marginal returns makes income-generated happiness like heroin. People who become addicted to ever-increasing income levels find that it takes bigger and bigger increases to get the same high.

Ignoring the status enhancement that comes with having the latest technologies, does owning any labor-saving or communications device hold any promise of happiness through increased efficiency, i.e., through less expenditure of effort to achieve the same ends? Remember my striving son? It wasn’t the guitar that he was really after; it was the striving to acquire the guitar that he really sought (even if he kidded himself otherwise—a necessary bit of self-deception if the objective is to be achieved). How much happier might he have been had his objective been harder to achieve?

And so it goes with so many of the labor- and time-saving devices that modern man considers necessities. The human body has been magnificently constructed to spend its days searching for food, even when it is not hungry, because hunger was never more than a few days away no matter how successful the latest hunt was. The yearning impulse never abates; it is only sometimes paused. But with his belly full and his mind barely taxed, modern man has little for which to yearn. Through ten thousand years of technological development, from irrigated agriculture in the Tigris and Euphrates River valleys to chatting with strangers across the globe in real time, life got easier and achieving happiness got harder. We spend our lives thinking we will find happiness in the satisfied glow of achievement or acquisition, but we have been tricked by our neural hardware. What really makes us happy is the striving.

At about $400 a pop for the cheapest Apple Watch, acquiring one won’t be so challenging to most people in the West as to provide much happiness in the striving. And as Apple well knows, the contented glow of ownership quickly fades.

So no, buying an Apple Watch won’t much make you happy. In fact, you weren’t made to achieve happiness. You were made to strive for happiness, because that kept you striving to survive and propagate. But once you have overcome every challenge, accomplished every goal, and achieved every objective that you thought stood between you and happiness (I’ve finally got an Apple Watch!), you still won’t have found, as the U2 song goes, “…what you’re looking for.” Because you had what you were looking for when you were looking for it. And what you were looking for was lost as soon as you thought you found it.

The Apple Watch: Reviewing the Reviews

No, I have not been provided an advance copy of the Apple Watch. I’m not a pseudo-publicist, i.e., a tech journalist, whom Apple graced with an advance copy. I regret to admit that I can’t transmit fresh, first-hand information about its delights to the huddled masses of Applytes (my word for Apple acolytes) who have been patiently waiting, waiting, waiting, since the death of their patron saint Steve Jobs, for Apple to develop a new product. The thin gruel of having to rely on iPhone updates and iPad tweaks has left them thirsting for anything new from Apple. They’re now to the point of needing a splashy new product launch in a new category in order that they might affirm their status as pioneers in the consumption of hip new consumer electronics. The Apple Watch, as utterly unnecessary and productivity-impeding as it surely will be (features, not bugs, in the minds of the Applytes), will doubtless slake their thirst, and quell the angst it generated, at least for a while.

Apple is a so-so technology company but a genius when it comes to understanding the wetware of human beings. Like a coy virgin teasing her bridegroom with glimpses of flesh before the wedding day, Apple, after the announcement heralding the watch’s arrival a few months ago, revealed only enough of the watch’s details to pique the public’s interest and keep it continually smoldering just on the edge of public consciousness. Now that the launch is only weeks away (April 24th, according to Apple), it has bestowed upon a select few technology journalists the great honor of an advance copy of the watch, in order that they might issue glowing reports to the wistful, longing public. Ostensibly, of course, their reports should be paragons of journalistic objectivity, because that’s how journalism works, no? But take a moment to consider who picks the journalists and how important to a technology journalist’s career it is that they be among those picked by Apple to review a new Apple product. Apple, after all, is the most valuable company, tech or otherwise, the world has ever seen, which says a lot about the frivolities of the age, but not so much about the historical importance of Apple. It’s hard to be historically important in a mainly frivolous and irrelevant age, except perhaps as a good example of the tenor of the times in question.

But given the journalistic circumstances, what are the odds there will be any bad reviews? It’s as if Apple hired its own official parade observers to ensure all parade goers are on board in admiring the naked emperor’s lavish clothing.

As a public service aimed at reading between the journalistic lines (writers sometimes have subtle means of revealing their true inclinations), I will review three reviews: those from the Wall Street Journal, The New York Times and Bloomberg, my three main sources of daily news. I figure these three are good, general-purpose news organizations that have reasonably competent tech writers. Let’s start with the Wall Street Journal.

The headline of the Wall Street Journal’s review, written by Joanna Stern, tells it all:

What the Apple Watch Does Best: Make You Look Good.

And then, to drive the point home, the subheading:

Apple isn’t just selling some wrist-worn computer, it’s selling good looks and coolness, too.

Well, what more do you need to know? Especially if good looks and coolness matter to you. Do you seriously expect me to believe that good looks and coolness don’t matter to you? Of course, though the headline implies the watch is good looking and cool, it doesn’t explicitly say so; it just says that’s what Apple is selling. Not to worry, Stern raves about how the artifact appears on her wrist, and the article is peppered throughout with pictures of the artifact on, presumably, her wrist.

Personally, I think the watch looks about as good as an early-days Casio, one of those boxy whiz-bang electronic gizmos that came out somewhere around the early eighties (if the mists of time haven’t too terribly occluded my memory). Casio watches were for geeky types who hung out at Radio Shack on the weekends. And a bit for greasy disco wannabes in Members Only jackets (remember those?). I haven’t a clue as to how Casio managed to appeal to both of those disparate groups. The Apple Watch is boxy and electronic-looking, nothing like a classy Rolex or even a Seiko or Timex (the ones with a traditional face). But Stern raves:

Like many Apple products of the past decade, the watch is a status symbol, a sign of wealth and taste. But unlike a MacBook or an iPhone, this Apple product works to help you look—and feel—good.

I sought a simpler experience, turning it into a stylish watch to keep me on schedule and a workout companion to keep me moving.

I know what you’re thinking: Can’t I just buy a $150 fitness tracker for that? Sure, but it might end up in a drawer. The Apple Watch succeeds where the fitness trackers have failed. Not only does it provide more accurate data and a platform with big promise, but it’s an accessory I love to wear all day long.

And then we find, buried in the adulations, a few drawbacks, even when using the watch as a very expensive fitness tracker:

Ideally, the watch would automatically kill off notifications during workouts so your arm doesn’t vibrate so much; in reality, you need to put it on Do Not Disturb mode, which requires too much futzing. Even getting to the Exercise app is a challenge, it being one of many tiny circular icons on the watch’s app screen that makes me wish my fingers were the size of toothpicks.

There are other frustrations: Why is there an Exercise app on the watch, but the data lives in the iPhone’s Activity app? Why must I click “save” to keep a record of a workout? And why can’t the watch’s battery make it past 10 p.m. on days that I exercise?

Fingers the size of toothpicks? Batteries that won’t make it past 10:00 pm? Too much futzing required to shut the damn thing off so that you can do a workout in peace?

My, my. The emperor’s fleshy thighs seem to be peeking through, and perhaps his pasty hindquarters, too. Incidentally, in case you weren’t aware, people have been doing workouts for centuries without the need for up-to-the-second feedback on the beats per minute of their heart or their respiratory rate or their caloric burn. If you think all that nonsense is required, then you are a) seriously narcissistic; b) obsessive-compulsive; c) both; or d) less interested in getting a workout than you are in flaunting a hip new electronic gadget. But then, in this age of selfies and social media, you are most likely all of the above.

By the end of Ms. Stern’s review, her true feelings, quite contrary to those which were headlined, had somehow managed to slip past the editor’s hatchet:

But the prompts to stand up every hour got downright annoying. I don’t stand enough, I know, but I don’t plan to change that in the middle of a meeting, or after I’ve burned 300 calories at SoulCycle. (I did leap out of my seat…when I found out how to turn the stupid prompts off.)

In the end, she advises that people shouldn’t buy the watch, because it’s not yet as good as it inevitably will be, a sentiment playing to the hearts of the Applytes, if in a backhanded way, since for them Apple represents the idea that human progress unfailingly marches ever upward and onward. She knows the Applytes are gonna buy the watch no matter what she advises, and is just hedging her bets with everyone else in case it turns out to be a marketplace dud.

Over on Bloomberg, Joshua Topolsky’s review starts out with a headline that’s about as ambiguous and confused as the Wall Street Journal’s was celebratory:

Apple Watch Review: You’ll Want One, But You Don’t Need One: The Company has succeeded in making the world’s best smartwatch.

This raises the question: If the Company has succeeded in making the world’s best smartwatch, then how could it be that there is anyone alive who doesn’t need one? Unless perhaps the headline is an underhanded compliment, something like pronouncing that GM had built the world’s best moonbeam-powered car when nobody really wants a moonbeam-powered car and wouldn’t know what to do with a moonbeam-powered car (especially during the day and when the moon is new) if they had one.

And it isn’t clear exactly what the headline means by “you’ll want one, but don’t need one”. Is an Apple Watch like a Krispy Kreme doughnut? Is it somehow bad for you, in a delicious sort of way?

Topolsky’s review was a give-and-take, replete with praise for Apple and the Apple Watch at one moment, while decorously pointing out the watch’s defects in the next. For example, he observes that the watch keeps impeccable time, as it is true to something called Coordinated Universal Time within 50 milliseconds, a feature which can be really cool (that notion again). If you put Mickey Mouse faces (a software option) on all the watches your group of friends own, together you can watch all those legs on Mickey synchronously tap away the seconds of your lives that are passing you by while you marvel at the novelty. But, Topolsky points out, if you don’t tap the screen to get the time, which itself can be aggravating in its uncertain effectiveness, the only way you can find out the time is by an exaggeratedly violent movement of the arm that is likely to scream out, even more rudely than is normally the case when checking the time in the presence of others, that you really wish you could be shut of whoever is in your presence. Thus the Apple Watch has the potential to allow someone to do more expressively, and using only its watch features, that for which smart phones have always been useful—shunning the people around you.

Topolsky buried his headline, but not very deeply, putting it in the very first paragraph:

I’m in a meeting with 14 people, in mid-sentence, when I feel a tap-tap-tap on my wrist. I stop talking, tilt my head, and whip my arm aggressively into view to see the source of the agitation. A second later, the small screen on my new Apple Watch beams to life with a very important message for me: Twitter has suggestions for people I should follow. A version of this happens dozens of times throughout the day—for messages, e-mails, activity achievements, tweets, and so much more. Wait a second. Isn’t the promise of the Apple Watch to help me stay in the moment, focused on the people around me and undisturbed by the mesmerizing void of my iPhone? So why do I suddenly feel so distracted?

The cell phone industry, and particularly Apple, as the biggest and baddest of all cell phone providers, abolished the utility of watch wearing, perhaps intentionally, perhaps incidentally, by making cell phones (all of which tell time) necessary accouterments, condensing the function of watches to jewelry that might confer status if made by the right company. For people who don’t care about the status-conferring potential of jewelry watches (like me) and who therefore abandoned watch wearing because of having to carry around a clunky clock all the time anyway (like me), Apple now must coax them into believing that wearing a watch AND carrying a cell phone is the way to be, because, as Topolsky observes, the watch isn’t really smart—it’s just quite intricately connected to the smart phone that is. That’s gonna be a hard sell. If I can’t wear a watch to replace the clunky, annoying cell phone in my pocket, then what’s the point? Oh, yeah, I forgot—it’s cool (Topolsky) and makes me look good (Stern).

Topolsky ends by heaping the device with praise, while explaining that he really doesn’t want one:

So Apple has succeeded in its first big task with its watch. It made something that lives up to the company’s reputation as an innovator and raised the bar for a whole new class of devices. Its second task—making me feel that I need this thing on my wrist every day—well, I’m not quite sure it’s there yet. It’s still another screen, another distraction, another way to disconnect, as much as it is the opposite. The Apple Watch is cool, it’s beautiful, it’s powerful, and it’s easy to use. But it’s not essential. Not yet.

The New York Times headline of its review by Farhad Manjoo was also something of a backhanded compliment:

Apple Watch Review: Bliss, but only after a steep learning curve.

Manjoo is far and away the most awestruck of the Apple Watch’s reviewers, but then there may be more than just a correlative relationship (i.e., there might be a causal connection) between his opinion on the watch’s potential and his self-admitted addiction to his iPhone. Anyone who could be described as “addicted” to an existing consumer electronics artifact is perhaps a poor choice for reviewing a new consumer electronics artifact, especially when the two artifacts are made by the same company and are intended to be used synchronously. To Manjoo, the watch is finally a seamless extension of his mind and body. It even opened his door, using a door key app, at his Starwood Hotels room. And it served as his boarding pass. And it bought him groceries (Stern used the watch to buy an iced latte, because, would anyone writing a review of an Apple Watch buy anything but an iced latte with it the first time, after, of course, their SoulCycle session?).

Manjoo may have loved the watch, and may have quickly become its dutiful slave, but judging from the comment boards accompanying the articles on both the Wall Street Journal and New York Times websites, not much of anyone else did. My favorite was from “Bob,” a New Jersey patent attorney:

I would like to see a review from someone who does not regularly get comments about his “addiction to [his] smartphone,” and whose wife isn’t pleasantly surprised that he “seem[s] to be getting lost in [his] phone less than in the past.” It is hardly surprising that a guy who can’t control his impulse to see every update is smitten by a device that allows him to deepen that dependency another level. But how about his wife (and kids, assuming he has them)? How about someone who does not see an addiction to a smartphone as something to make light of? We keep falling all over ourselves to find reasons to “love” these technologies, and seemingly never step back and give them an honest assessment. I mean seriously — opening a hotel door? Paying for things? Presenting a boarding pass? Since when did we need a solution for these things? Who ever had a problem with key cards, credit cards, or a paper boarding pass? I guess you can tell my personal bent on this subject, but let me dispel any notions that it is due to my age or background. I’m 34 years old. I’m an engineering graduate and a patent attorney. My day-to-day life is all about technology, and yet I still don’t understand why people are so tickled by it. Even more so, I don’t understand how people can call these things revolutionary. One of my colleagues has had an Android watch for a while now. You know what [he] does with it? He gets distracted during lunch while the rest of [us] talk face to face.

It is a rare event for me that someone else captures my sentiments exactly, but Bob about says it all. Indeed. When will we ever step back and give these devices an honest assessment? Have social media and smart phones done anything but complicate our lives? How is society so much better now that we can stay constantly connected to some ephemeral idea of it through a four by six, rectangular, half-inch-thick artifact? How will being able to stay connected to it through a small, square, thin piece of metal strapped to our wrist make things any better? Have we finally reached the endpoint of technologies that have the potential for improving the human condition?

Judging by the comments, it may well be that the almost instinctive belief in the value of technological advancement is finally fading. Technological innovation has ever and always provided only bare improvements, and only sometimes, to the human condition; it could be that the Apple Watch launch will reveal that a majority of people have become innovation skeptics, rather than believers. Or it may be that the Apple Watch is just a lousy, useless innovation, and people will see it as such. Or, it may be that all seven or so billion souls on the planet (minus one—me) will be sporting a new wristwatch by the end of the decade.

As you might have by now figured out, I have no desire to purchase, or even test, an Apple Watch. I only have an iPhone because I got my daughter’s hand-me-down, and I don’t use it to connect to the internet. I don’t do Facebook or any other social media. I’m not a Luddite; I just don’t see the benefit to everyone knowing every little thing I do, or to me knowing every little thing everyone else does, and so don’t care to be that confusedly, confoundedly and constantly connected to the world. Although it’s hard to tell, as I refuse to Tweet a question in 140 characters for an instantaneous response, or garner a thousand Facebook friends to gauge opinion, I’ve got the feeling that fewer and fewer people are enamored of social media, and that growing numbers (like me) downright loathe it, so that the market for a device worn on the wrist that practically demands continuous social media connection might be a tad weak. Time will tell, even if an Apple Watch can’t very usefully tell time.

Any thoughts you might have on the subject are welcome.

Executive Summary, March 29-April 4, 2015

It’s another holiday ritual week (doesn’t it seem like yesterday that we were in the midst of holiday jolly with Christmas and New Year’s?). It is Holy Week for Christians and Passover for the Jews. Sunday, the 5th, is Easter and Friday (today), the 3rd, is the first day of Passover (which lasts until the 11th). It also happens that the daughter’s high school had spring break this week. So this is a week abounding with religious and pagan ritual and mythologies.

According to the Christian catechism, last Sunday, Christ entered Jerusalem on the back of a donkey; the crowds lined the streets, waving palm fronds to show their love and appreciation for him. The Christians celebrate the day as Palm Sunday. As Christ was neither a Pharisee nor a Sadducee (the rabbis and keepers of the Jewish law) and taught an alternative to the legalism of the elders, this adulation was most upsetting to them. By Maundy Thursday (meaning the day of commandment, which refers to the command that Christ issued to the disciples at the Last Supper to love one another), Christ knew the jig was up. The Pharisees and Sadducees turned the people against him. According to the gospels, Christ repeatedly predicted he would die and then rise from the dead. The next day, Good Friday, it finally came to pass that he was crucified and died. On Sunday, he was resurrected from the dead.

The day wasn’t always known as “Good” Friday. Until the Middle Ages, it was known as Black Friday, because it was the day that Christ died, and Fridays were anyway considered particularly unlucky. And, of course, no one really knows what day of the week Christ was crucified, but for him to rise from the dead on the third day (Sunday), he had to die on Friday, if Friday is counted as one of the days. And the Christians desperately wanted Christ to rise on Sunday, instead of his resurrection happening on, for example, Saturday, which is the Jewish Sabbath. Christ was a Jew, but the early Christian church preferred to downplay that aspect of his heritage in order to broaden his appeal, sort of like Barack Obama is fully half-white, but only when it garners more votes to acknowledge as much.

Tonight is the first night of Passover for the Jews, when the first, and perhaps only (depending on which sect of Judaism) Passover Seder is held. The Seder, meaning literally “order or arrangement,” is a ritual retelling of the Exodus story that God commanded of the Hebrews after having forced the Pharaoh to release them from captivity. The name Passover refers to an episode reeking of blood, when the Lord passed over Hebrew households if they had the blood of a yearling lamb smeared over the sides and top of their doorframe, on his way to killing the first-born child and animal of every Egyptian household. We would today consider this genocide.

Passover lasts until April 11th this year. The first two days—from sundown today until sundown two days hence, are, for observant Jews, full-fledged non-working days. So the timing of the holiday this year is fortuitous, as it starts on a weekend, and one in which not much work happens anyway. For seven days, Jews are to eat only unleavened bread with their meals, thus Passover can also be called, as it is in the Bible, The Feast of Unleavened Bread.

The first day of Passover doesn’t always coincide with Good Friday, and the period of celebration doesn’t always (at least partially) coincide with the Christian Holy Week culminating in Easter Sunday. It just happens that it does this year. But it is not happenstance that the two Judeo-Christian holidays take place at roughly the same time of year. They come at it from different calendars, but for the Christians anyway, arrive at about the same place purposely. The Christians wanted to distinguish their theology from the Jews (and Pagans), but make it enough of the same that it would be attractive for individuals of either group to join as followers and believers. The same could be said of Christmas, at least with regard to the Pagan holidays around the winter solstice.

Spring break, that most Pagan of education-calendar rituals, coincides, like Easter and Passover, with the beginnings of spring and the rebirth of the world under a warming sun. It’s fitting that all the holidays overlap this year. They all are derived from the same place in the temperate-climate mammalian heart that rejoices at the warmth and reawakening that comes with the lengthening days and more direct sunlight of spring. But on the education calendar spring break originally had as its justification the need for children to help out on the farm to get the fields ready for planting. Very obviously, that is a quaint bygone. In these easy, post-modern times, its celebration now recalls the Roman Bacchic festival of boisterous and riotous revelry and drunkenness. Towns along Northwest Florida’s Emerald Coast have come to loathe as much as love the Bacchanalian spring break rituals. They sell their souls for the money, and the raucousness, that spring breakers bring.

In economic news, the big number out today—the March payroll report—was a disappointment. From the Bureau of Labor Statistics:

Nonfarm payroll employment increased by 126,000 in March, and the unemployment rate was unchanged at 5.5 percent. Employment continued to trend up in professional and business services, health care, and retail trade. Job losses continued in mining.

Incorporating the revisions for January and February, which reduced nonfarm employment by 69,000, monthly job gains have averaged 197,000 over the past 3 months. In the 12 months prior to March, employment growth averaged 269,000 per month.

This continues a downward trend that began after the blowout numbers of November of last year, when payrolls grew at a monthly pace of over 400,000.

Here’s what I said about the developing trend in February:

The 257,000 number [in January] is a decline from November and December’s numbers, which were 423,000 and 329,000, respectively, after revisions.   Though it would never be spun in such a manner on the long-only news outlets (Bloomberg, The Wall Street Journal, i.e., basically any mainstream business media), the numbers point to a declining level of gains. From November’s 423,000 to December’s 329,000 is a drop of 94,000, or about a 22% reduction in monthly gains. From December’s 329,000 to January’s 257,000 is a drop of 72,000, also a 22% reduction in additional jobs.   If the trend of dropping about 22% each month continues, the gains to employment in February will barely tip 200,000, which is a perfectly meaningless observation to make, except that it does a fine job of helping flip the idea that January’s numbers were great on its ear. Employment gains will turn negative by the end of the year if current trends continue.

In February the trend reversed (temporarily?), and payroll gains increased from January’s 257,000 to 295,000. With the numbers declining to only 126,000 last month, the downward trend has returned and accelerated, dropping 57% in a month. From the details of the report, most of the declining growth can be attributed to the mining industry (oil and gas extraction and services pertaining thereto), which lost jobs again this month. Most other industries saw declining growth, but none actually lost jobs.
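For anyone who wants to check the month-over-month arithmetic, here is a minimal sketch (Python is my choice; the post itself contains no code) that uses only the revised payroll-gain figures quoted above and in the February excerpt:

# Month-over-month change in nonfarm payroll gains, in thousands of jobs,
# using the revised figures quoted in this post. Illustrative arithmetic only.
gains = {
    "Nov 2014": 423,
    "Dec 2014": 329,
    "Jan 2015": 257,
    "Feb 2015": 295,
    "Mar 2015": 126,
}

months = list(gains)
for prev, curr in zip(months, months[1:]):
    change = gains[curr] - gains[prev]
    pct = 100 * change / gains[prev]
    print(f"{prev} -> {curr}: {change:+d}k ({pct:+.0f}%)")

# Prints:
# Nov 2014 -> Dec 2014: -94k (-22%)
# Dec 2014 -> Jan 2015: -72k (-22%)
# Jan 2015 -> Feb 2015: +38k (+15%)
# Feb 2015 -> Mar 2015: -169k (-57%)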

What does all this mean? It could be that the cyclical stage of this expansion peaked around the end of 2014. Quite a few other economic metrics are coming in weak, including last month’s automobile industry sales volumes (a big economic bellwether), which had been on a tear. It could be that this is just a hiccup. Or, it could be the start of something more ominous. No one would have predicted in 2007 that a few defaults in subprime residential real estate mortgages would kick off an economic and financial conflagration for the ages. Time will tell.

The only other newsworthy event of this week is the putative nuclear arms deal the US has struck with Iran. The US is gravely concerned that Iran might get the bomb because Iran would surely commence to dropping bombs on US cities just for the hell of it if they were so endowed. It’s not really clear whether Iran wants the bomb, or just wants to negotiate concessions for agreeing not to build one. Iran is rapidly expanding, with the US’s help, its imperial reach in the Middle East. It can now add Iraq and Syria to areas under its hegemonic purview, and the US is helping it defeat ISIS in the Levant, which will expand its influence even further. In Yemen, it has so rattled Saudi Arabia by its support of the Houthi rebels who recently overthrew the Yemeni government that the Saudis commenced bombing Yemen, but with plain old conventional bombs.

The first thing to know about the Iran nuclear deal is that it is tentative. It is to be drafted in writing and signed by June 30th, which is an eternity of time when it comes to international relations. Under the agreement, Iran is to give up two-thirds of the enriched uranium it now has, convert a heavy water reactor so that it is incapable of making plutonium, and submit to inspections of some sort for fifteen years. It would take an expert in nuclear technology who was also expert in international relations to know whether the agreement will actually work to limit Iran’s nuclear ambitions. In return for agreeing to the deal, the US and other world powers agreed to lift economic sanctions, allowing Iran to again sell oil directly on the world markets, which ought to help those US employment numbers even more. What’s not to love about more oil for an already glutted market?

Nobody likes this deal, except President Obama, who championed its negotiation. He apparently sees it as a potential legacy maker for him. But a nuclear-armed Iran is really not any more dangerous to American interests than is a non-nuclear Iran. And, ironically, with US help, plain old conventional-bomb Iran has expanded its sphere of influence dramatically since Obama came to office. It may be up to Hillary to beat back the Persian hordes.

The party that most loathes this deal is, surprise, Benjamin Netanyahu, the Prime Minister of Israel. But the US does not exist to do Israel’s defensive bidding. Israel is not the 51st US state. Israel is an independent sovereign that is ever and always out for what every other independent sovereign is out for—its own interests. An Iran that retains enough bomb-making material to cobble one together in a year, as this deal provides, is no nuclear threat to the US at all. And not legitimately much of one to Israel, either, but the Israelis have grown to believe that they among all the peoples and nations of the world deserve to be allowed to proactively eliminate any and every possible threat they face. Israel has the bomb, but no one hears Lebanon caterwauling about its compromised security as a result.

Regardless of what else happens between the US and Iran and the whole Middle East situation, I sense that the US is heading to war, ultimately with Iran. I hear it from the right and the left, both of which worship heroes that we created in Iraq for no other real purpose than that we needed heroes to validate our collective spirit. The US really gets confused about its reason for being unless it is embroiled in some conflict abroad, whereby it is inevitably “defending freedom”, or is engaged in some rights-expanding exercise at home (gay marriage, e.g.). The US depends on some notion of progress continually progressing, else the whole thing collapses in on itself. That’s what really makes it “exceptional”, as the neocons like to say. Most other nations are content to just survive and thrive. Not so, the US. It must survive, thrive and conquer. So it relentlessly searches for things to conquer, as surviving and thriving have proved fairly easy.

In the meantime, Happy Easter, Passover, Feast of Unleavened Bread or Spring Break, depending on your religious and secular affiliations!

Book Review: Philosophy in the Flesh—The Embodied Mind and its Challenge to Western Thought, by George Lakoff and Mark Johnson (1999)

Being over fifteen years old, and intricately involved with explaining the philosophical impacts of discoveries about the mind in the field of cognitive science, this book may seem a bit dated. But not really. Its fundamental premise, that there is no mind or soul or spirit or consciousness without a body, and that the body determines the mind, not the other way ‘round, is hardly original to 20th century psycholinguists and cognitive scientists.

Lakoff and Johnson never once mention him in elucidating the embodied mind and its challenge to Western thought, but Baruch de Spinoza (incidentally, a contemporary of Descartes, he of the disembodied mind and Cartesian duality, which Lakoff and Johnson spend hundreds of pages and gallons of ink debunking) came up with this idea of an embodied mind roughly 350 years ago. In the Scholium to Proposition Thirteen of Part II of Ethics, his magnum opus completed in 1675, Spinoza very clearly explains that there is no mind independent of the body:

From the above [i.e., Proposition Thirteen, which states, ‘The object of the idea constituting the human mind is the body—i.e., a definite mode of extension actually existing and nothing else.’] we understand not only that the human Mind is united to the Body but also what is to be understood by the union of Mind and Body. But nobody can understand this union adequately or distinctly until he first gains adequate knowledge of the nature of our body.

Without making this a treatise on Spinoza, suffice to say that Lakoff and Johnson’s radical “challenge” to Western thought is about 350 years too late. And what’s more—it’s not as good or powerful as Spinoza’s, because Spinoza had to overcome beliefs demanded by the culture of his time, when refusing to believe in the duality could cost a heretic as painful a death as the Inquisitors could muster.

Lakoff and Johnson set up a straw man with the Cartesian idea of duality and then pretended as if they were the first to ever knock it down. They weren’t. There is nothing at all radical about the idea that the mind is part of the body. It is instead a radical notion to believe that it isn’t (hence the necessity for official sanction and public persecution and punishment for those heretics who refused to believe the two are separate). Just as it is a radical notion to believe that one certain living human was God in the flesh who died a human death but was three days later revived in the flesh. The reward for believing these radical notions, according to the Christian catechism, is that one’s disembodied spirit will live in heaven with God until it is reunited with a reconstituted body after the end days, after which it will enjoy eternal life in heavenly bliss with God.

The Christian necessity for a disembodied spirit or soul to depart for heaven to await the end days with God is what drove the silliness of the Cartesians, who even went as far as to identify where the soul or spirit resided in the body in life—the pineal gland, which is a bit ironic in that we today know the pineal gland to be intricately involved with reproduction, which is the actual means to whatever trifling of eternity, or continuation in time and space, is possible for human beings and all other creatures, rather than some disembodied soul that floats away to heaven at death. So, Descartes was at least in the ballpark, if unwittingly perhaps, when he identified our potential for eternal life as residing in the pineal gland.

Lakoff is known for believing that the mind thinks in metaphors, attributing the use of different metaphors to, for example, different political beliefs. He explains that liberals and conservatives are distinguished by the metaphors they use to describe life in political society. Both analogize political governance to metaphors for the family. For conservatives, theirs is a hierarchical Father Knows Best metaphor (Lakoff and Johnson capitalize every positively identified metaphor, as if some great principle were involved in explicating them) in which it is the duty of a strict father (the government) to raise the children (the people) to be responsible adults who should need little supervision, care or help once having proved their responsibility and maturity. For liberals, governance fits the Nurturant Parent metaphor of the family, where the mother and father work to keep the essentially good children free from harm, protected from the potentially corrupting and harmful influences of pollution, poverty, injustice, etc.

There is precious little doubt that humans think in metaphor. Metaphor and analogy are the means by which the mind categorizes the world in order to make readily accessible and intelligible sense of it. Language is, as Guy Deutscher points out in The Unfolding of Language, built on a reef of dead metaphors. Words start out describing very physical, fully-embodied concepts (to “go” to the barn, an example of the original meaning of “go”, literally means to move one’s physical presence from its present location to the barn). They slowly become metaphorized (my word) to describe all sorts of abstract things (e.g., “I am going to think of an idea for his birthday party”, using a variant of go which doesn’t describe movement of any physical object, but is movement of the will through time). So, Lakoff is undeniably correct that humans think in metaphors, at least so far as their thinking is done through language. There is a vast amount of thinking, however, that takes place outside of conscious, linguistic purview (“outside” in this sentence is an example of a word being used as a metaphor derived from its original meaning of physically being located somewhere else than inside some sort of container, i.e., “out” of the “side”). It is not clear that metaphor animates subconscious thinking, especially since the subconscious thinking that dominates our mind is thinking done closer to the non-metaphorical stuff that matters to our continued survival. For our subconscious thinking, it’s hard to imagine that the mid-morning hunger I’m feeling right now is anything more or less than just that. No metaphors are involved or necessary to categorize the way I feel. But in writing this essay, I am hungry to get at the truths, if any, that are to be found in Lakoff’s opinions. It’s not the same hunger as I feel for lunch, but it’s similar, and that’s the point. Without physically embodied concepts, it would be very difficult to describe our abstract thinking to others. We would still think abstractly, but communicating those thoughts would be much harder, if not impossible.

The problem with Lakoff’s argument that we think in physically-derived metaphor (actually, Lakoff didn’t go so far as to point out that the base activity for all metaphorical activity is physical embodiment as I am here) is that it explains nothing to say that liberals and conservatives use different metaphors to describe the process of governance. Of course they use different metaphors. People think using metaphor. If they used the same metaphor for views of governance that exist at either end of the political spectrum, their ability to understand and distinguish their views would be greatly diminished. But as Steven Pinker noted in The Language Instinct, liberalism and conservatism are heritable traits—the metaphors with which people are thinking about politics are selected more or less by the genes, not by political party affiliation. Yet even knowing that doesn’t explain much. What caused nature to select for genes that in some people yielded a nurturing, trusting and egalitarian perspective of what governance is about, but in others yielded a hierarchical and disciplinarian perspective? Nobody really knows, but that’s the only question worth asking. To answer it by saying some people use this metaphor or another when thinking through their political views is to end up where the inquiry was begun, without actually answering anything. (Incidentally, my guess is that the genetic differences arose because some people lived in environments that favored socially collective and egalitarian cultures, and some people lived in environments that favored more individualistic and hierarchical cultures, which then begs the question—which environments?)

In the end, Philosophy in the Flesh comes across as an intricately (practically Rube Goldberg-esque, which is to say, poorly) argued rationalization for Lakoff and Johnson’s political impulses, which, to further detract from its value, is quite poorly written. Lakoff is something of the psycholinguist version of politico-economist Paul Krugman, who uses economics as a rationale for his left-wing political impulses when he uses economics at all, rather than as a means of discovering truths about the material world. Except that Krugman writes well and clearly and Lakoff doesn’t. Lakoff’s psycholinguist contemporary, Steven Pinker, has reached very similar conclusions about the nature of the mind and body and therefore at least about the epistemological aspects of philosophy in light of today’s cognitive science, but from a decidedly less political viewpoint (though it is imagined that most of Pinker’s theories find their greatest acceptance among those with right-wing proclivities). It seems Lakoff is to Steven Pinker what Paul Krugman is to Milton Friedman.

To conclude by returning to the first prominent philosopher of the modern (post-Renaissance) era who espoused the view that mind and body are inseparable: Baruch Spinoza also had this to say regarding the unity of mind and body, from the Scholium of Part III, Proposition Three, of Ethics:

Now surely all these considerations go to show clearly that mental decision on the one hand, and the appetite and physical state of the body on the other hand, are simultaneous in nature; or rather, they are one and the same things which, when considered under the attribute of Thought [mind] and explicated through Thought, we call decision, and when considered under the attribute of Extension [body] and deduced from the laws of motion and rest, we call a physical state.

There really is not much in the way of new ideas in this age about the nature of human beings that wasn’t already deduced by philosophers long ago. Through cognitive science and modern medicine we have learnt a great deal about the intricacies of the human body and the mind inhabiting it, but the big picture remains by and large the same—body and mind exist as a unitary entity devoted to getting the genes for which they are the vessel into the next generation. There is no understanding of mankind except that he is understood as an animal that owes his existence, like all others, to evolution by natural selection (a theory which Spinoza anticipated, a couple hundred years before Darwin, in setting out his argument for the nature of God in a beautiful essay comprising the Appendix to Part I of Ethics). Every bit of man, including his magnificently developed brain, has to be evaluated and examined with the clarity of evolutionary impulses in mind, or only confusion can arise.

So the central premise of Philosophy in the Flesh, the notion that mind and body are inextricably connected, is hardly revolutionary, except perhaps to a political shill trying to garner attention for the freshness of his old ideas. Don’t read Lakoff, if you want to better understand the embodied mind and what it means for philosophy. Read Spinoza. Or, Pinker. And read Spinoza anyway. Even accounting for the empirical discoveries of modern cognitive science, he’s still the Michael Jordan of philosophers. Lakoff isn’t. Pinker maybe is the Scottie Pippen.

Goodbye, Homewood

It’s been seventeen long years now, on this corner of Roxbury and Huntington, a place that started as a practical compromise between a couple of places we had to be for work, that almost became a place where the heart could live, until the sentiment died aborning when the first kid turned seven and got sick for the first time. A couple of years of illness that felt like decades later, and it seemed more like a prison, a repository for bad memories, a place where the struggle to survive might well fail. Not a happy place. Definitely not a place to thrive, except in spite of everything. And what sort of thriving is that?

I worked hard after that. So hard. Every day to make money and every night and weekend to make the place a home, if not for me, then for everyone else who lived there, and for maybe some other someone one day who wouldn’t think the whole pitiable existence an albatross, a sign of good fortune that had somehow been destroyed. Then it came back. All that work at making it work, at pleasing the mad gods controlling our fates, proved for naught. The kid got sick again. What had I done that these people around me should deserve this fate, this suffering? The second leukemia was magnitudes worse than the first. I quit hoping. Quit caring. I blamed myself, but I couldn’t help what I’d done. Because I didn’t even know what it was.

And so now, it’s time. The ship has finally reached shore. The last kid is almost out of school. The “good” (code in Alabama for “majority white”) school system’s boot heel is slipping from my neck. The first kid, the leukemic kid, survived a second time, but barely, and left this place to thrive. And now I can leave this place and all those memories bouncing around these old plaster walls. There ain’t many good ones. Leaving won’t lose them, but at least a new place won’t be like a minefield, hair-triggered to set off memory bombs with every step.

Leaving means more than just leaving behind the sadness of an unfortunate life poorly lived. It means getting rid of the hassle. The place was old when it was new. Nothing worked right. The basement flooded every time it rained. The windows were all painted shut. They needed repainting every couple of years just to stay that way, a Panglossian circumstance, the best of all possible worlds. Water dripped from every faucet and drain. Sixty-year-old leaded paint chipped and peeled from every wall and window. The roof leaked. Water pooled in the yard. And it was all mine to maintain and repair. The house was an aging parent to care for, before Social Security and Medicare made the burden bearable. Things constantly broke, and then broke again soon after being repaired. Taking care of the dump on the corner of Roxbury and Huntington was nothing to taking care of a double transplant patient, but still, it weren’t fun. Not a bit.

The neighborhood wasn’t so much a neighborhood as a real estate investment club. People came and went in a blur; it seemed every last person in the greater Birmingham metropolitan area decided that living here would be the perfect thing, for a few years. They all rode the real estate tram to the top of the real estate investment hill until the flimsy infrastructure inevitably failed, leaving a whole carload of passengers stranded. For a few years after the crash, there was stability in the burg. Nobody could leave because nobody had the money to buy their way out. A few years later, it all roared back to life—the banal search for yield. Everyone became again a speculator, day-trading real estate lots. I quit bothering even trying anything more than passing cordiality with the ever-spinning roulette wheel of “neighbors”, more aptly described as bare-fanged venture capitalist wannabes, using other people’s money to fund their real estate fantasies, just like the real ones in Silicon Valley.

The house on the corner is a fishbowl. The three-sided visibility made our business everyone’s business. All the trials and tribulations of the seventeen years were lived out like reality theater for a bunch of people with their faces pressed hard against the glass, peering in, dispensing judgments in murmurs too faint to be clearly heard but too loud to completely ignore. Nothing was off conversational limits. From the new awning on the front porch to the reason our kid had cancer twice, there was nothing but cruel, condescending judgments whispered in the leafy subdivision about the house on the corner of Roxbury and Huntington and the people who lived there (the cancer was our fault, by the way, because it’s always the parent’s fault, because that’s just how the puny human mind works—it has to attribute nefarious effects to human agency in the absence of any real evidence, of which, for cancer, there is practically none).

The corner lot where the fishbowl sits is the neighborhood doggie poop park. The biggest, flattest, best yard in the neighborhood that I bought for my kids to play in hardly ever saw a child frolic or run or kick or swing a bat; it mainly just saw dogs crouching to shit. While the kids played video games or watched the Disney Channel or had bone marrow transplants, the neighborhood dogs had a field day. They were walked to the doggie poop park surrounding the fishbowl, or let free to run, so they could poop in a place their owners didn’t have to clean. The dog neighbors weren’t even worthy of the appellation “frenemies”. There weren’t anything friendly about them. The guy up the street with a Ron Paul sign in his front yard was the worst. He didn’t quite get that the libertarian freedom Paul espoused critically depended on strictly enforced property rights. Libertarians don’t let libertarians let their dogs shit in their yards, so I shouted him off the property, twice, and quite threateningly so the second time. He really has no idea how close he came that second time to grave bodily injury (I had a pistol stuffed in my shorts, just in case), but I didn’t grow up in a place where faux comity was used as a pretense to shit on someone else’s shoes, literally or figuratively.

A bigger fool agreed to buy my little slice of hell on the corner of Roxbury and Huntington, and for almost three times what I paid for it. So there may be some consolation to seventeen years living in a fishbowl on a corner lot used as a doggie poop park. The arc of the moral universe is long but it sometimes bends towards restitution, which isn’t quite justice, but is better than nothing.

And now, I’m moving to the country, to twenty-five acres on the top of Lookout Mountain that I bought just before the financial system collapsed the first time. I’m figuring on building a house right in the middle of the woods overlooking the pasture. It may well be that painted-shut windows are the best of all worlds in the world that I’m leaving. But I am going to a place where I must cultivate my garden.

After seventeen years indentured to a crappy school system, trapped in a fishbowl of a dump on a doggie-poop-park lot at the corner of Roxbury and Huntington, it’s finally time to go. Goodbye, Homewood.

And good riddance.

What post 9-11 cockpit doors and the Patriot Act have in common

If reports on the cause of the crash of the Germanwings flight in the French Alps are to be believed, the co-pilot of the flight, Andreas Lubitz, intentionally crashed the plane. Authorities claim that Lubitz locked the pilot out of the cockpit when he left, and refused his reentry, while also initiating a descent through the pushing of the “descend” button. (Which sounds curious to me, as there was no “descend” button in the helicopters I once flew for the US Army, but admittedly, helicopters are quite different from airliners in myriad ways.)

I’m skeptical that such a radical explanation for what happened could be so quickly and conclusively drawn. I will await the fullness of time before I cast my lot with what appears will be the official narrative, not least to calibrate for the human mind’s instinctive tendency to assign human malevolence as the causative agent for whatever mysterious effects arise. If the earth is getting warmer, then it must be the fault of some latent human evil. If a plane mysteriously crashes into a mountainside or ocean, the pilot must have been suicidal or terroristic or beset by some other character flaw. On the thinnest of evidentiary reeds, speculation always runs to ill intent on the part of human actors. But the plausibility of the narrative in this instance can’t be denied. The cockpit doors of airliners, which have been redesigned in the wake of the 9-11 hijackings to secure the good guys and keep out bad guys who might do an airplane full of passengers harm, can readily be used for just the opposite: to keep in the bad guys and keep out the good guys. And whatever is possible eventually comes to pass.

Assume for a moment that the co-pilot did indeed do as is claimed, and intentionally flew the airplane into the ground in order to kill himself and all the passengers and destroy the airplane. The opportunity to do so arose directly out of efforts to prevent that very thing from happening. It is much like the way the Patriot Act and the companion Authorization for Use of Military Force, both passed in the wake of 9-11 to prevent another orgy of death and destruction (the latter by providing blanket authorization for military operations against al Qaeda and its affiliates), ended up making death and destruction more likely. The wars and military occupations embarked upon by the United States, which successive administrations have argued are authorized by that resolution, have directly led to the rise of ISIS in the Levant and to civil war in Yemen and to the Taliban resurgence in Afghanistan. The cost in lives and treasure of these wars long ago far exceeded the total cost of all the terrorist attacks on US soil. And now it appears the fortifying of the cockpit doors in airliners to prevent the takeover of a plane like what happened on 9-11 has caused the very same thing it was designed to prevent.

The US has the reverse Midas touch when it comes to post-Cold War international relations. Everything it does turns to shit. Most every effect expressly intended by its efforts has yielded exactly the opposite result. Only the initial war with Iraq, in which the objective was limited to expelling Iraq from Kuwait, succeeded in achieving its goals. But that war was tantamount to shaking the Middle Eastern tar baby with both hands. The more the US subsequently struggled to get free, the more it got stuck in the baby’s embrace.

Just today, it was reported in the New York Times that three Iraqi militias had decided on their own initiative to pull out of the battle for Tikrit against ISIS because they don’t believe that American help is necessary or desired. What sort of state is Iraq, when its military units can decide on their own initiative whether or not they will fight? Saddam Hussein, a native of Tikrit, might have ruled with an iron fist, but he wouldn’t have allowed his military to dictate their own rules of engagement. Through American military involvement in the Levant over the last quarter century, the Iraqi state has been reduced to nothing more than a loose coalition of warlords generally allied with Iran whose main difference with ISIS is that it enjoys official US sanction. Iran must be incredulous, watching in amazement as the US expands and protects the frontiers of Iran’s empire.

But it’s all good. The US gets to create a new crop of heroes for the adoration of its cravenly dull population, while at the same time generating the next Middle Eastern conflict in which to become embroiled. Once ISIS is eliminated as a threat to Iranian hegemony, the US will be forced to undo what it did in the Levant through its support of Iran’s allies. It will execute an “about face” and turn its guns eastward, towards Persia. Thus will the US’s existential angst be salved. The country can’t be happy unless it has an enemy to fight. Otherwise, where would Dancing With the Stars get its supply of deformed and disabled humans tugging at the heartstrings by somehow managing to overcome their disabilities and deformities so that they can dance?

Yemen is another American disaster, though a bit more opaquely so than Iraq. The US has been waging a drone war in Yemen ever since it developed the capacity for drone warfare. The existing Yemeni government, the recently toppled one, supported the US effort to weed out its version of al Qaeda. The rest of Yemen was not all that happy with its government’s alliance with the US, a reality that completely eluded the braintrust of the US intelligence community, and its chief intelligencer, President Obama, who as recently as last summer touted Yemen’s drone war as an American interventionist success story, something about as likely to exist as a flying unicorn. Now Saudi Arabia and Egypt are allied to fight against the new Yemeni government, which incidentally is backed by Iran. While the US was busy fighting to expand Iran’s imperial reach in the Levant, Iran slipped through the Arabian desert to expand its hegemony a bit more, threatening to encircle Saudi Arabia with its influence by taking over the tip of the southwestern Arabian peninsula. The US, of course, is backing Saudi Arabia and Egypt in the Yemeni operations, so it may well come to pass that the US and Iran are allied on one field of battle (the Levant) and enemy combatants on another field of battle (Yemen), which would have to constitute a flying unicorn of international diplomatic history. Surely nothing as foolish has ever before happened.

But wait, it gets better. Iran is trying to develop a nuclear weapon. The US desperately wishes they wouldn’t, for reasons that are not readily obvious, except perhaps that Israel (who already has nuclear weapons) doesn’t want a nuclear Iran to upset a favorable power balance in the Middle East. There have been relentless negotiations that all add up to nothing. Iran is officially under some sort of weak-kneed economic sanctions which it naturally wants lifted, but the US wants some assurance that Iran will abandon its nuclear fuel processing development (building a bomb is relatively simple; enriching weapons-grade uranium is quite difficult). John Bolton, former US Ambassador to the UN, advises that to prevent Iran getting the bomb, Iran must be bombed, preferring that Israel do the job, but willing to allow the US to do it if Israel is unwilling. So the scenario could very well play out like this—while the US continues to fight to protect Iranian hegemony in the Levant, it commences battle in Yemen to expel an Iranian ally that had taken power in a coup even as it also conducts deep bombing raids in Iran to destroy its uranium processing capability. It would be impossible to make this up. No spy novelist could get a plot like this past his publisher. It’s just too ridiculous to be believed.

If the crash investigators prove correct, fortified cockpit doors didn’t prevent the takeover of the Germanwings flight. They in fact were critical to the plane’s takeover and intentional crash. Likewise, the Patriot Act hasn’t eliminated terrorism, but the exercise of its extra-constitutional powers has in fact magnified the threat and cost many times more in lives and treasure than terrorists could have dreamed to cause. Something tells me that the battle with ISIS in the Levant and the support of Saudi Arabia in the Yemen civil war and the negotiations with Iran over its nuclear capabilities won’t end well for the US either.

Unless the purpose for US actions in the Middle East has been to foment unrest and opposition such that the US might be indefinitely engaged militarily in the area (not an implausible scenario), nothing it has done has turned out well. Is there any reason to think that its actions battling ISIS or supporting the Saudis in Yemen or deterring Iran’s nuclear ambitions will turn out any better?

Angelina Jolie is at it again

For some reason, the ritual of having cut myself and feeling the pain, maybe feeling alive, feeling some kind of release, it was somehow therapeutic to me.

Angelina Jolie, in an interview with Paula Zahn on June 9th, 2005 on CNN.

She’s at it again. After experiencing the therapeutic release that came with the cutting of her breasts (clinically, bilateral prophylactic mastectomy) for the fear that her breasts might turn cancerous, she’s cutting herself again, this time to remove her ovaries and fallopian tubes (clinically, bilateral prophylactic salpingo-oophorectomy).

Jolie has the “breast cancer gene”. Except there is no such thing. A pair of genes, dubbed BRCA1 and BRCA2, have been identified which, when defective, i.e., deleteriously mutated, fail to do their assigned task of DNA repair, increasing the likelihood that a strand of DNA is damaged in a way that leaves the cell carrying it immortal, i.e., cancerous. Specific inherited mutations of the BRCA1 and BRCA2 genes have been shown to be associated with higher rates of breast and ovarian cancer. For all the hoopla about the magic of genes and how we will one day treat disease by rewriting DNA code, very few gene-to-cancer correlations have been identified. The BRCA1 and BRCA2 correlations to breast and ovarian cancer are two of the few.

Even so, having the mutation does not guarantee that cancers of the female reproductive organs will arise (though men who have a BRCA1 or BRCA2 mutation get breast cancer at a higher rate than men without one, the rate is still so low as to be negligible). Having the mutation simply means that the probability of being afflicted with those sorts of cancers is significantly increased. The mutant genes therefore do not “cause” breast and ovarian cancers, because if they did, anyone who had them would get the cancers. These aren’t cystic fibrosis or Huntington’s chorea type genetic malfunctions.
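To make the deterministic-versus-probabilistic distinction concrete, here is a minimal sketch (again in Python, and again illustrative only). The baseline risk and the multiplier below are hypothetical placeholder numbers supplied for the sketch, not clinical figures from this essay or any cited source:

# Deterministic vs. probabilistic genetic conditions, per the distinction above.
# All numbers are hypothetical, illustrative inputs, not clinical figures.

def lifetime_risk(baseline_risk: float, relative_risk: float) -> float:
    """Approximate lifetime risk given a baseline probability and a
    multiplier associated with carrying a harmful mutation."""
    return min(1.0, baseline_risk * relative_risk)

# A Huntington's-type malfunction is effectively deterministic: carriers who
# live long enough develop the disease.
huntingtons_carrier_risk = 1.0

# A BRCA-type mutation raises the odds without guaranteeing the outcome.
baseline = 0.12    # hypothetical baseline lifetime risk
multiplier = 5.0   # hypothetical relative risk for a carrier

print(f"Huntington's-type carrier risk: {huntingtons_carrier_risk:.0%}")
print(f"BRCA-type carrier risk: {lifetime_risk(baseline, multiplier):.0%}")
# Even at an elevated 60%, many carriers never develop the cancer, which is
# the sense in which the mutation raises the probability but does not "cause"
# the disease.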

Angelina Jolie’s mom died of ovarian cancer, a tragedy of epic proportions in an age when Google executives and Israeli historians are making claims that eternal life, or at least really long life, is an imminent possibility. Jolie’s female self-emasculation is proof positive that Google executives and Israeli historians are premature in their prognostications. Jolie cut herself out of the fear she might die of a cancer she knows she has a predisposition to. If eternal, or even very long life, were reliably around the corner, there would be no worry over getting cancer. Whatever therapy had been devised that prevented the body growing old and dying would also necessarily prevent the body suffering from latent diseases like cancer, else there wouldn’t be much point in the expectation of immortality.

Jolie hopes to eliminate the risk of death by female reproductive organ cancer. Even that’s not possible. She could still get cancer in the parts of her reproductive system that must be left behind. And there’s no guarantee that she won’t get cancer of another type. Or get run over by a bus. Death is inevitable; only its timing is uncertain. Besides all that, she is cutting herself this time for the possibility that an uptick in a metric used to monitor patients susceptible to ovarian cancer indicates she might have the disease. But the National Cancer Institute specifically states on its website that there is no reliable test to determine that metastatic ovarian cancer has developed, excepting actual biopsy of an ovarian tumor.

But once the cutting is through, she will undoubtedly feel as full of life as she did way back when she was young and she not only decided that a cutting was indicated but then performed the procedure herself. She at least has one thing right in all this. Life is pain. Without it, there is very little way of knowing it’s real. And what sort of pain could possibly trouble someone with everything except the chance that it might be lost? She might instead try pinching herself like people who aren’t sure they aren’t dreaming do, but that likely wouldn’t suffice.

For me, I like to inflict pain by taking a good long run. I really know I’m alive somewhere about the eighth or ninth mile. But to each their own.

Book Review: Sapiens: A Brief History of Humankind, by Yuval Noah Harari (2011, Israel; 2015, US)

This is the rarest of books for me. I bought it in hardcover, before it came out in paperback. I am in no mad rush to read the latest books. And even after a lifetime of reading, there’re still constellations of great books I’ve yet to read. Practically all of which can be purchased in convenient and inexpensive paperback form. I actually prefer paperback books for reading. They don’t look as good on the bookshelf, but I buy books to read, not to impress people with what I have displayed in my bookshelf, and the physical act of reading is easier with a paperback. So it was quite unusual that I bought this book in hardback. But then, I used a Christmas gift card, so I didn’t feel so bad about the extra money. And from everything I could tell before purchase, the book would make a really good read of just the sort that I enjoy. So the hardback version of Sapiens was a holiday gift from me to me.

Turns out, I was pretty good to myself with this one. The binding is quite luxurious, with that heavy-stock shiny paper with which coffee table books are bound. But that is the only resemblance to a coffee table book it bears. Unlike the average coffee table tome, and more important than the binding, this is a quite profitable read. It has a few pictures and illustrations interspersed throughout, but not enough that its gist might be discerned by flipping through it like a catalog.

It is not an academically challenging book. The writing is crisp and clear and free of jargon. And while it seems there is no subject too arcane or erudite for Harari to explore regarding the history of H. sapiens, the intellectual byways he travels are readily accessible to the layman. The aptly named chapter titles run the gamut from “A Day in the Life of Adam and Eve” to “The Law of History” to “The Law of Religion”, etc. There are few stones left unturned so far as the saga of mankind’s history and development is concerned.

Harari lumps development into three eras he calls revolutions—the Cognitive Revolution, which he claims kick-started history 70,000 years ago; the Agricultural Revolution, which sped it up about 12,000 years ago; and the Scientific Revolution, which started about 500 years ago.

What Harari calls the Cognitive Revolution is the time period that started with mankind leaving its East African Eden to disperse, in less than 50,000 years in most cases, throughout the globe. It was during this era that mankind learned to make and use sophisticated tools and became artistic and more linguistically inclined. Harari believes that during this period man learned to think abstractly—as he puts it, to conjure fictitious entities like gods and devils, and to fabricate stories about his and their origins and histories. Like so many others, Harari thinks that the use of the mind to imagine things that don’t exist, and the use of language for communicating ideas both abstract and concrete is what makes H. sapiens who he is. He acknowledges the similarities that H. sapiens has with all the other homo species, but believes language usage and mental capacity distinguish H. sapiens from them.

But Harari’s claims beg a few questions. Why 70,000 years ago? What changed that yielded this explosion in cognitive power 70,000 years ago while modern H. sapiens had been quiescently living in East Africa for at least that long without a historical murmur? It had somehow to be genetic because we know the environment didn’t significantly change. Was it a chance mutation that quickly swept through the small H. sapiens population like an Ebola virus that was opposite in its effect, enhancing their survival and propagation prospects by some mechanism? Nobody really knows. But the notion that a Cognitive Revolution occurred, without even trying to attribute causes to it, is something like knowing that Napoleon lost at Waterloo but not knowing why. It’s history’s version of kissing your sister (something which Napoleon might have actually enjoyed, as he was suspiciously close to his sister, if the darker aspects of his history are to be believed).

The idea of an Agricultural Revolution is much more widely embraced because the reason for its development is readily understood—through the domestication of plants and animals, mankind figured out how to make food come to him instead of having to chase after or forage for it. As he gained skill in domestication and cultivation, he was able to produce many more calories than he alone could eat, thus freeing up others of the community to spend time in activities that didn’t involve food production. Thus was society stratified and specialized. There arose ruling classes and intellectual classes (mostly shamans, medicine men and fortune tellers at first) and soldierly classes, etc. Always near the bottom were the producers, the class upon which all the others depended. Thus has society been illogically stratified ever since the first field of wheat was sown.

The Agricultural Revolution is with us yet today, and is directly responsible for what came next, the Scientific Revolution. How so? It took a secure, well-fed, critical mass of thinkers to begin the train of questions that would finally lay bare, during the late Middle Ages, the myths and lies and superstitions cluttering the human psyche about the nature of the universe. Without agricultural surpluses, the Agricultural Revolution would not have been a Revolution and the Scientific Revolution would not have been possible.

Harari justifiably points out that the Scientific Revolution was European in origin. It wasn't in Asia or Africa or the Americas where gods were cast aside for the clarity of reason in explaining the nature of nature. It was driven by European thinkers, which is one reason why Europe came to rule the world within a few hundred years of the revolution's inception, in the era generally called today the Renaissance or Early Enlightenment (the other reason being that Europe wanted to rule the world, whereas, for instance, China and India had little interest in what lay beyond their naturally-bounded empires).

The Agricultural Revolution was not an unmitigated good. Mankind had to fit its hunting and foraging genes into the straitjacket of domesticity in a period of time far too short for its genetic code to change enough to accommodate the new way of life. When mankind domesticated corn and wheat and sunflowers and dogs and goats and sheep, what he really got in the bargain was that he domesticated himself, a fact of which I am constantly reminded anytime I see someone being led around by their dog, at the ready to scoop up its excrement when it decides to go wherever, of course, it wants. But it was not only detrimental to man. The Agricultural Revolution devastated biodiversity, killing off vast numbers of species in favor of only those useful to H. sapiens. What species extermination the Cognitive Revolution failed to complete as mankind sprawled to every corner of the globe, killing off other human species and large mammals along the way, the Agricultural Revolution often finished.

My main quibble with the insights and prognostications offered by Harari arises from his linear, progressive view of history. A great many people, professional historians and lay people alike, view history as a story about the fulfillment of some purposeful or meaningful aspect of mankind's journey through time. But so far as we know, it is not a journey, and mankind has no purpose beyond simply being, a truth Harari admits late in the book, even after having spent most of it pandering to a popular audience he knows is reading to find meaning and purpose in the confusion of existence through the sort of knowledge a history of mankind imparts.

Harari predicts that mankind might one day conquer the biochemical frontiers of the body so effectively that we become what he calls “a-mortal”, not immortal, because stepping in front of a bus will still kill us, but a-mortal, because old age and disease won't. He thinks this might happen in his own lifetime (he's about 40). I seriously doubt any such thing happens, mainly because it's not clear that being a-mortal would offer any great improvement to the human condition. In fact, as the angst and ennui wrought by the surpluses of the Agricultural Revolution attest, the further removed we become from our core existence as mammalian animals of the homo genus, the less content with our lives we seem to become.

People in the developed world already have little worry about where their next meal will come from. The ease with which food is acquired is as much bane as benefit. Obesity and boredom abound. There is little point to life when its continuation is more or less assured, which is how it has come to pass that the most valuable commodities these days are ways to infuse meaningless moments with purpose and passion. It is no accident that iPhones and football-playing skills are highly desirable items in the wake of the Agricultural/Scientific Revolutions. When acquiring food is no problem, what other way is there to fill the meaningless hours of a day except through personal and collective entertainments?

With a-mortal life providing a new layer of existential certainty and thereby meaninglessness, it’s not hard to imagine how utterly despondent life could become. E.O. Wilson pointed out that the central problem of collective human activity is that there is no point to it beyond the immediate continuation of the individual gene-carrying vessel—the body—through time and space. The point of being is being, and a too-easy-to-assure being robs being of its purpose. When the continuation of being is difficult and fraught with uncertainty, every moment is purposeful and full. Ask a soldier in combat, or a mother whose child is dying of starvation, whether their moments are meaningful.

If what Harari predicts comes to pass, and individual human beings are afforded the opportunity to live an a-mortal life, it may well signal the end of civilization as it is now known. To be sure, civilization was not designed to accommodate hunter/gatherer genes, but it has nonetheless managed to survive. How much less was it designed to accommodate a-mortal human beings? Imagine the severe mismatch between modern life and our ancient-history-besotted genes that would obtain. There is no way the human genome could have prepared for such a contingency. The body is designed to assume that time is always limited; the mind knows things no other way. What if it weren't? The results might not be pretty. Alas, I don't expect to be losing any sleep over it. Harari seems a bit overly enamored, sort of like the financial markets about now, with the potential of biochemistry to change the essence of the H. sapiens condition.

Harari ends with a poignant observation that if the newly discovered biochemical powers humans now possess are unleashed, never mind a-mortality, humans could be designed from the ground up for genetic superiority. This implies that a genetic-perfection arms race could ensue, not unlike the nuclear arms race of the just-ended era. If genetic perfectionism becomes acceptable, H. sapiens could ultimately go extinct by evolving into a different form. But that's not so radical an idea. In fact, unless H. sapiens is the lone exception among all living creatures, it is guaranteed that he will go extinct, either by dying out or by evolving into a new and different species.

Take away a few quibbles like those just explained, and Sapiens is an outstanding book. For the reader who is generally unfamiliar with the history of H. sapiens, from its humble beginnings to its world-altering present, the book should enlighten and entertain. For people with a good foundational knowledge of the basic contours of anthropology, sociology, economics, history, biology, philosophy, etc. (as I count myself), the book offers a good refresher course while simultaneously presenting original insights and perspectives in those fields. As I was reading, I couldn't help thinking that this is the book I have always wanted to write. Perhaps one day. For the expert in those various fields, Harari does a good job of presenting all the extant and viable theories of how things were, and why.

In short, it is a very, very good book. Maybe even a masterpiece. Harari is brilliant and witty and insightful, and it shows on practically every page. Everyone should read it. Don’t wait for the paperback.
