Thursday, May 17, 2018

Reading New Thoughts: Dolnick’s Seeds Of Life and the Science of Our Disgust


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Edward Dolnick’s “The Seeds of Life” is a fascinating look at how scientists figured out exactly how human reproduction works.  In particular, it shows that until 1875 we still were not sure what fertilized what, and that until the 1950s (with the discovery of DNA’s structure) we did not understand how that fertilization led to a fully formed human being.  Here, however, I’d like to consider what Dolnick says about beliefs before scientists began (and finished) their quest, and those beliefs’ influence on our own thinking.
To summarize very briefly, Dolnick says that most cultures guessed that the man’s seminal fluid was “seed”, while the woman was a “field” to be sown (and, tellingly, most if not all of those recording these cultural beliefs were male).  Thus, for example, the Old Testament talks about “seed” several times.  An interesting variant was the idea that shortly after fertilization, seminal fluid and menstrual blood combined to “curdle” the embryo, the way milk curdles into cheese – e.g., in the Talmud and in Hindu writings, seminal fluid, being milky, supplied the white parts of the next generation, like bone, while menstrual blood, being red, supplied the red parts, like blood.  In any case, the seed/field belief in some ways casts the woman as inferior – thus, for example, infertility is always the woman’s fault, since the seeds are fine but the field may be “barren” – another term from the Bible.
In Western culture, early Christianity superimposed its own additional concerns, which affected our beliefs not just about how procreation worked and about the relative inferiority or superiority of the sexes, but also our notion of “perfection”, both morally and with regard to sex.  The Church and some of its Protestant successors viewed sex as a serious sin “wanting a purpose [i.e., if not for the purpose of producing babies]”, including sex within marriage.  Moreover, the Church, with its heritage in Platonism, viewed certain forms as more perfect – and therefore less disgusting – than others, and I would suggest that many implicit assumptions of our culture derive from this.  The circle is perfect, hence female breasts are beautiful; smooth surfaces and straight lines are beautiful, while body hair, the asymmetrical male member, and the crooked labia are not.  Menstrual blood is messy, “unclean,” and disgusting, as is sticky, messy seminal fluid.

Effects on Sexism


It seems to me that much more can be said about differing cultural attitudes towards men and women based on the seed/field belief.  For one thing, the seed/field metaphor applies in all agricultural societies – and until the early 1900s, most societies were almost completely agricultural rather than hunter/pastoral or industrial.  Thus, this way of viewing women as inferior was dangerously plausible not only to men, but also to women.  In fact, Dolnick records examples in Turkey and Egypt of modern-day women believing in the seed/field theory and therefore women’s inferiority in a key function of life.
Another implication of the seed/field theory is that the “nature” of the resulting children is primarily determined by the male, just as different types of seed yield different plants.  While this is somewhat counteracted by the obvious fact that, physically, children tend to favor each parent more or less equally, there is some sense in literature such as the Bible that some seed is “special” – Abraham’s seed will pass down the generations, marking out his Jewish descendants for special attention from God.  And that, in turn, can lead naturally to the idea that mingling other men’s seed with yours can interfere with that specialness, hence wives are to be kept away from other men – and that kind of control over the wife leads inevitably to the idea of women as at least partially property.  And finally, the idea of the woman as passive receptacle of the seed can lead men to view a woman’s active desire for sex, or her orgasm, as an indication of mentally unbalanced “wantonness”, further reinforcing the (male) impression of women’s inferiority.  
I find it not entirely coincidental that the first major movements toward feminism occurred soon after Darwin’s account of evolution (implicitly even-handed between the sexes) and the sperm-and-egg model of reproduction were established as scientifically superior alternatives to Biblical and cultural beliefs.  And I think it is important to realize that, with our understanding of genetic inheritance via DNA still being examined and revised, the role of culture rather than inheritance in defining “male” and “female” is still in the ascendant – as one geneticist put it, we now know that sex is a spectrum, not either-or.  So the ideas of both similarity and “equality” between the sexes are now very much science-based.
But there’s one other possible effect of the seed/field metaphor that I’d like to consider.  Is it possible that the ancients decided that there was only so much seed that a man had, for a lifetime?  And would this explain to some extent the abhorrence of both male masturbation and homosexuality that we see in cultures worldwide?  Think of Onan in the Bible, and his sin of wasting his seed on the barren ground …

Rethinking Disgust


“Girl, Wash Your Face” (by Rachel Hollis, one of the latest examples of the new breed who live their lives in public) is, I think, one of the best self-help books I have seen, although it is aimed very clearly at women rather than men – because many of the things she suggests are perfectly doable and sensible, unlike the advice in the many self-help books whose promised financial success can, in a competitive world, be achieved only by a few.  What I also find fascinating about it is the way in which the “norms” of sexual roles have changed since the 1950s.  Not only is the author running a successful women’s-lifestyle website with herself as overworking boss, but her marriage embodies what she views as her vision of Christianity, complete with a positive view of sex primarily on her terms.
What I find particularly interesting is how she faced the age-old question of negotiating sex within marriage.  What she decided was that she was going to learn how to want to have sex as a norm, rather than being passively “don’t care” or disgusted by it.  I view this as an entirely positive approach – it means that both sides in a marriage are on the same page (more or less) with the reassurance that “not now” doesn’t mean “not for a long time” or “no, I don’t like you”.  But the main significance here is that it offers a specific way of overcoming a culture of disgust – about sex, among other things.
I believe that the way it works is captured best by Alexander Pope:  “Vice is a monster of so frightful mien/As, to be hated, needs but to be seen;/Yet seen too oft, familiar with her face,/We first endure, then pity, then embrace.”  The point is that it is often not vice that causes the disgust, but rather disgust that causes us to call it vice – as I have suggested above.  And the cure for that disgust is to “see it oft” and become “familiar” with it, knowing that we will eventually move from “enduring” it, to finding it normal, to “embracing” it. 
Remember that, as far as I can tell, we can’t help feeling that everything about us is normal or even nice, including excrement odor, body odor, messiness, maybe fat, maybe blotches or speech problems – and yet, culturally and perhaps viscerally, the same things about other people disgust us (or the culture tells us they should).  And therefore, logically, we should be disgusted by ourselves as well – as we often are.  Moreover, in the case of sex, the disgusting can also seem forbidden and hence exciting.  The result, for both sexes, can be a tangled knot of life-long neuroses.  
The path beyond disgust, therefore, can lie simply in learning to view the disgusting as in a sense ours:  the partner’s body odor as our body odor, their fat as our love handles, etc.  But it is “ours” not in the sense of possession, but in the sense of being an integral part of a whole person who is now a vital part of your world and, yes, beautiful to you in an every-day sort of way, just as you can’t help thinking of yourself as beautiful.  This doesn’t happen overnight, but, just as the ability to ignore itches during Zen meditation inevitably comes, so this will happen in its own good time, while you’re not paying attention.
The role of science in general, not just in the case of how babies are made or sex, has typically been to undercut the rationale for our disgust, for our prejudices and for many of our notions of what vice is.  And thus, rethinking our beliefs in the light of science allows us to feel comfort that when we overcome our disgust about something, it is in a good cause; it is not succumbing to a vice.   And maybe, just maybe, we can start to overcome the greatest prejudice of all:  against crooked lines, imperfect circles, and asymmetry.  

Thursday, May 3, 2018

Perhaps Humans Are One Giant Kluge (Genetically Speaking)

Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Over the past year, among other pastimes, I have read several books on the latest developments of genetics and the theory of evolution.  The more I read, the more I feel that my background in programming offers a fresh perspective on these developments – a somewhat different way of looking at evolution. 
Specifically, the way in which we appear to have evolved into various flavors of the species Homo sapiens sapiens suggests to me that we ascribe far too much purpose to the evolution of our various genes.  Much if not most of our genetic material appears to have come from a process similar to what, in The New Hacker’s Dictionary and in common computer slang, is called a kluge.
Here’s a brief summary of the definition of kluge in The New Hacker’s Dictionary (Eric Raymond), still my gold standard for wonderful hacker jargon.  Kluge (pronounced kloodj) is “1.  A Rube Goldberg device in hardware/software, … 3.  Something that works for the wrong reason, … 5. A feature that is implemented in a ‘rude’ manner.”  I would add that a kluge is just good enough to handle a particular case, but may include side effects, unnecessary code, bugs in other cases, and/or huge inefficiencies. 
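To make the software sense of the term concrete, here is a deliberately klugey little sketch in Python (purely illustrative – the function and its “patch history” are invented):

# A kluge: a calendar helper whose wrong core logic was patched one
# complaint at a time, until it returns the right answer for every month --
# for the wrong reasons, with unnecessary code, and with no guarantee that
# the next "fix" won't break an earlier one.
def days_in_month(month):
    """Days in a month of a non-leap year, months numbered 1-12."""
    if month == 2:
        return 28                    # patch #1: February was "weird"
    if month in (8, 10, 12):
        return 31                    # patch #2: late-year months were off by one
    if month in (9, 11):
        return 30                    # patch #3: ...and so were these
    return 31 if month % 2 == 1 else 30   # the original wrong guess, still doing most of the work

print([days_in_month(m) for m in range(1, 13)])   # happens to be correct for all twelve months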

Our Genes as a Computer Program

(Note: most of the genetics assertions in this post are plundered from Adam Rutherford’s “A Brief History of Everyone Who Ever Lived”)
The genes within our genome can be thought of as a computer program in a very peculiar programming language.  The primitives of that language are the four nucleotide “letters” abbreviated A, G, C, and T.  The statements of the language, effectively, are of the form IF (the state in the cell surrounding this gene is x AND the gene has value y) THEN (trigger a sequence of chemical reactions z, which may not change the state within the cell but does change the state of the overall organism).  Two peculiarities:
1.       All these gene “statements” operate in parallel (the same state can trigger several genes).
2.       The program is more or less “firmware” – that is, it can be changed, but over short periods of time it isn’t. 
Obviously, given evolution, the human genome “program” has changed – quite a lot.  The mechanism for this is mutation:  changes in the “state” outside an instance of DNA that physically alter an A, G, C, or T, delete or add genes, or change the order of the genes on one side of the chromosome or the other.  Some of these mutations occur within the lifetime of an individual, as cells carry out their programmed imperatives to perform tasks and divide into new cells.  Thus, one type of cancer (we now know) is caused when mutation deletes some genes on one side of the DNA pairing, resulting in deletion of the statement (“once the cell has finished this task, do not divide again”).  It turns out that some individuals are much less susceptible to this cancer because they have longer chains of “spare genes” on that side of the DNA, so that it takes much longer for a steady, statistically random stream of deletions to delete the statement itself.
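As a purely illustrative Python sketch of this metaphor (the triggers, values, and actions below are invented, and real gene regulation is vastly more complicated):  each gene is a condition-action “statement”, all statements are evaluated in parallel against the same cell state, and mutation is a rare physical edit to the “firmware”.

import random

# Each gene "statement" reads: IF (cell state is `trigger` AND the gene carries `needs`)
# THEN (fire `action`, which changes the organism rather than the cell state).
# `carries` is what the gene actually holds right now; mutation can change it.
genome = [
    {"trigger": "low_oxygen",         "needs": "A", "carries": "A", "action": "make_more_red_cells"},
    {"trigger": "low_oxygen",         "needs": "G", "carries": "G", "action": "raise_heart_rate"},
    {"trigger": "cell_division_done", "needs": "C", "carries": "C", "action": "stop_dividing"},
]

def express(genome, cell_state):
    """Evaluate every statement against the same state, in parallel."""
    return [g["action"] for g in genome
            if g["trigger"] == cell_state and g["carries"] == g["needs"]]

def mutate(gene):
    """The 'firmware' changes only by physical accident: here, a random point change."""
    return dict(gene, carries=random.choice("AGCT"))

print(express(genome, "low_oxygen"))          # both low-oxygen statements fire at once
genome[2] = mutate(genome[2])                 # damage the 'stop dividing' gene...
print(express(genome, "cell_division_done"))  # ...and that statement may no longer fire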

Evolution as an Endless Series Of Kluges

Evolution, in our computer-program model, is the arrival of new mutations (i.e., ones not already present somewhere in the species’ population).  The accepted theory of the constraints that determine which new mutations prosper over the long run is natural selection. 
Natural selection has been approximated as “survival of the fittest” – more precisely, survival of genes and gene variants because their carriers are the best adapted to their physical environment, including competitors, predators, mates, and climate, and therefore are most likely to survive long enough to reproduce and to out-compete rivals for mates.  The sequencing of the human genome (and the genomes of other species) has given us a much better picture of evolution in action, as well as of human evolution in the recent past.  Applied to the definition of natural selection, it suggests somewhat different conclusions:
·         The typical successful mutation is not the best for the environment, but simply one that is “good enough”.  An ability to distinguish ultraviolet and infrared light, as the mantis shrimp can, would clearly be useful in most environments; most other species, including humans, wound up unable to see outside the “visible spectrum.”  Likewise, the vertebrate eye is wired “backwards”:  the light-sensing cells at the back of the eye sit behind the nerves and blood vessels that serve them, creating a blind spot – a layout no designer starting from scratch would choose.
·         Just because a mutation is harmful in a new environment, that does not mean it will go away entirely.  The gene variant causing sickle-cell anemia is present in 30-50% of the population in much of Africa, the Middle East, the Philippines, and Greece.  Its apparent benefit is that carriers who would otherwise die early in life from malaria survive through most of the period when reproduction can happen.  However, indications are that the mutation is not disappearing fast, if at all, among descendants living in areas not affected by malaria.  In other words, the relative lack of reproductive success for those afflicted by sickle-cell anemia in the new environment is not enough to eradicate the variant from the population.  In the new environment, the sickle-cell variant is a “bug”; but it’s not enough of a bug for natural selection to operate.
·         The appendix serves no useful purpose in our present environment – it’s just unnecessary code, with appendicitis a potential “side effect”.  There is no indication that the appendix is going away.  Nor, despite recent sensationalizing, is red hair, which may simply be a side effect of northern populations’ having less need for eumelanin to protect against the damaging effects of direct sunlight.
·         Most human traits and diseases, we are finding, are not determined by one mutation in one gene, but rather are the “side effects” of many genes.  For example, to the extent that autism is heritable (and remembering that autism is a spectrum of symptoms and therefore may be multiple diseases), no one gene has been shown to explain more than a fraction of the heritable part.
In other words, evolution seems more like a series of kluges:
·         It has resulted in a highly complex set of code, in which it is very hard to determine which gene-variant “statement” is responsible for what;
·         Compared to a set of genes designed from the start to result in the same traits, it is a “rude” implementation (inefficient and with lots of side-effects), much like a program consisting mostly of patches;
·         It appears to involve a lot of bugs.  For example, one estimate is that there have been at least 160,000 new human mutations in the last 5,000 years, and about 18% of these appear either to increase inefficiency or to be potentially harmful – but not, it seems, harmful enough to trigger natural selection (a toy simulation of this point follows below).
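Here is the kind of toy simulation I have in mind (purely illustrative – the population size, starting frequency, and fitness costs are invented, and this is not a model of sickle-cell anemia specifically, which also protects carriers where malaria is present).  It just shows that a variant with a small reproductive cost takes a very long time to disappear, while one with a large cost is scrubbed out quickly:

import random

def simulate(pop_size=1000, start_freq=0.2, fitness_cost=0.01, generations=40, seed=1):
    """Toy population model: each generation, the chance that a newborn carries the
    variant is its current frequency, discounted by the variant's fitness cost; the
    rest is binomial luck.  Weak selection is easily swamped by that luck."""
    random.seed(seed)
    freq = start_freq
    for _ in range(generations):
        p = freq * (1 - fitness_cost) / (freq * (1 - fitness_cost) + (1 - freq))
        freq = sum(random.random() < p for _ in range(pop_size)) / pop_size
        if freq in (0.0, 1.0):
            break
    return freq

# Roughly 40 human generations is on the order of a thousand years.
print("frequency after 40 generations, 1% cost: ", simulate())                    # typically still well above zero
print("frequency after 40 generations, 30% cost:", simulate(fitness_cost=0.3))    # typically at or near zero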

Variations in Human Intelligence and the Genetic Kluge

The notion of evolution as a series of kluges resulting in one giant kluge – us – has, I believe, an interesting application to debates about the effect of genes vs. culture (nature vs. nurture) on “intelligence” as measured imperfectly by IQ tests. 
Studies of nature vs. nurture have not yet pinned down the percentage of intelligence variation attributable to each (Rutherford says only that variations from gene variance “are significant”).  A 2013 survey of experts at a conference shows that the majority think 0-40% of intelligence variation is caused by gene variation, and the rest by “culture”.  However, the question that has caused debate is how much of that gene variance is variance between individuals in the overall human population and how much is variance between groups – typically, so-called “races” – each with its own “average intelligence.” 
I am not going to touch on the sordid history of race profiling at this point, although I am convinced it is what makes proponents of the “race” theory blind to recent evidence to the contrary.  Rather, I’m going to conservatively follow up the chain of logic that suggests group gene variance is more important than individual variance. 
We have apparently done some testing of gene variance between groups.  The second-largest such variance is between Africans (not African-Americans) and everyone else – but the striking feature is how very little difference (compared to overall gene variation in humans) that distinction involves.  The same process has been carried out to isolate even smaller amounts of variance:  East Asians and Europeans/Middle Easterners show up in the top 6, but Jews, Hispanics, and Native Americans don’t show up in the top 7. 
What this means is that, unless intelligence is affected by one or only a few genes falling in those “group variance” categories, most of the genetic variance is overwhelmingly likely to be individual.  And, I would argue, there’s a very strong case that intelligence is affected by lots of genes, as a side-effect of kluges, just like autism.
First, over most of human history until the last 250 years, the great bulk of humans, African or not, have been hunters or farmers, with no reading, writing, or test-taking skills, and with natural selection for particular environments apparently focused on the physical (lactose tolerance among European dairy-keepers, for example) rather than the intelligence-related (e.g., larger brains or new brain capabilities).  That is, there is little evidence for natural selection targeted at intelligence, but lots for natural selection targeted at other things. 
Second, as I’ve noted, it appears that human traits and diseases usually involve large numbers of genes.  Why should “intelligence” (which, to a first approximation, applies mostly to humans) be different?  Statistically, it shouldn’t.  And as of yet, no one has even been able to find one gene significantly connected to intelligence – which again suggests lots of small-effect genes.
So let’s imagine a particular case.  Suppose 100 genes affect intelligence variation, in equal amounts (1% each, plus or minus).  Group A and Group B share all but 10 of those genes.  To cook the books further in favor of group differences, suppose Group A has 5 unique “plus” genes and Group B has 5 unique “minus” genes (statistically, each should on average have equal numbers of pluses and minuses).  In addition, suppose gene variance as a whole accounts for 50% of overall variation.  Then 10/95 of the genetic variation in intelligence (about 10.5%) is explained by whether an individual is in Group A or B.  This translates to about 5.2% of the overall variation being due to the genetics of the group, 44.8% being due to individual genetic variation, and 50% being due to nurture.
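For the record, here is that back-of-the-envelope bookkeeping as a few lines of Python (the numbers are this post’s illustrative assumptions, not measurements):

# Illustrative assumptions from the example above, not data.
genes_per_person = 95      # 90 genes shared by both groups + 5 unique to one's own group
genes_differing  = 10      # 5 unique "plus" genes in A + 5 unique "minus" genes in B
genetic_share    = 0.50    # fraction of overall intelligence variation that is genetic at all

group_share_of_genetic = genes_differing / genes_per_person        # ~10.5%
group_share_overall    = genetic_share * group_share_of_genetic    # ~5.3% (rounded to 5.2% above)
individual_genetic     = genetic_share - group_share_overall       # ~44.7% (rounded to 44.8% above)
nurture                = 1 - genetic_share                         # 50%

print(f"group genetics:      {group_share_overall:.1%}")
print(f"individual genetics: {individual_genetic:.1%}")
print(f"nurture:             {nurture:.1%}")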
Still, someone might argue, those nature vs. nurture survey participants have got it wrong:  gene variation explains all or almost all of intelligence variation.  Well, even in that case, belonging to a group would explain only about 10.5% of intelligence variation, with individual genetic variation accounting for roughly nine times as much.  Moreover, under the kluge model, the wider the claimed variation between Race A and Race B – between, say, Jewish-Americans and African-Americans – THE MORE LIKELY IT IS THAT NURTURE PLAYS A LARGE ROLE.  First of all, “races” do not correspond at all well to the genetic groups I described earlier, and so are more likely to have identical average intelligence than the groups I cited.  Second, because group variation is so much smaller than individual variation, group genetics is capable of explaining much less variation than nurture (0-10.5% of overall variation, vs. 0-100% for nurture).  And I haven’t even bothered to discuss the Flynn Effect, which shows measured intelligence increasing over time, across groups, far more rapidly than natural selection can operate – a clear indication that nurture is involved.
Variation in human intelligence, I say, isn’t survival of the smartest.  It’s a randomly distributed side effect of lots of genetic kluges, plus the luck of the draw in the culture and family you grow up in.

Thursday, April 26, 2018

Reading New Thoughts: Two Different Thoughts About American History


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.

In my retirement, I have, without great enthusiasm, looked at various books on American History.  My lack of interest stems from my impression that most of these, in the past, were (a) overly impressed by certain personalities (Jefferson and Lee spring to mind) and (b) unable to take the point of view of foreign actors, or even of African and Native Americans. 

As it turns out, that is changing, although not universally, and so there are, imho, some fascinating re-takes on some of the historical narratives such as those I found growing up in my American History textbooks.  I’d just like to call attention here to two “new thoughts” about that history that I recently ran across.  I will state them first in abbreviated and provocative form:

1.       President John Tyler may have played the largest role in making the Civil War inevitable.

2.       The most dangerous part of the American Revolution for the American cause was AFTER Valley Forge.

Now, let’s do a deeper dive into these.

John Tyler Did It


The time from the War of 1812 to the Mexican-American War has always been a bit of a blank spot in my knowledge of American History.  The one event I remember that seems to mean much is Jackson’s termination of the Bank of the United States, which subjected the U.S. to severe rather than mild recessions for a century – including, of course, the effect on the Great Depression.

A recent biography of John Quincy Adams adds a great deal to my understanding of this period, even if it is (necessarily) overly focused on Adams himself.  As it turns out, Adams was involved both in American diplomacy abroad from the Revolution to 1830 and in its domestic politics from 1805 to 1848, when he died.  In that span, he saw three key developments in American politics:

1.       The change in slavery’s prominence in politics from muted to front and center.  We tend to see the Compromises leading up to the Civil War as springing out of the ground at the end of the Mexican War; on the contrary, it appears that the key event was the formation of a pro-slavery Republic of Texas in the mid-1830s, along with (and related to) a new, much more uncompromising anti-slavery movement arriving around 1830.  That, in turn, was a reaction to a much more expansionist slave-state politics after the War of 1812. 

2.       A change in the Presidential election system from “election by elites” (electors were much less bound to particular parties) to a much more widespread participation in elections as new states arrived (although, of course, the electorate still remained male, white, and Northern European). 

3.       A related change in the overall political system from “one-person” parties to a real two-party system.  For the value of that, see modern Israel, whose descent from a two-party system to one in which most if not all parties are centered around one person and mutate or vanish whenever that person leaves the scene has brought dysfunction, short-term and selfish thinking, and paralysis to policy-making.

The way in which these changes occurred, however, had a great deal to do with the ultimate shape of the Civil War.  Here’s my attempt at a quick synopsis:  From 1812 to 1830, the old style of politics, in which each Democratic-Republican President effectively “crowned” his successor by making him Secretary of State, was dominant.  During that time, also, there was a kind of “pause” in westward expansion during which the rest of the continent east of the Mississippi became states.  And Adams, during his time as Secretary of State, negotiated with Spain the treaty that secured American claims to the territory that later became the Dakotas, Wyoming, Idaho, and (most importantly) Oregon and Washington. 

When Adams was elected, Jackson was the closest competitor in a four-person race (Jackson actually won more electoral and more popular votes, but not a majority, which threw the election to the House).  Inevitably, in the next election, as the power of the new Western states made itself felt, Jackson was a clear victor – but parties were still one-person things.  Jackson, a slave-holder and anti-Native-American bigot, certainly did African and Native Americans no favors personally, but did not strengthen the slave states’ political power significantly.  Moreover, his successor, Martin Van Buren, is credited as the true founder of the two-party system, very much along today’s lines:  a party of Federal-government spending on “improvements” to supplement state and local spending (the Whigs) vs. a party along Jefferson’s lines of small government (the Democrats) – and these positions cut across slavery and anti-slavery lines. 

Inevitably, the Whigs soon won an election against the Democrats, ushering in William Henry Harrison, a former governor in the old Northwest who promised to rein in the political power that the slave states exercised through the Democrats (who had, for example, imposed a “gag rule” to prevent Adams, by then in Congress, from speaking out against slavery).  And this was important, because between the 1820s and the early 1840s, settlers had created a “slave-state Republic” in Texas with uncertain boundaries, and the slave states were clamoring to admit Texas as a state, which would effectively upset the balance between slave and free states. 

Unfortunately, Harrison died shortly after taking office, and John Tyler, a slave-state “balanced ticket” politician, took over.  He had no interest in the Whig party – rather, he sought to carve out a position close to the Democratic one, in order to create his own political party and get re-elected.  In doing so, he splintered his own party and sectionalized it as well.  What followed, as Polk and the Democrats annexed Texas and acquired the Southwest during the Mexican-American War, simply made the “irrepressible conflict” inevitable and immediate.

 But let’s imagine Harrison hadn’t died, and had instead served two terms.  It is very possible that the slave states would have benefited from “improvements”, and slavery expansion would have become less a matter of absolute necessity in their politics.  It is then possible that the notion of secession would not have been so universally accepted, nor the excesses of slaveholding immigrants in Missouri as easily excused, nor the pressure to make the North conform as extreme.  Thus, Kentucky and Tennessee would not have been as up for grabs when the Civil War started, the Confederacy would have been weaker, and the result quicker and less bloody.

All this is speculative, I know.  But to me, the biggest what-if about the Civil War is now:  What if John Tyler had never gotten his mitts on the Presidency?

It’s Not About Valley Forge


Up until recently, my view of the Revolutionary War, militarily speaking, was that Washington simply hung on despite financial difficulties, defeats, and near-victories squandered by subordinates, until he surmounted his troops’ starvation at Valley Forge and the British, frustrated, shifted their focus to the South.  Then, when Cornwallis failed to subdue the South and marched desperately north, Washington nipped down, wiped out his troops at Yorktown, and the war was effectively over, two years before the peace treaty was signed. 

However, a new book called “The Strategy of Victory” (SoV) presents a different picture.  It does so by (a) more completely presenting the British point of view, and (b) probing deeper into the roles of militia and regular army in the fighting.

SoV suggests that Washington’s overriding strategy was (a) keeping a trained regular army in existence, so that any British force venturing south beyond New York could be attacked in the flank and defeated in detail where possible, and (b) combining that regular army with militia who would, more or less, take the role of long-distance sharpshooters at the beginning of a battle.  At first, that strategy was highly successful:  it led to the “crossing the Delaware” victories, followed by a British conquest of Philadelphia that backfired because Washington sat on the communications and supply lines between New York and Philadelphia.  So the British went back to New York.

However, Clinton, the British general in New York, next devised a strategy (and this is after Valley Forge) to catch Washington out of his well-defended “fortress area”, by landing his troops in mid-New Jersey and heading inland.  In fact, he came surprisingly close to succeeding, and only a desperate last stand by a small portion of the militia and army allowed Washington to slip away again.  That was Closest Shave Number One.

Attention then shifted to the South, where a nasty British officer named Tarleton basically moved faster than any colonial resistance and steadily wiped out militia resistance in Georgia, then South Carolina, then into North Carolina.  Had he succeeded in North Carolina, it is very possible that he would have succeeded in Virginia, and then Washington really would have been caught between two fires.  But a magnificent mousetrap by Daniel Morgan – again combining regulars and militia, first to tempt Tarleton into battle and then to ensure he lost it – saved the day:  Closest Shave Number Two.  Without Tarleton’s regulars to maintain it, the British pressure on the inland Carolinas and Georgia collapsed, and Cornwallis’ move into Virginia had no effect beyond where his army moved, making the Virginia invasion pointless and allowing Washington to trap him.  Meanwhile, General Greene moved into the southern vacuum, his main concern being to preserve his regulars while doing so (again, Washington’s strategy), and was able to take over most of the Carolinas and Georgia again – as SoV puts it, he “took over all of Georgia while losing every battle.”   

However, Yorktown was not the end of the fight for the North.  It was always possible that the British would again sally from New York – if Washington were no longer there.  And so the next two years were a colossal bluff, in which a regular army of a couple of thousand was kept together with spit, baling wire, and monetary promises just to keep the British afraid to venture again out of New York.  Washington’s last address to his troops was not, says SoV, his thanks for faithful service; it was his apology for the lies he told them in order to hold the army together, meant to keep them from revolting rather than disbanding. 

One other point of this narrative applies to the historical American “cult of the militia” that still hangs over us in today’s venal legal interpretations of the Second Amendment.  SoV makes it very clear, as other American History books covering the period through the Civil War also do, that America could not have survived without a regular army, and that militia without a regular army are not sufficient to maintain a free society – whereas, in the Civil War, a regular army without militia made us more free.

Thursday, April 12, 2018

Reading New Thoughts: Grinspoon’s Earth in Human Hands and Facing the Climate-Change-Driven Anthropocene Bottleneck


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.

In my mind, David Grinspoon’s “Earth In Human Hands” raises two issues of import as we try to take a long-term view of what to do about climate change:

1.        How best do we approach making the necessary political and cultural changes to tackle mitigation – what mix of business/market, national/international governmental, and individual strategies?

2.       For the even longer term, how do we tackle a “sustainable” economy and a human-designed set of ecosystems?  Grinspoon claims that there are two opposing views on this – that of the “eco-modernists,” who say that we should design ecosystems based on our present setup, ameliorated to achieve sustainability, and that of the “environmental purists,” who advocate removing humans from ecosystems completely.

First, a bit of context.  Grinspoon’s book is a broad summary of what “planetary science” (how planets work and evolve on a basic level, with our example leavened by those of Venus and Mars) has to say about Earth’s history and future.  His summary, more or less, is this:  (a) For the first time in Earth’s history, the whole planet is being altered consciously – by us – and therefore we have in the last few hundred years entered a whole new geological era called the Anthropocene; (b) That new era brings with it human-caused global warming and climate change, which form a threat to human existence, and therefore to the existence of Anthropocene-type “self-conscious” life forms on this planet, a threat that he calls the “Anthropocene bottleneck”; (c) it is likely that any other such planets with self-conscious life forms in the universe face the same Anthropocene bottleneck, and other possible threats to us pale in comparison, so that surviving the Anthropocene bottleneck is a good sign that humanity will survive for quite a while.

Tackling Mitigation


Grinspoon is very forceful in arguing that probably the only way to survive the Anthropocene bottleneck is through coordinated, pervasive global efforts:  in particular, new global institutions.  Translated, that means not “loose cannon” business/market nor individual nor even conflicting national efforts, but science-driven global governance, plus changes in cultural norms towards international cooperation.  Implicitly, I think, his model is that of the physics community he is familiar with:  one where key information, shared and tested by scientific means, informs strategies and reins in individual and institutional conflicts. 

If there is anything that history teaches us, it is that there is enormous resistance to the idea of global enforcement of anything.  I myself tend to believe that it represents one side of a two-sided conflict that plays out in any society – between those more inclined toward “hope” and those inclined toward “fear”, which in ordinary times plays out as battles between “liberals” and “conservatives.” 

Be that as it may, Grinspoon does not say there is not resistance to global enforcement.  He says, however, that global coordination, including global enforcement, is a prerequisite for surviving the Anthropocene bottleneck.  We cooperate and thereby effectively mitigate climate change, or we die.  And the rest of this century will likely be the acid test.

I don’t disagree with Grinspoon; I just don’t think we know what degree of cooperation will be needed to deal with climate change in order to avoid facing the ultimate in global warming.  What he describes would be ideal; but we are very far from it now, as anyone watching CO2 rise over at Mauna Loa is well aware.  Rather, I think we can take his idea of scientifically-driven global mitigation as a metric and an “ideal” model, to identify the key areas where we are now falling down – failing to react to scientific findings on the state of climate change and the means of mitigation quickly, globally, and as part of a coherent strategy. 

Designing Sustainability


Sustainability and fighting climate change are not identical.  One of the concerns about fighting climate change is that while most steps toward sustainability are in line with the quickest path to the greatest mitigation, practically, some are not.  For example, farming almonds in California with less water than typical almond farming does indeed reduce the impact of climate-change-related water shortages, but it also encourages continued consumption of water-greedy almonds.  I would argue, in that case, that the more direct path toward climate-change mitigation (discouraging almond growing while reducing water consumption in general) is better than the quicker path toward sustainability (focusing on the water shortage alone).

This may seem arcane; but Grinspoon’s account of the fight between environmental traditionalists and eco-modernists suggests that the difference between climate-change-mitigation-first and sustainability-first is at least a major part of the disagreement between the two sides.  To put it another way, the traditionalists according to Grinspoon are advocating “no more people” ecosystems which effectively minimize carbon pollution, while the eco-modernists are advocating tinkering incrementally with the human-driven ecosystems that exist in order, apparently, to achieve long-term sustainability – thereby effectively putting sustainability at a higher priority than mitigation.

It may sound as if I am on the side of the traditionalists.  However, I am in fact on the side of whatever gets us to mitigation fastest – and here there is a glaring lack of mention in Grinspoon of a third alternative:  reverting to ecosystems with low-footprint human societies.  That can mean Native American, Lapp, Mongolian, or aborigine cultures, for example.  But it also means removing as far as possible the human impact exclusive of these cultures.

Let’s take a recent example:  the acquisition by Native Americans, with conservationist aid, of land around the Columbia River to enable restoration and sustainability (as far as can be managed with increasing temperatures) of salmon spawning.  This is a tradeoff:  in return for the removal of almost everything exacerbating climate change, possibly including dams, the Native Americans get to restore their traditional culture as far as possible in toto.  They will be, as in their own conceptions of themselves, stewards of the land. 

And this is not an isolated example (again, a recent book limns efforts all over the world to take the same strategy).  An examination of case studies shows that even in an impure form, the approach yields a clear OVERALL reduction in carbon pollution, while still providing “good enough” sustainability.  Nor do I see reflexive opposition from traditionalists on this one.

The real problem is that this approach is only going to be applicable to a minority of human habitats.  However, it does provide a track record, a constituency, and innovations useful for a far more aggressive approach towards mitigation than the one the eco-modernists appear to be reflexively taking.  In other words, it offers hope for an approach that uses human technology to design sustainable ecosystems, even in the face of climate change, with the focus on the ecosystem first and the humans second.  Human technology can make the humans fit the ecosystem, with accommodation for present human practice, rather than the other way around.

In summary, I would say that Grinspoon’s idea of casting how to deal with mitigation and sustainability as a debate between traditionalists and modernists misses the point.   With all the cooperation in the world, we still must push the envelope of mitigation now in order to have a better chance for sustainability in the long term.  The strategy should be pushed as far as possible toward mitigation-driven changes of today’s human ecosystems, but can be pushed toward what worked with humans in the past rather than positing an either/or humans/no-humans choice.

Monday, March 5, 2018

Reading New Thoughts: Harper’s Fate of Rome and Climate Change’s Effect On Empires


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Note:  My focus in these “reading new thoughts” posts is on new ways of thinking about a topic, not on a review of the books themselves.
Kyle Harper’s “The Fate of Rome” provides a new climate-change/disease take on that perennial hot topic, “reasons for the fall of Rome.”  Its implications for our day seem to me not a reason to assume inevitable catastrophe, but a caution that today’s seemingly resilient global economic structures are not infinitely flexible.  I believe that the fate of Rome does indeed raise further questions about our ability to cope with climate change. 
Climate Change’s Effect on the Roman Empire
As I understand it, “Fate of Rome” is presented as a drama in 5 acts:
1.      A “Roman Climatic Optimum” or “Goldilocks” climate in the Mediterranean allows the development of a state and economic system, centered on Rome and supporting a strong defensive military, that pushes Malthusian boundaries in the period up to about 150 AD.
2.      A transitional climatic period arrives, and runs for 200 years.  For several decades, the Antonine Plague (probably smallpox) rages and some regions tip into drought, leading to 10-20% population losses and massive invasion against a weakened military.  Order is then restored at a population level slightly lower than that of 150 AD.
3.      At about 250 AD, another plague – the Plague of Cyprian, probably a viral hemorrhagic fever – arrives, accompanied by widespread drought in most key food-supplying regions (the Levant, North Africa, and above all Egypt).  Northern and eastern borders collapse as the supply of soldiers and materiel dries up.  Again, recovery takes decades, and a new political order is built up, breaking the power of Roman senators and creating a new city “center” at Constantinople.
4.      At around 350 AD, a Little Ice Age arrives.  Climate change on the steppe, stretching from Manchuria to southern Russia, drives the Hsiungnu or Huns westward, pushing the existing Goth society on Rome’s northern border into Roman territory.  Rome’s western empire collapses as this pressure plus localized droughts leads to Gothic conquests of Gaul, Spain, North Africa, and Italy.  Rome itself collapses in population without grain shipments from abroad, but the economic and cultural structure of the western Roman state is preserved by the Goths.  In the early 500s, as the Eastern Empire recovers much of its population and economic strength, Justinian reconquers North Africa and much of Italy, again briefly and partially reconstituting the old Roman Empire.
5.      At 540 AD or thereabouts, bubonic plague, driven by changes in climate affecting its animal hosts in central Asia, arrives from the East.  The details of the illness are horrific, and it is every bit as devastating as the (also bubonic) Black Death of medieval times – 50-60% of the population dead, rich and poor, city and countryside affected equally, with recurrences over many decades.  Only Gaul and the north, now oriented to a different economic and social network, are spared.  Villas, forums, and Roman roads vanish.  The Eastern frontier collapses, again due to the decimation of military recruits and supplies, and a prolonged deathbed struggle with Persia ends in the conquest of most of both empires by Islam in the early 600s.  The only thing remaining of the old Roman state and its artifacts is a “rump” state in Anatolia and Greece. 
Implications for Today
As Harper appears to view it, the Roman Empire was a construct in some ways surprisingly modern, and in some ways very dissimilar to our own “tribe”-spanning clusters of nation-states.  It is similar to today in that it was a well-knit economic and cultural system that involved an effective central military and tax collection, and could effectively strengthen itself by trade in an intercontinental network.  It is dissimilar in that the economic system (until near the end) funneled most trade and government through a single massive city (Rome) that required huge food supplies from all over the Mediterranean; in that for most of its existence, the entire system rested on the ability of the center to satisfy the demands of regional “elites”, thus impoverishing the non-elite; and in that they had none of our modern knowledge of public health and medicine, and thus were not able to combat disease effectively. 
What does this mean for climate change affecting today’s global society?  There is a tendency to assume, as I have noted, that it is infinitely resilient:  Once disaster takes a rest in a particular area, outside trade and assistance complement remaining internal structures in recovering completely, and then resuming an upward economic path.  Moreover, internal public health, medicine, and disaster relief plus better advance warnings typically minimize the extent of the disaster.  The recovery of utterly devastated Japan after WW II is an example.
However, the climate-change story of Rome suggests that one of these two “pillars of resilience” is not as sturdy as we think.  Each time climate-change-driven disasters occurred, the Roman Empire had to “rob Peter to pay Paul”, outside military pressures being what they were and trade networks being insufficient for disaster recovery.  This, in turn, made recovery from disaster far more difficult, and eventually impossible. 
Thus, recovery from an ongoing string of future climate-change-driven disasters may not be something that can be sufficiently internally driven – and then the question comes down to whether all regional systems will simultaneously face ongoing disasters they cannot handle internally.  Granted, the facts that our system does not depend on regional elites and does not fund a single central city are signs of internal resilience beyond that of Rome.  But is the amount of additional internal resilience significant?  This does not seem clear to me.
What remains in my mind is the picture of climate-change-driven bubonic plague in Rome’s interconnected world. People die in agony, in wracking fevers or with bloody eyeballs and bloody spit, or they simply drop dead where they are, in twos and threes.  Death is already almost inevitable, when the first symptoms show, and there is no obvious escape.  If by some miracle you live through the first bout, you walk in a world of the stench of unburied bodies, alone where two weeks ago you walked with family, with friends, with communities.  All over your world, this is happening.  And then, a few years later, when you have begun to pick up the pieces and move back into a world of many people, it happens again.  And again.
If something like that happens today, our world is not infinitely resilient.  Not at all.   

Friday, March 2, 2018

The Transition From Agile Development to the Agile Organization Is Beginning to Happen


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Recently, new publications from CA Technologies via Techtarget arrived in my in-box.  The surveys mentioned in them confirmed to me that the agile organization or agile business is beginning to be a Real Thing, not just vendor hype. 
Five years ago, I wrote (but did not publish) a book on the evidence of agile development’s benefits and its implications for creating a truly agile business or other organization.  Now, it appears not only that the theoretical foundation has been laid for implementing at least agile processes in every major department of the typical business, but also that a significant subset of businesses now think that they have implemented agile according to that foundation across pretty much all of the enterprise – yes, apparently in a few cases including legal (what does legal-department agility mean?  I have no idea, yet).
So what are the details of this evidence?  And what benefits of an agile organization seem to be proving themselves?

The Solid Foundation of Agile-Development Benefits

It is now approaching a truism that agile development delivers benefits compared to traditional software-development approaches.  My own survey during my brief re-up at Aberdeen Group 9 years ago suggested improvements in the 20-30% range for project cost and speed, product quality, and customer satisfaction (with the obvious implication that it also decreased product-development risk by around that amount, as Standish Group studies also showed).  One striking fact was that agile achieved comparable advantages even when compared to traditional approaches that focused specifically on cost and/or quality.
One CA pub (“Discover the Benefits of Agile:  The Business Case for a New Way to Work”) extends these findings to agile development plus agile project management.  It says that a “summary of research” finds that agile delivered 29% improvements in cost, 50% in quality, 97% in “productivity” (something like my “speed”), 400% (!) in customer satisfaction, and 470% (!!) in ROI (a proxy for revenue and profit), compared to the “least effective” traditional approaches. 
While this may sound like cherry-picking, my research showed that the most effective traditional approaches were not that much better than the least effective ones.  So I view this CA-cited result as the “practice effect”:  experience with agile development has actually increased its advantage over all traditional approaches – in the case of customer satisfaction and profitability, by really large amounts.

The Theoretical Case For Business Agility

Note, as I did back when I did my survey, that agile development often delivers benefits that show up in the top and bottom line of the success-critical-software-developing business, even before the rest of the organization attempts to go agile.  So why would it be important to go the rest of the way and make most or all of the organization agile?
The CA pub “The State of Business Agility 2017” plus my own work suggest that potential benefits of “agile beyond software development” fall into three areas: 
1.      Hard-nosed top and bottom line benefits:  That is, effects on revenue, cost, and margin (profit).  For example, better “innovation management” via agile project management goes to the top line and eventually to the bottom line, and in some cases can be measured.
2.      “Fuzzy” corporate benefits, including competitive advantage, quality, customer satisfaction, speed to act and change strategies, and reduction in negative risks (e.g., project or IT-infrastructure failures) and “fire drills”.
3.      “Synergy” benefits stemming from most of the corporation being on the same “agile page” with coordinated and communicating agile processes, including better collaboration, better/faster employee strategy buy-in, better employee satisfaction through better corporate information, and better “alignment between strategy and execution.”
The results of the CA business-agility pub survey suggest that most respondents understand many but not all of these potential benefits before they take the first step towards the agile business.  I would guess, in particular, that they don’t realize the possible positive effects on combating “existential risks”, such as security breaches or physical IT-infrastructure destruction, as well as the effects on employee satisfaction and better strategy-execution alignment.

The Extent of Agile Organizations and Their Realized Benefits

Before I begin, I should note two caveats about the CA-reported results.  The first is that respondents are in effect self-selected to be farther along in agile-organization implementation and more positive about it.  These are, if you will, among the “best and the brightest.”  So actual agile-organization implementations “on the ground” are certainly far less than the survey suggests.
Second, CA’s definition of “agile” leaves out an important component.  CA’s project-management focus leads it to define agility partly as holding a project’s time and cost constant while varying its scope.  What that really means is that CA de-emphasizes the ability to make major changes in project aims at any point in the project.  In the agile-organization survey, this means an entire lack of focus on the ability to change a strategy incrementally (and bottom-up!) rather than just roll out a whole new one every few years.  And yet, “more agile” strategy change is at the heart of agility’s meaning and is business agility’s largest long-term potential benefit.
How far are we towards the agile organization?  By some CA-cited estimates, 83% of all businesses have the first agile-development step at least in their plans, and a majority of IT projects are now “agile-centric.”  Bearing in mind caveat (1) above, I note that 22% of CA-survey respondents say they are just focused on extending “agile” to IT as a whole, 17% are also working on a plan for full business agility, 19% have gotten as far as organizational meetings to determine agility-implementation goals, and 39% are “well underway” with rollout.  Ignoring the question about “momentum” in partial departmental implementations for a moment, I also note that 47 % say IT is agile, 36% that Project Management is, and marketing, R&D, operations/manufacturing (are they counting lean methodologies as agile?), and sales (!) are a few percentage points lower. 
Getting back to partial implementation, service/support seems to be the “new frontier.”  Surprisingly, corporate communications/PR is among the laggards even in implementation, along with accounting/finance, HR, and legal.  What I find interesting about this list is that accounting and legal are even in the conversation, indicating that people really are planning for some degree of “agile” in them.  And, of course, the CEO’s agility isn’t even in the survey – as I said 9 years ago, the CEO is likely to be the last person in the organization to “go agile.”  Long discussion, not relevant here.
How about benefits?  In the hard-nosed category, agile organizations increase revenue 37% faster and deliver 30% greater profit (according to an outside survey).  For the rest of the benefits, there is far less concrete evidence – the CA business-agility survey apparently did not ask what business benefits respondents had already seen from their efforts.  What we can deduce is that most of the 39% of respondents who said they were “well underway” believe that they are already achieving the benefits they understand, including most of the “fuzzy” and “synergy” benefits cited above.

Implications:  The Cat Is In The Details

At this point, I would ordinarily say to you the reader that you should move towards implementing an agile business/organization, bearing in mind that “the devil is in the details.”  Specifically, the CA surveys note that the complexity of the business and cultural/political opposition are key (and the usual) problems in implementation.  And, indeed, this would be a useful thing to know.
However, I also want to emphasize that there is a “cat” in the details of implementation:  a kind of Schrodinger’s Cat.  In quantum physics, as I understand it, a system can be in a superposition of states (e.g., exists, doesn’t exist) until we resolve it by measuring it (e.g., by “opening the box”).  Schrodinger imagined superposed states of “cat inside the box/no cat inside the box”, so that we wouldn’t know whether Schrodinger’s Cat existed until we opened the box.  In the same way, we not only don’t know what and how much in the way of benefits we get until we examine the implementation details, we won’t know just how agile we really are until we “open the box.”
Why does that matter?  Because, as I have noted above, the really big long-term potential benefit of business agility is, well, agility:  the ability to turn strategically, instantly, on a dime, and thereby ensure the business’ long-term existence, as well as its comparative success, much more effectively.  Just because some departments seem to be delivering greater benefits right now, that doesn’t mean you have built the basic infrastructure to turn on a dime.
And so, the cat is in the details.  Please open the box by testing whether your implementation details allow you to change overall business strategies fast.  And if there is no cat there, what should you do?
Why, change your agile-implementation strategy, of course.  Preferably fast.


Wednesday, February 28, 2018

Reading New Thoughts: Lifton’s Climate Swerve and the Proper Attitude Toward Climate Change


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.

One of the major issues among the “good guys” in climate change is, what attitude should we take towards the future in our political maneuverings?  Should we focus on the bright spots, the signs of hope, such as solar technology, knowing that we may be accused later of deception because these do not meet the needs of mitigating carbon pollution effectively?  Should we be brutally realistic, at the risk of persuading people that nothing effective can be done?

I find that Robert Jay Lifton’s “The Climate Swerve” provides a boost, more or less, to my own view of what we should do.  Based on his experience as a psychiatrist and physician fighting against nuclear war, he identifies 3 “psychologies” that dominate discussion of an oncoming catastrophe:

1.       Denial.  We are all familiar with climate deniers.

2.       “Psychic numbness”.  In this case, we “numb” the idea of nuclear war or climate change so that we can function in daily life without extreme anxiety.  The result of psychic numbness is that we feel that there is nothing we can do about the situation, and so we do very little.

3.       Facing the truth head-on.  The point here is that, because we no longer deceive ourselves, facing the truth does not necessarily lead to the kind of extreme anxiety that makes one unable to function.  Instead, says Lifton, it leads to “realistic hope.”  That is, in terms of anxiety, in the long run some hope is better than none.  

Thus, Lifton’s “climate swerve” is a global “swerve” – a global change of direction in thinking, towards psychology (3) as discussed above.

Note that this analysis is not the usual glib, other-oriented, sickness-focused psychoanalysis.  Rather, Lifton is talking about a global set of non-patients and personal experience.  Also, he is talking about the long term:   While climate denial is usually evidence of the usual psychological problems right now, psychic numbness is akin to what most of us do often in our lives, and its costs often outweigh its benefits only in the long run.

Facing What Truth?


Climate change differs from nuclear war in one key way:  In nuclear war, the catastrophe is immediate and total, while in climate change, the largest effects in the catastrophe are always in the future.  That means that in facing the truth, we need to face two things:  (1) What is the sequence of catastrophe in “business as usual” climate change, and (2) What are the effects of our efforts to mitigate and adapt instead of “business as usual”?

I find that the best analogy I can come up with for climate change’s sequence of catastrophe is an image of an enormous rock rolling down hill, picking up speed and momentum (I wrote a “children’s tale” short story about this once).  At first, it only kills a few shepherds high on the hill; but next it will kill the poor folk partway up the hill that cannot afford the housing of the well-off and rich; and then, finally, it will roll over us, the relatively well-off and rich.  Crucially, however, the earlier we push back against the rock (mitigate, slow the rate of carbon emissions), the easier it is to stop it, and the higher on the hill it stops.  In other words, no matter whether we’re talking now or 50 years from now:

·         Some of the disaster to come has already happened and will continue to happen; but,

·         A far greater amount is already built into the system; BUT,

·         A far greater amount than that is not yet built into the system; AND,

·         The more we mitigate now, the less of that not-yet-built-in “business as usual” catastrophe will happen (a toy sketch of this arithmetic follows below).
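
To make the rock-on-the-hill arithmetic concrete, here is a toy Python sketch (the emissions rate, warming-per-ton figure, and phase-out schedule are rough illustrative assumptions, not projections), showing how each decade of delay before mitigation begins locks in more of the not-yet-built-in warming:

# Toy model: warming scales with cumulative emissions; every number here is illustrative.
EMISSIONS_PER_DECADE = 400     # GtCO2 per decade under "business as usual" (roughly today's rate)
WARMING_PER_1000_GT  = 0.45    # degrees C per 1000 GtCO2 (the order of magnitude used in carbon-budget arguments)
ALREADY_HAPPENED     = 1.2     # degrees C of warming already observed (approximate)

def added_warming(decades_until_mitigation, decades_to_phase_out=3, horizon=10):
    """Emit at full speed until mitigation starts, then ramp linearly down to zero."""
    total_emissions = 0.0
    for decade in range(horizon):
        if decade < decades_until_mitigation:
            fraction = 1.0
        else:
            fraction = max(0.0, 1.0 - (decade - decades_until_mitigation) / decades_to_phase_out)
        total_emissions += fraction * EMISSIONS_PER_DECADE
    return total_emissions * WARMING_PER_1000_GT / 1000

for delay in (0, 2, 4):   # start pushing back against the rock now, in 20 years, in 40 years
    total = ALREADY_HAPPENED + added_warming(delay)
    print(f"mitigation starts in {delay} decades -> about {total:.1f} C total warming in this toy model")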

The details of the sequence of the “business as usual” catastrophe are still far from completely clear.  The best analysis I can find is a 2007 book called “Six Degrees.”  I hope to write about that book at some point, but the main point to bear in mind is that the sequence of events still seems to be following the book’s horrifying projections, although each step it lays out may require more than 1 degree C of warming over the long term. 

What about how we are doing?  What constitutes facing the facts about our efforts to mitigate?

Right now, as I have argued in previous blog posts, CO2 readings at Mauna Loa tell us that all our previous efforts, if they have had an impact on carbon emissions, have had an insignificant one.  I ascribe part of this to a well-known IT law:  the actual implementation of a new technology or approach is far slower than what we perceive superficially from the outside.  Even with the best will in the world, the details of implementation slow us down drastically.  The other reason, of course, is the extensive denial and psychic numbness out there that lead to pushback and lack of implementation.

The other important point about our efforts to mitigate is that they are hindered by our institutions and by our attitudes towards them.  History shows that looking for a purely market-based solution is not only far from optimal but a fantasy about a “free market” that never existed.  Governments and global society are hindered by past assumptions – especially in the legal system – about what a democratic government can do to face climate change.  A “face the facts” view of what is going on says that institutional efforts to combat climate change have an orders-of-magnitude greater impact on mitigation than individual efforts, and that these institutional efforts have barely begun. 

What hope are we left with?  This one:  that eventually our best institutional efforts will kick into overdrive and actually mitigate climate change significantly. 

Solar Vs. Fossil: One Step Forward, Two Half-Steps Back


I find that one way to summarize this view to myself is to put it in terms of the computer industry’s Agile Manifesto, where saying one thing should be put before another is saying not that the latter should not be done, but rather that I value the former more highly:

·         Realistic facing of present and future facts of climate change catastrophe before blind hope.

·         Institutional change before individual change.

·         Mitigation before adaptation.

·         Agility before flexibility (I’m not sure whether this should be included, but it would be a good way to improve our institutions to fight against climate change better).

How to end this?  Well, there’s always T.S. Eliot’s “Ash Wednesday” on psychic numbness:

“Because I do not hope to turn
Because I do not hope …”

First, face the facts of turning.  Then, understand the small hope in those facts.  Then you can hope to turn.