Saturday, February 24, 2018

Reading New Thoughts: Struzik’s Firestorm and How Climate-Change-Driven Wildfires Affect Us All

Disclaimer: I am now retired, and am therefore no longer an expert on anything. This blog post presents only my opinions, and anything in it should not be relied on.

Note: My focus in these “reading new thoughts” posts is on new ways of thinking about a topic, not on a review of the books themselves.

Edward Struzik’s “Firestorm: How Wildfires Will Shape Our Future” adds, it seems to me, three important points to my understanding of climate change:

1. We are on the verge of an era in which wildfires more massive than we have ever seen produce harmful effects of which we have seen only glimpses: shattering of ecosystems, traveling of mercury pollution around the world, blackening of ice that hastens melting and sea level rise, and of course death and destruction.

2. Understanding of climate change’s reality “on the ground” is no longer limited to scientists and environmentalists; it is now a fundamental reality for firefighters, who must anticipate each season’s worsening challenges.

3. We are near a breaking point in terms of our overall societal response to wildfires, as evidenced by the fact that the majority of US and Canadian forestry-management budgets is now devoted to firefighting rather than to planning, research, and the holistic approaches to forest management that would mitigate the upcoming effects cited in (1).

Upcoming Global Harmful Consequences of Wildfires

One virtue of Struzik’s Firestorm is that it goes into extensive detail about the actual effects of wildfires, on the forests, on neighboring humans and ecosystems, on human-generated toxins such as mercury which past resource extraction has left in the forests, and (via airborne carriage of wildfire byproducts) on geographies as far removed as British Columbia from New York and Alaska from Greenland. He tells us also of efforts to contain these harmful consequences, including pre-emptive “back-burning”, forecasting and planning to fight fires in locations such as Banff, and strengthening building codes and evacuation procedures in places such as Alberta near the oil sands.

The overall picture is of an entire region – the forests of western Canada and the US, certainly echoed in Australia and northern Russia, and probably echoed in areas such as Indonesia – increasingly subjected to wildfires whose massive intensity and destructiveness is hard to express. Two key factors drive this future of massive wildfires: the legacy of forest management that for a century did not burn these forests and thus increased the power and ecosystem destruction of these burnings, and climate change that is bringing increasing drought, greater energy for the wildfires, and new invasive species that combine with wildfires to exacerbate the resulting damage.

What harmful effects should we really be concerned about above all? As I understand it, deaths from being trapped in a wildfire, horrible as they are, are the least damaging of these. The following seem of greater import:

· Death from ingesting or breathing the byproducts of wildfires, at a distance from the fire itself. Struzik cites fires in France in the early 2000s whose byproducts contributed to the deaths of thousands of Parisians. Upcoming wildfires are likely to produce more intense and therefore more deadly byproducts, and to affect regions far more distant than those within France.

· The destruction of northern ecosystems (e.g., trees, caribou, polar bears) and their replacement by impoverished ecosystems prone to erosion and collapse into tundra-like landscapes. In other words, wildfires not only decimate existing northern species but replace them with ecosystems that are far less functional (and therefore less arable) than the temperate ecosystems we have now. To put it bluntly, if humanity looks to survive in the future on the bounty of Canada and Siberia, wildfires are going to make that far more difficult.

· There is a strong danger of increased carriage of black soot (black carbon) to areas of existing land and sea ice in the Arctic (apparently, not in the Antarctic). This may well speed up Greenland land ice melt and Arctic sea ice seasonal melting significantly, thus turbocharging that part of sea level rise. So far, this seems less of a factor, but with the increasing power of wildfires, all bets are off.

People Start Seeing Climate Change In Their Jobs

To me, one of the striking things in Struzik’s book is the extent to which western firefighters are having their noses rubbed into the fact of climate change. Granted, this awareness is centered in those firefighting coordinators who must plan for each season’s likely wildfires. However, Struzik suggests that any experienced wildfire fighter recognizes the differences from 20-30 years ago – and certainly some awareness should be rubbing off on newbies.

To me, this puts the debate about climate change on a whole different level. Generally, firefighters are part and parcel of communities; they can’t be written off as “outside” environmentalists and scientists. And climate change is not something they can face or not face as part of being a well-rounded person outside of their jobs – handling climate change is now an integral part of their jobs. At the very least, this ought to change somewhat the conversation from caricatures of “us vs. them” or “effete soft-hearted eggheads” vs “hard-headed real-world types.”

The Wildfire Breaking Point

If there is a sense of urgency in Struzik’s Firestorm, it lies primarily in his worries about our responses to the increasing threat over the last 30 or so years. He documents how very recent fires, such as the one near Fort McMurray, came very close to being far, far worse in terms of lives lost and destruction of valuable property. He notes that there has been a massive increase in our knowledge of how to manage wildfires: how to balance destruction with ecosystem repair, minimize long-term impact on human and plant/animal environments, and craft long-term solutions to the increasing pressure of humans on forest environments. Yet, he suggests, this knowledge has been far from widely applied in the field. Instead, asserts Struzik, lack of government and other funding means that, more and more, long-term strategy is coming in a poor second to simply managing to contain the next season’s fires.

Inevitably, then, unless things change, the system will reach a point where each season, the costs of wildfires will mount catastrophically, because not only do budgets not cover all the firefighting needed but the accumulated “debt” of things undone in previous seasons will add to the destruction. In other words, to get back to anything approximating today’s halcyon days will require far more planning, back-burning, and ecosystem repair than is required now – if it can be done at all.

The answer, I think, is that, like Struzik, we need to see our efforts with regard to wildfires as an integral and inevitable part of our climate-change spending. There is far less argument about adaptation than mitigation, and, unfortunately, probably far more spending on adaptation than mitigation. Wildfire strategy is primarily an adaptation strategy – it affects carbon pollution, but much less than fossil-fuel combustion. Therefore, there should be much less resistance to this type of approach and spending. One hopes.

Wednesday, February 21, 2018

Climate Change 2018: That Was The Year That Wasn't

Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
We begin our experience of climate change in 2018 with the legacy of 2017, a year that was in many ways the worst so far.  It began with a new US President committed to reversing the minor gains against carbon emissions that the “lead dog” US had already achieved, and with unprecedented off-season Arctic sea ice melting.  It ended with massive out-of-season climate-change-driven wildfires in California, four hurricanes together packing unprecedented force and causing thousands of deaths (Puerto Rico) and close-to-unprecedented physical damage (in dollars), apparent increases in US carbon emissions after 2 years of declines, and unprecedented Arctic warmth in December.  And those are just the lowlights.
In the year since I retired, I have had the chance to read extensively if capriciously in climate change literature, and I hope to share some of those books’ insights with readers in later posts.  Here, I want to briefly note some of the key initial climate change trends of 2018:
·         Atmospheric CO2 continues its relatively rapid pace of increase

·         Arctic sea ice is at a historic low for this time of year, and global sea ice at an all-time low

·         Solar energy cost gains are counteracted by inadequate country emissions pledges and US backsliding

CO2 Increases:  The Broken Record

The important thing to remember about atmospheric CO2 measurements is that they tell us how we are really doing.  You will see all sorts of encouraging (and discouraging) developments that should affect carbon emissions over the course of the year, especially the ones that claim to measure whether global emissions are up or down.  However, global emission measures are flawed by self-reporting and incomplete data, which may increasingly underestimate the emissions.  Atmospheric CO2, measured since 1958 at Mauna Loa in Hawaii, provides not only a measure of overall emissions but also a reality check as to whether our efforts at curbing human and human-related emissions are bearing fruit.
In February 2018, as it seems I have said many times before – so many times that I sound like a broken record – atmospheric CO2 continues to increase at an unprecedented pace, all things considered.  Initial indications are that 2017 CO2 increased by 2.11 ppm, less than the 3 ppm of each of the previous two years.  However, this is a drop of about 0.9 ppm following two El Nino years, while after the only comparable El Nino year in the past, 1998, the increase dropped by about 2 ppm the next year.  Meanwhile, with February ¾ done, the increase for this month appears to be about 2.4 ppm.
The result is that atmospheric CO2 is now almost certainly about 408 ppm, up 8-9 ppm since 2015.  While this is less than I feared 1 ½ years ago, it still suggests that we will reach 410 ppm some time around the end of this year and 420 ppm around 2022 – and we have already seen the drastic effects of breaching 400 ppm.
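As a back-of-envelope check on those milestones, here is a sketch of the extrapolation; the 2.4-3.0 ppm/year growth rates are assumptions drawn from the figures above, and real yearly growth varies with El Nino and other factors:

```python
# Rough extrapolation of atmospheric CO2 milestones from ~408 ppm in
# early 2018, assuming a constant yearly growth rate.
def years_until(target_ppm, current_ppm, growth_ppm_per_year):
    """Years until a CO2 milestone at a constant growth rate."""
    return (target_ppm - current_ppm) / growth_ppm_per_year

# 410 ppm is essentially imminent at the current pace.
print(years_until(410, 408, 2.4))          # ~0.83 years: around end of 2018
# 420 ppm arrives around 2022-2023, depending on the assumed rate.
print(2018 + years_until(420, 408, 3.0))   # 2022.0
print(2018 + years_until(420, 408, 2.4))   # 2023.0
```

Even the slower assumed rate puts 420 ppm only about five years out.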

Arctic Sea Ice:  What Does Not Stay in the Arctic

For this, the best I can do is quote Joe Romm and Michael Mann:  “2018 has already set a string of records for lowest Arctic sea ice … [but] what happens in the Arctic doesn’t usually stay in the Arctic,”  because this low Arctic sea ice weakens and moves the polar vortex (the wintertime circular winds around the North Pole), driving relatively cold air south, where it impacts both North America as far south as Florida and northern Eurasia.  So what we are seeing is both extreme cold from this disruption, and extreme warmth when the disruption is not operating (as now, when I am seeing temperatures almost 40 degrees F above normal near Boston).
This is part of a year-round disruption of once-normal Arctic wind patterns, leading to an acceleration of the “slowing down of ocean currents, … weather extremes like droughts, wildfires, floods, and superstorms …  [and] faster melting of the land-based Greenland ice sheet, which in turn drives the speed up in sea level rise that scientists reported last week.”
Nor should we be complacent about Antarctic ice melting.  Antarctic land ice melt is the key to truly large world sea level rise, and melting of the Antarctic sea ice that plugs the glaciers conveying land ice to the sea is therefore a prerequisite for it.  The fact that global (Arctic plus Antarctic) sea ice has reached a record low in the last few weeks indicates that Antarctic sea ice is also at a low point, and last year’s Antarctic sea ice data backs that up.

Solar Vs. Fossil: One Step Forward, Two Half-Steps Back

There is no doubt in my mind that the major encouraging news of the past year has been the driving down of the cost of solar-power generation and installation, to a point well below that of oil, natural gas, and coal.  Moreover, increasingly, despite the lack of adequate solar-battery technology to guarantee no-blackout solar plus wind, the increased production of solar batteries and their lowered cost does make regional almost-no-blackout solar-plus-wind cost-effective for the majority of power in most world regions.  These technological improvements should continue unabated in 2018, and they are now empowered by NGOs, some governments, and entrepreneurs to a surprising extent.
However, a new UN publication assesses the emissions pledges of governments at or since the 2016 Paris conference, and finds that if these pledges are fulfilled, 2030 fossil-fuel emissions will be up compared with 1990, and 2050 fossil-fuel emissions will be up compared with 2030.   Combined with projected rising population until about 2050, which leads to rising non-fossil-fuel emissions (e.g., methane from cows, deforestation), this pattern of pledges may lock countries more firmly into efforts that are inadequate for a 2 degrees Centigrade goal.  Therefore, like Alice in Through the Looking-Glass, we are failing to run fast enough to stay where we are, and have effectively taken a half-step back.
Another half-step, I believe, comes from the extensive efforts of the Trump administration to undo Obama-era (and earlier) regulations, incentives, enforcement, and measurement related to climate change.  Over the past year, for example, enforcement actions have apparently dropped 44%, solar incentives are rapidly moving from positive to negative, regulations on things like LED lightbulbs and Energy Star labelling are being undercut, and satellites key to measuring things like Arctic sea ice are threatened or hobbled by underfunding, while communication of the data suffers from wholesale removal of climate change considerations.  No wonder the US appears to have seen a rise in emissions in 2017, compared to declines in the previous two years.  And this Trump-administration effort continues to grow in scope in 2018.

Conclusion:  That Was the Year That Wasn’t

Way back when (1962-1963), a TV show took a satirical look at the news of the week under the title “That Was the Week That Was.”  Taking a similarly cynical look at 2017 and our efforts to deal with climate change, it seems to me that that was the Year That Wasn’t – in net terms, not a real break from the “business as usual” of 2010 and before.  By contrast, 2016 saw a major shift in reporting on climate change, in some people’s and governments’ attitudes, and at least somewhat in emissions themselves. 
Will 2018 be another Year That Wasn’t?  Too early to tell.  But we couldn’t afford 2017.  And we can afford another year like it even less.

Sunday, November 12, 2017

How the Rich Can Pay Only 3 % of Their Actual Income in Taxes – Without Tricks

Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
No, it’s not from hiding the money overseas, or tax-saving tricks that skirt the outer edges of US legality.  It’s from the peculiarities of the way the tax code handles capital gains taxes.  And a recent change to the estate laws to allow a “step-up in basis” is a big part of it.  Can you, the average reader, take full advantage of it?  Only if (a) much of your net worth is in stocks (preferably low-expense index funds), and (b) 9% times your total stock value is much greater than your yearly expenses.

What follows is an explanation using a “typical case” approximating the real-world experience of someone I know.

The Idle Rich

Let us suppose that you have a net worth of $10 million, entirely invested in a Vanguard or Fidelity S & P 500 index fund (expense ratio 0.1 %), with little or no work income and expenses of about $250,000 per year.  You reinvest dividends immediately back into the index fund.  Studies suggest that over the long term such an investment increases by 10.85 % per year.  Subtracting the expense ratio, you may expect actual income from this investment of about 10.75 % per year – $1,075,000 in the first year.  Presto, you are a millionaire this year, without working a day for it.

Now you need to pay income taxes – and here’s where things get complicated.   The money comes to you in two forms:  (1) dividends, which you reinvest, and (2) “capital gains” (all the rest).  My comparison of the S & P 500 total return index (which includes dividend reinvestment) to the S & P 500 index we all see on the financial pages suggests that in the long run, dividends increase your investment by 2.25 % per year ($225,000), leaving 8.5 % ($850,000) from capital gains.  The tax code handles these two cases differently.
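For concreteness, the first-year split can be sketched in a few lines of Python, using only the rates stated above (10.85 % long-term total return, 0.1 % expense ratio, 2.25 % from dividends); this just reproduces the post’s arithmetic:

```python
# First-year income split for a $10M S&P 500 index holding, using the
# post's assumed rates: 10.85% total return, 0.1% expense ratio, and a
# long-run 2.25% dividend contribution, with the rest from appreciation.
net_worth = 10_000_000
total_return = 0.1085 - 0.001                  # 10.75% after expenses
dividend_yield = 0.0225                        # long-run dividend share
cap_gain_rate = total_return - dividend_yield  # 8.5% appreciation

income = net_worth * total_return
dividends = net_worth * dividend_yield
appreciation = net_worth * cap_gain_rate
print(round(income), round(dividends), round(appreciation))
# 1075000 225000 850000
```

The $225,000 and $850,000 pieces are what the dividend and capital-gains tax rules below apply to, respectively.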


Your Federal tax bite from dividends is pretty straightforward:  15 % (let’s assume you live in a state without dividend and capital gains taxes of its own; note also that at about $25 million in investment wealth the rate kicks up to 20%).  So the dividend portion of your income is taxed at this 15 % rate, for a bill of about 2.25 % of your actual income (about $27,000).

Capital Gains

Capital gains are defined as the amount your stocks in that index fund have appreciated since you first bought them.  To maximize the yearly gain in net worth (for reasons too complicated to explain here), you are going to pay your expenses of $250,000 by selling stocks from your index fund. 

Let’s assume, for the sake of simplicity, that all your stocks are effectively just over 1 year old – as in the case where this is a new inheritance (explained later).  You typically have three choices of what to tell your index manager to sell: “first in, first out [FIFO]”, “average cost [basis]”, or “modified last in, first out [LIFO]” (you won’t necessarily see FIFO and LIFO presented that way).  FIFO, which really means “first bought, first sold”, is a no-brainer to avoid:  it means the oldest stocks, with the biggest capital gains, get sold first, so you typically don’t want to do that.  “Average cost” is, as the name suggests, halfway between worst and best in most cases.  LIFO means “last bought, first sold”, and modified LIFO means that when you sell from your portfolio of stocks or index fund, you actively arrange not to sell stocks bought less than a year ago – because otherwise your capital gains tax just about doubles.  In our simple case, all three selling approaches get the same result.

Your capital gain on your $250,000 of yearly expenses is about 9 % (0.1075/1.1075).  A Federal tax rate of 15 % therefore leaves you taxed about an additional 0.14%, on top of the 2.25 % from dividend income.

Finally, index funds steadily buy and sell stocks throughout the year on their own hook, to reflect stocks entering and leaving the index.  In the case of the S & P 500, a typical year might see 10 stocks “turn over” like this – usually the least-capitalized ones – for a rough guesstimate of 1 % of the index’s total value.  That’s a straightforward additional 0.15 %.


Your total tax bite on your actual total income, therefore, is about 2.55 % of $1,075,000, or about $27,400.  However, this now flows into gross income, which is then “adjusted” for various tax breaks and converted to “taxable income” via the usual standard or itemized deductions.  The typical rich household (married filing jointly) has at least $12,700 from these sources – so the net is at or below $14,700 (more like 1.45 % of actual income!).  Over the long term, this will creep up as the average stock gets “older”, but I will anticipate that discussion below and state that it effectively stays below 5 % for quite a long time.

[Note:  Here I don’t discuss the recently-added Net Investment Income Tax of 3.8 %, which appears to apply only to much richer individuals]

Thomas Piketty, in his book Capital in the Twenty-First Century, points out that the top 0.1 % of the US income distribution (which roughly corresponds, I believe, to $10 million and up in net worth) has the bulk of its net worth not in land or other assets but in investments, mainly stocks.  I conclude, therefore, that we may expect the savvy rich person to pay perhaps 4 % of his or her actual income – not the income declared in tax returns, but with the additional income from untaxed capital gains added back in – in income taxes this year.

In the Long Run, The Rich Are Dead, But They Still Don’t Pay Much Income Tax

There are two caveats that do apply to my scenario, which indeed drive the income tax rate of the rich higher.  However, on closer examination, they don’t really increase the tax rates of the rich that much.  These two caveats are:
1.       The rich don’t really behave like that; and
2.       In the long run, capital gains should approach 100 % of the stock’s value.

The Rich Don’t Really Behave Like That

The first objection to my scenario under this heading is that the rich don’t invest the way I’ve described:  they invest more in (corporate and state) bonds.  One might also mention property, but Piketty notes that this is typically less than 10 % of the holdings of the rich, and particularly of the very rich.

The answer to this objection is that bonds simply don’t provide anywhere near the long-term return of stocks (more like 1-3 % after inflation), so that a 60-40 stock/bond split works out in practice to more like an 83-17 income split, and an 80-20 stock/bond split works out to more like a 92-8 income split, for a likely maximum of 1.6 % in additional taxes (on much less income!).  Moreover, most investors tend to prefer “tax-free” bonds, which pay no Federal tax at all – if the investor does this exclusively, the overall tax rate actually decreases, so a better guess of the effect is more like a 0.5 % tax rate increase.
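A quick sketch of that income-split arithmetic; the 3 % after-inflation bond return is my assumption, taken from the top of the 1-3 % range above:

```python
# Share of total income contributed by stocks for a given stock/bond
# allocation, assuming stocks return ~10.75% and bonds ~3%.
def income_split(stock_frac, stock_ret=0.1075, bond_ret=0.03):
    """Fraction of portfolio income that comes from the stock side."""
    stock_income = stock_frac * stock_ret
    bond_income = (1 - stock_frac) * bond_ret
    return stock_income / (stock_income + bond_income)

print(round(income_split(0.60) * 100))  # 84: roughly the 83-17 split
print(round(income_split(0.80) * 100))  # 93: roughly the 92-8 split
```

Even a substantial bond allocation leaves the stock side dominating income, which is why the extra tax from bonds stays small.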

Next up is the idea that the rich typically don’t instruct their funds to do modified LIFO (“last bought, first sold”), for other reasons that may or may not be valid (e.g., complicating tax preparation, tax consequences when the market is going steadily down).  In fact, the real-world case I draw on uses “average cost basis” for precisely those reasons. 

When most stocks in the portfolio of the rich person are pretty new, say, in their first two years of ownership (the “short run”), the answer to this objection is that the tax effect of average costing is pretty minimal:  about 1.6 average years of total increase works out to about a 17 % average increase in stock value and a 15/85 gains/no gains split.  With adjustments for the fact that some of these stocks would be sold anyway by the index fund, it works out to about a 40 % increase in capital gains taxes, from about a 0.30 % tax bite to a 0.42 % one – significant, but still leaving us well below 3 %.  Over the long run, this case involves capital gains approaching 100 % of the stock’s value, so I’ll discuss that part of the answer to this particular objection below.

The third objection to my scenario is that most rich people spend more per year on expenses than $250,000.  This is true; and yet the key figure here is actually the ratio of expenses to income.  As long as the ratio of expenses to income (about 0.2) in my scenario is the same as that of the average rich person, our analysis doesn’t alter in the slightest.

And the evidence appears to be, if anything, that as you consider richer and richer people, the ratio of expenses to income goes down.  By the time you reach $100 million, it would take buying a $10 million house every five years to approximate an 80-20 split.  By the time you reach $1 billion, nothing short of a $50 million political investment every two years would do.  And as that ratio dips, the percentage of income that must be paid in capital gains taxes goes down with it.  Assuming, for example, a $100 million fortune and $1.25 million per year in expenses, we are talking about capital gains taxes cut in half compared to our scenario.  Of course, at that point the deductions have much less effect, but the net effect is still to cut our “real-world” tax bite to well below 2 %.

In the Long Run, Capital Gains Should Approach the Stock’s Value

It would seem reasonable, given my scenario, that as the average stock in the rich person’s portfolio far surpasses its original value, the average ratio of original value to value now would approach zero.  Certainly, I have heard of cases in estates where “generation pass-through trust” stocks when sold had a cost basis of 1 or 2 % of the total, and therefore 98 or 99 % of the value of the stock was taxable as capital gains.

Let’s be more concrete.  To a first approximation, the person newly coming into a $10 million fortune in investment stocks is in his or her mid-40s, and has maybe 30 years to live.  What does the capital gains situation look like in the tax returns for his or her 75th and last year, and what therefore is the tax rate?

Under average costing, if we assume no stocks have been sold in the meantime, the average stock has been in the portfolio for 30 years and has grown to about 13.3 times its original value (1.09 to the 30th power), an average gain of about 1227 %.  So 93 % of the portfolio would seem now to be capital gains. 
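That compounding can be checked directly; this is just the arithmetic above, with the 9 % yearly growth rate taken from the scenario:

```python
# After 30 years of 9% growth, an untouched share is worth ~13.3x its
# cost, so nearly all of its value is unrealized capital gain.
growth = 1.09 ** 30
print(round(growth, 2))                 # 13.27
gains_fraction = 1 - 1 / growth         # share of value that is gain
print(round(gains_fraction * 100, 1))   # 92.5 -> the "93%" above
```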

But, in fact, we know that stocks have been sold in the meantime, for expenses and (by the index fund manager) to keep up with the underlying index – about 21-22 % of income, meaning about 3.5 % of total stocks, year after year.  So, after 30 years, every stock in the portfolio has been sold an average of 1.17 times, and the actual average age is more like 12.8 years, with the value roughly tripling, so the actual percentage of the portfolio which is capital gains is now 75 %.  In turn, that means that the tax bite for capital gains is about 11 % of total income, and therefore the overall total tax rate has now climbed to 13.25 %.  

But wait, there’s more.  Our rich person is paying this rate in year 30.  Remember, his or her underlying goal is not to minimize tax rates at one period in time, but overall taxes throughout the 30-year time period.  And that means that we should consider the fact that in the 30th year, he is paying taxes on his capital gains an average of 15 years late.  To put it in terms of purchasing power, if we assume inflation of 1.5 % per year over that time period (typical of the last 10 years), he or she may be paying 11% of capital gains in tomorrow’s dollars, but that’s the same thing as paying less than 9 % in today’s dollars.  If we assume a more historical 3 % rate of inflation, it’s more like 6.5 % (although, for the rich person, the higher inflation is, the less the “real” income both before and after taxes).  And thus, the “real” overall tax rate is back down to about 8.75 %!    
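The deflation arithmetic can be sketched directly; the 15-year average delay and the two inflation rates are the assumptions stated above:

```python
# Deflate a tax rate paid `years_deferred` years after the income was
# earned, to express it in today's purchasing power.
def real_rate(nominal_rate, inflation, years_deferred=15):
    """Nominal rate divided by cumulative inflation over the deferral."""
    return nominal_rate / (1 + inflation) ** years_deferred

# An 11%-of-income capital gains bite, paid ~15 years late on average:
print(round(real_rate(0.11, 0.015) * 100, 1))  # 8.8 -> "less than 9%"
print(round(real_rate(0.11, 0.03) * 100, 1))   # 7.1 (the post rounds lower)
```

At the historical 3 % inflation rate this gives about 7.1 %, slightly above the post’s “more like 6.5 %,” but the direction and magnitude of the effect are the same.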

What About When You Die?

Now, it would seem that when the rich person in my scenario dies, the chickens come home to roost – or, to put it another way, most of the capital gains of the rich person are finally paid out in taxes, one way or another.  After 30 years, the final estate has grown to around $133 million (1.09 to the 30th power).  Where a husband and wife are involved, 40 % of $122 million of that ($48.8 million) would need to be sold to pay those taxes, plus enough to cover the 11 % capital gains tax rate on the sold stocks (about $5 million) and miscellaneous fees ($0.5 to 1 million).  So we have net taxes of about $54.5 million, or 41 % of the estate.  Add to this the previous 30 years of tax rates averaging about 3 % on stock income averaging about $6 million (0.03 times 30 times 6, or $5.4 million), and the total tax on capital gains would seem to be more like 45 %.
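A sketch of that estate arithmetic; the roughly $11 million couple’s exemption is my assumption, implied by the step from $133 million down to $122 million taxable, and the fee figure is the midpoint of the range above:

```python
# Estate-tax arithmetic for the $10M fortune after 30 years at 9%,
# assuming ~$11M of a couple's estate is exempt from the 40% estate tax.
estate = 10_000_000 * 1.09 ** 30   # ~$133M final estate
exempt = 11_000_000                # assumed couple's exemption
taxable = estate - exempt          # ~$122M
estate_tax = 0.40 * taxable        # ~$48.7M
cap_gains_tax = 0.11 * estate_tax  # ~$5.4M on stocks sold to pay the tax
fees = 750_000                     # midpoint of the $0.5-1M fee range
total = estate_tax + cap_gains_tax + fees
print(round(total / 1e6, 1))       # 54.8 -> the "~$54.5 million" above
print(round(total / estate * 100)) # 41 (% of the estate)
```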

Except for two things:  Factor 1, the “step-up in basis”, and Factor 2, the effects of inflation.

Factor 1:  Because of the recent law allowing stocks to be reset to show zero capital gains at the point of our rich person’s death, their heir(s) no longer need to pay that extra $5 million in taxes to cover capital gains taxes from selling stocks.  So now we’re back down to about 41 %. 

Factor 2:  We are paying that $55 million in tomorrow’s dollars – dollars 30 years on, to be precise.  Assuming a historical inflation of 3 % per year, those dollars are worth 41 cents in today’s terms.  And that, in turn, means that we are really down to an annual capital gains rate on actual income of 16.9 %.

In other words, after these two considerations, the rich person has really paid the government only 12 % more than the Federal capital gains tax rate.  And the clock is now reset to our original scenario, as the heirs pay less than 3 % of actual income on their next income tax bill on the remaining stocks. 

Contrast this, by the way, to the schmoes who earn between the median wage of about $60,000 and $200,000 per year.  Yes, they don’t have estate taxes, but indications are that they pay more than 20 % of real total income in income taxes – and then, if they save a fair amount for their old age, get taxed another few percentage points on their savings from any stock investments they have, which typically yield much less than 9 % per year anyway!

Advice and Horror

To close out, let’s return to the question of whether the average reader can approximate this.  As I see it, to pull this off a reader should do three things:
1.       Start off with sufficient net worth in investments in diversified and low-expense-ratio stocks (at a guesstimate, a minimum of $3-5 million) so that your yearly expenses can be about 2.5 % of that net worth.
2.       If you want to milk the last drop of profit from this scheme, arrange it, if possible, so that modified LIFO is your strategy for all stocks sold.
3.       Make sure that between 1 % and 4 % of your stocks are “churned” – sold and bought – each year while preserving diversification and a low expense ratio.  An index fund like a good S & P 500 one plus paying most of your expenses out of stocks are excellent ways to accomplish this.
At this point, I should remind the reader that the most important thing for him or her is not minimizing taxes, but maximizing net worth after taxes.  Things like low-cost index funds and reinvesting dividends are valuable because they help maximize net worth before taxes, and hence (because they have few or no tax effects per se) net worth after taxes.  As I said in my unpublished book on personal finance, the strategy that aims to maximize income is almost always better than the alternative strategy that tries to minimize taxes.

And finally, I just want to express my personal feeling of horror at the implications of this analysis.  It should take no reminding for readers to contrast this situation with income from wage work, which can be taxed at a typical rate of 5 to 10 times the rate for investments when your wage is between the median of $60,000 a year and, say, $200,000 a year.   And for what?  The rich don’t work harder on average, they spend much less of their income to contribute to an economy, thereby lowering the income of the rest of us, and if they’re top-level managers they also excessively squeeze wage income for the rest of us, as Piketty documents extensively. And in the long run, as Piketty also notes, by using some of their spare money to change politically the rules of the economic game in their favor, they raise the likelihood of serious recessions and depressions that harm all of us.

And there’s another apparent effect that really bothers me.  In my scenario, the government sees most capital gains income for the first time, not when the income is gained, but in the reporting for the estate, 30 years from now.  That means that, on average, in real terms, we see the real investment income of the rich 15 years after it occurs, when it has perhaps two-thirds of its actual value.  If this is so, we are seriously, seriously underestimating today’s income of the rich, the degree of income and wealth inequality in this society, and the degree to which this tax code is causing that inequality.

Caveat homo medianus!  Let the average person beware! 

Saturday, December 31, 2016

A Short Look Back at 2016

I have found very little in the last month and a half to add to previous posts.  CO2 continues its alarming rise, to what will in all likelihood be more than 406 ppm (yearly average) by end of year, and while global temperatures have ended their string of monthly records, we are still on course for a 1.2 degree C rise since 1850, about 0.3 degrees of it in the last 2 ½ years.  The big news in the Arctic (and Antarctic) is an unprecedented low in sea ice extent/area, plus record high temps in the Arctic in December – but that continues a smaller trend evident in most of the first half of 2016. 

Meanwhile, on the computing side, relatively little real-world innovation happened this year.  In-memory computing continued its steady rise in both performance and applicability, with smaller companies taking more of a lead as compared to 2015.  While blockchain technology and quantum computing made a big news splash early in the year, careful reading of “use cases” shows that real-world implementations of blockchain are thin on the ground or non-existent, as most companies try to figure out how best to make it work, while quantum computing is clearly far from real-world usefulness as of yet.

The big news in both areas, alas, is therefore the election of Donald Trump as President.  In the climate change area, as I predicted, he is proving to be an absolute disaster, with nominees for at least six posts that are climate change deniers with every incentive to make the American government a hindrance rather than a help in efforts to change “business as usual.” 

In the computing area, we see the spectacle of some large computing firms offering their services in public to Trump, an unprecedented move based on the calculation that while being seen as cooperative may not bring any benefits, failure to act in this way may cause serious problems for the firm.  Thus, we see Silicon Valley execs whose workforces are not at all enthused about Trump acting in meetings with him as if he offers new business opportunities, and IBM’s CEO announcing ways in which IBM technology can aid in achieving his presumed goals. 

It may seem odd to give such prominence to the personality of the President in assessing either climate change or the computing industry.  The fact is, however, that all of the moves I have cited are unprecedented, and derive from Trump’s personality.  To fail to consider this in assessing the likely long-run effects of the “new abnormal” in both the sustainability field and the computing industry is, imho, a failure to be an effective computer industry analyst.  And while no one likes a perpetually downbeat analyst, one that continually predicts rosy outcomes in this type of situation is simply not worth listening to.

I look back on 2016, and I see little that is permanent to celebrate – although the willingness of the media to begin to report on and accept climate change is, however temporary, worth noting.  I wish I could say that there is hope for better things in 2017; but as far as I can see, there isn’t.

Tuesday, November 15, 2016

Climate Change: The News Is Sadder Than You Think

Hopefully, any readers are aware of the likely climate-change implications of the election of Donald Trump to the US Presidency.  These include disengagement from international efforts to slow climate change, making government-driven change much more difficult; deregulation of “carbon pollution”, with predictable effects on the ability of wind and solar to replace rather than supplement oil; and effective barriers to any use of incentives (“carbon tax” or “carbon market”) to drive carbon-emissions reduction.  The effect on climate-change efforts is that China (hopefully) and Europe will have to drive them; which is a bit like trying to pedal a tricycle with one wheel gone.

And that doesn’t even include the likely effects of the new administration’s cuts in funding to NOAA, on whose metrics much of the world depends.  We have seen this before, in miniature, during the HW Bush years. 

But there is more sad news – much more – some of which I have learned very recently.  It concerns – well, let’s just go into the details.

Sea-Level Rise By Century’s End:  3 Feet and Rising

One recent factoid (published in a source I no longer recall) is that in the period from late 2014 to early 2016, oceanic water levels rose by 15 mm, i.e., at an annual rate of 10 mm – up from 3 mm per year before that.  Very little of that rise is due to el Nino, which began around Dec.-Jan. of this year and ended around April-May.  Instead, there is a clear connection to the sharp rise in global temperature that began in 2014. 

So let’s put this number (10 mm/year) in perspective.  If we guesstimate the first 14 years of the 21st century at 3 mm/year, then 42 mm are already banked.  To get to 914 mm (3 feet) will therefore require roughly another 87 years.  There are 85 years from 2015 to 2100.  So at today’s rates, we will reach 3 feet of sea level rise by some time in the year 2102.
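The arithmetic above can be sketched quickly.  This is a back-of-the-envelope calculation using the rates assumed in this post (3 mm/year through 2014, 10 mm/year thereafter), not a climate model:

```python
# Back-of-the-envelope sea-level projection, using this post's assumptions:
# ~3 mm/yr of rise over 2001-2014, then a constant 10 mm/yr thereafter.
MM_PER_FOOT = 304.8

def years_to_reach(target_mm, banked_mm=3 * 14, rate_mm_per_yr=10):
    """Years after 2015 needed to reach target_mm of total rise since 2000."""
    return (target_mm - banked_mm) / rate_mm_per_yr

target = 3 * MM_PER_FOOT        # 3 feet is about 914 mm
years = years_to_reach(target)  # about 87 years, i.e., shortly after 2100
```

Run as-is, this puts 3 feet of rise at roughly the year 2102, assuming today’s 10 mm/year rate simply continues rather than accelerating further.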

In other words, 3 feet of sea level rise by 2100 – the prediction widely disseminated as late as a year ago – is already “baked in”.  And what reason do we have for thinking that matters will stop here?  There is no reason to expect global temperatures to decrease over the medium term, and good reason (atmospheric carbon increases) to expect them to continue to increase.  So we are talking sea level rise of over a foot by 2050, at the very least, and we are wondering if the new “more realistic” estimate of 6-9 feet of sea level rise may be too optimistic.

The Faster Rise of Atmospheric CO2 Isn’t Going Away – and Other Greenhouse Gases Are Following

Let’s start with the CO2 rise that I have been following since early this year.  The “baked in” (yearly average) amount of CO2 has reached, effectively, 405 ppm as of October’s results, about 1 ½ years after it passed 400 ppm permanently.  More alarmingly, the surge caused at least partly by el Nino is not going away.  I have been following CO2 measurements from Mauna Loa for about 5 years, and always before this year the 10-year rate of increase has been a little more than 1.0 ppm per year.  The el Nino and its follow-on have added almost 3 ppm in a short period, lifting that 10-year rate to almost 1.1 ppm per year.  And the new rate of increase shows little sign of stopping, so that we can project a rate of 1.2 ppm within the next 5 years -- a 20 percent increase.

Global Warming Appears to Follow – Right Now

A second measurement of greenhouse gases in the atmosphere – this one including all greenhouse gases, e.g., methane – has also been discussed recently (again, I no longer recall the source).  It now stands at over 483 ppm.  Some caution must be used in assessing this figure, since we have no comparable figure for 1850 and the years following.  However, we can make a rough guesstimate, based on the facts that natural methane production would then have been lower than now, man-made methane production almost non-existent, and as of the early 2010s human methane production was approximately equivalent to natural methane production. 

This suggests that at least 2/3 of the gap between 400-405 ppm of carbon and 483 ppm of greenhouse gases as a whole is due to increases in human-caused non-CO2 greenhouse gas emissions.  To put it another way, the human activities that have caused the 125-odd ppm increase in CO2 since 1850 have also caused at least 2/3 of the 80-odd ppm difference between today’s CO2 atmospheric ppm and today’s total greenhouse gas atmospheric ppm.
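The attribution arithmetic runs roughly as follows, using the round numbers above (280 ppm is the standard estimate for pre-industrial CO2):

```python
co2_ppm = 405        # today's CO2 alone (yearly average, per Mauna Loa)
ghg_ppm = 483        # all greenhouse gases, expressed as CO2-equivalent
preindustrial = 280  # CO2 circa 1850

co2_rise = co2_ppm - preindustrial  # about 125 ppm of added CO2
gap = ghg_ppm - co2_ppm             # the "80-odd" ppm non-CO2 gap
human_share = 2 * gap / 3           # at least ~52 ppm of it human-caused
```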

This is consistent with Hansen’s thesis that a doubling of carbon ppm in the atmosphere would lead to 4 degrees C global warming, not 2 degrees – the additional warming comes partly from the aforementioned non-CO2 greenhouse gases, and partly from the additional “stored heat” at the Earth’s surface due to changed albedo as snow melts.  The reason for the difference between the two estimates, afaik, is that no one knew just how fast this additional warming would occur, since there is (and was, 55 million and 250 million years ago) clearly a lag time between large atmospheric carbon rises and increased non-CO2 greenhouse gases and “stored heat”. 

In summary, not only is the rise of atmospheric CO2 speeding up, but the effect appears to have a much shorter lag time than we thought.  And so, there is real reason to fear that the global warming we are afraid of (presently estimated at 1.2-1.3 degrees C above 1850 already) will in the near and intermediate term increase faster than we thought.

The Arctic Sea-Ice Melt Resumes

As late as a month ago, it was possible to argue that the ongoing melting away of Arctic sea ice was still on the “pause” it had been on since 2012.  And then, surprisingly, the usual refreezing occurring during October stopped.  It stopped for several weeks.  As a result, the average Arctic sea ice volume year-round reached a record low, and kept going.  And going.  As of the beginning of November, the average is now far below all previous years.

What caused this sudden stoppage?  Apparently, primarily unprecedented October oceanic and atmospheric heat in the areas where refreeze typically occurs in late October.  This is apparently much the same reason that minimum Arctic volume by some measures reached a new low in late September, despite a melting season with weather that in all previous years had resulted in much less melt than usual.

And what will prevent this from happening next year, and the year after?  Nothing, apparently (note that the 2016 el Nino had little or no effect on Arctic sea ice melt).  It now appears that we are still facing a September “melt-out” by the mid-2030s, at best.  I am happy that the direst predictions (2018 and thereabouts) are almost certainly not going to happen; and yet the scientific consensus has gone from “melt-out at 2100 only if we continue business as usual” to “melt-out around 2035 with much less chance of avoidance” in the last 5 years, and I hope against hope, given everything else that’s happening, that the forecast doesn’t slip again.

The Bottom Line:   Agility, Not Flexibility

I find, at the end, that I need to re-emphasize my concern, not just about the first 4 degrees C of temperature rise, but even more so the next, and the next.  And the next after that, the final rise.

I find that a lot of people discount scientific warnings about loss of food production and so on, reasoning that the global market can, as it has done in the past, adapt to these problems at little cost: by using less water-intensive methods of farming, shifting farming production allocations north (and south) as the temperature increases, and handling disaster costs as usual while doing quick fixes on existing facilities to adapt.  In other words, our system is highly flexible; flexible enough to get us through the next 40-50 years, I think, while providing food for the developed world and probably the large majority of humanity. 

But, as a systems analyst will tell you, such a flexible system tends to make the inevitable crash far worse.  Patching existing processes (in climate change terms, adaptation) rather than fixing the problem at its root (in climate change terms, mitigation) causes over-investment in the present system that makes changing to a new, needed one far more costly – and therefore, far more likely to deep-six the company, or, in this case, the global market.

At a certain point, the average of national markets going south passes the average of global markets still growing, and then the cycle starts running in reverse:  smaller and smaller markets that can be serviced at higher and higher costs, with food scarcer and scarcer.  The only way to avoid total collapse into a system with inefficient production of the 1/10 of the food necessary for the survival of 9 billion people is mitigation; but the cost to do that is 10 times, 100 times what it was.  The result is brutal military dictatorships where the commander is the main rich person, as has happened so often throughout history.  Today’s rich will suffer less, because they make accommodations with the military; but they will on average suffer severely, by famine and disease. 

An agile system (here I am speaking about the ideal, not today’s usually far-from-agile companies) anticipates this eventuality, and moves far more rapidly towards fundamental change.  It can be done.  But, as of now, we are headed in the opposite direction.

Tuesday, October 25, 2016

The Cult of the Algorithm: Not So Fast, Folks

Sometimes I feel like Emily Litella in the old Saturday Night Live skit, huffing and puffing in offense while everyone wonders what I’m talking about.  That’s particularly true in the case of new uses of the word “algorithm.”  I find this in an interview by NPR of Cathy O’Neil, author of “Weapons of Math Destruction”, where part of the conversation is “We have these algorithms … we don’t know what they are under the hood … They don’t say, oh, I wonder why this algorithm is excluding women”.  I find this in the Fall 2016 Sloan Management Review, where one commenter says “developer ‘managers’ provide feedback to the workers in the form of tweaks to their programs or algorithms … the algorithms themselves are sometimes the managers of human workers.” 
As a once-upon-a-time computer scientist, I object.  I not only object, I assert that this is fuzzy thinking that will lead us to ignore the elephant in the living room of problems in the modeling of work/management/etc. to focus on the gnat on the porch of developer creation of software.
But how can I possibly say that a simple misuse of one computer term can have such large effects?  Well, let’s start by understanding (iirc) what an algorithm is.

The Art of the Algorithm

As I was taught it at Cornell back in the ‘70s (and I majored in Theory of Algorithms and Computing), an algorithm is an abstraction of a particular computing task or function that allows us to identify the best (i.e., usually, the fastest) way of carrying out that task/function, on average, in the generality of cases.  The typical example of an algorithm is one for carrying out a “sort”, whether that means sorting numbers from lowest to highest or sorting words alphabetically (theoretically, they are much the same thing), or any other variant.  In order to create an algorithm, one breaks down the sort into unitary abstract computing operations (e.g., add, multiply, compare), assigns costs to each, and then specifies the steps (do this, then do this).  Usually it turns out that one operation costs more than the others, and so sort algorithms can be compared by considering the overall number of compares as a function of the input size n, as n increases from one to infinity.
Now consider a particular algorithm for sorting.  It runs like this:  Suppose I have 100 numbers to sort.  Take the first number in line, compare it to all the others, determine that it is the 23rd lowest.  Do the same for the second, third, … 100th number.  At the end, for any n, I will have an ordered, sorted list of numbers, no matter how jumbled the numbers handed to me are.
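The ranking algorithm just described might look like this in Python (a hypothetical sketch; it assumes all values are distinct, as the description implicitly does):

```python
def rank_sort(nums):
    """The O(n**2) algorithm described above: compare each number
    against every other to find its rank, then place it there."""
    result = [None] * len(nums)
    for x in nums:
        rank = sum(1 for y in nums if y < x)  # ~n comparisons per element
        result[rank] = x                      # x is the (rank+1)-th lowest
    return result
```

Each of the n elements is compared against all n others, which is where the n-squared comparison count in the next paragraph comes from.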
This is a perfectly valid algorithm.  It is also a bad algorithm.  For every 100 numbers, it requires at least 100 squared steps, and for any n, it requires at least n squared steps.  We say that this is an “order of n squared” or O(n**2) algorithm.  But now we know what to look for, so we take a different approach.
Here it is:  We go through the list of 100 numbers from both ends, and we find the maximum of the numbers from the low end and the minimum of the numbers from the high end, stopping when both sets of comparisons are dealing with the same number.  We then partition the list into two buckets, one containing the list up to that number (all of whose items are guaranteed to be less than that number), and one containing the list after that number (all of whose items are guaranteed to be greater than that number).  We repeat the process until we have reached buckets containing one number.  On average, there will be O(logarithm to the base two, or “log”, of n) such splits, and each level of split performs O(n) comparisons.  So the average number of comparisons in this sorting algorithm is O(n times log n), called O(n log n) for short, which is way better than O(n**2) and explains why the algorithm is now called Quicksort.
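A minimal Quicksort sketch follows.  The two-ended scan described above is Hoare’s original in-place partition; for brevity this version simply takes the first element as the pivot and builds the two buckets directly, which preserves the same O(n log n) average behavior:

```python
def quicksort(nums):
    """Partition around a pivot, then recurse on each bucket.
    Average O(n log n) comparisons; worst case (already-sorted input
    with this naive pivot choice) degrades to O(n**2)."""
    if len(nums) <= 1:
        return nums
    pivot, rest = nums[0], nums[1:]
    smaller = [x for x in rest if x < pivot]   # bucket of lesser items
    larger = [x for x in rest if x >= pivot]   # bucket of the rest
    return quicksort(smaller) + [pivot] + quicksort(larger)
```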
Notice one thing about this:  finding a good algorithm for a function or task says absolutely nothing about whether that function or task makes sense in the real world.  What does the heavy lifting of creating a new program that is useful is more along the lines of a “model”, implicit in the mind of the company or person driving development, or additionally explicit in the program software actually carrying out the model.   An algorithm doesn’t say “do this”; it says, “if you want to do this, here’s the fastest way to do it.”

Algorithms and the Real World

So why do algorithms matter in the real world?  After all, any newbie programmer can write a program using the Quicksort algorithm, and there is a huge mass of algorithms available for public study in computer-science journals and the like.  The answer, I believe, lies in copyright and patent law.  Here, again, I know something of the subject, because my father was a professor of copyright law and I held some conversations with him as he grappled with how copyright law should deal with computer software, and also because in the ‘70s I did a little research into the possibility of getting a patent on one of my ideas (it was later partially realized by Thinking Machines).
To understand how copyright and patent law can make algorithms matter, imagine that you are Google, 15 or so years ago.  You have a potential competitive advantage in your programs that embody your search engine, but what you would really like is to turn that temporary competitive advantage into a more permanent one, by patenting some of the code (not to mention copyrighting it to prevent disgruntled employees from using it in their next job).  However, patent law requires that this be a significant innovation.  Moreover, if someone just looks at what the program does and figures out how to mimic it with another search engine set of programs (a process called “reverse engineering”), then that does not violate your patent.
However, suppose you come up with a new algorithm?  In that case, you have a much stronger case for the program embodying that algorithm being a significant innovation (because your program is faster and [usually] therefore can handle many more petabytes or thousands of users), and the job of reverse engineering the program becomes much harder, because the new algorithm is your “secret sauce”. 
That means, if you are Google, that your new algorithm becomes the biggest secret of all, the piece of code you are least likely to share with the outside world – outsiders can’t figure out what is going on.  And all the programs written using the new algorithm likewise become much more “impenetrable”, even to many of the developers writing them.  It’s not just a matter of complexity; it’s a matter of preserving some company’s critical success factor.  Meanwhile, you (Google) are seeing if this new algorithm leads to another new algorithm – and that compounds the advantage and secrecy.
Now, let me pause here to note that I really believe that much of this problem is due to the way patent and copyright law adapted to the advent of software.  In the case of patent law, the assumption used to be that patents were on physical objects, and even if it was the idea that was new, the important thing was that the inventor could offer a physical machine or tool to allow people to use the invention.  However, software is “virtual” or “meta” – it can be used to guide many sorts of machines or tools, in many situations; at its best, it is in fact a sort of “Swiss Army knife”.  Patent law has acted as if each program was physical, and therefore what mattered was the things the program did that hadn’t been done before – whereas if the idea was what mattered, as it does in software, then a new algorithm or new model should be what is patentable, not “the luck of tackling a new case”.
Likewise, in copyright law, matters were set up so that composers, writers, and the companies that used them had a right to be paid for any use of material that was original – it’s plagiarism that matters.  In software, it’s extremely easy to write a piece of a program that is effectively identical to what someone else has written, and that’s a Good Thing.  By granting copyright to programs that just happened to be the first time someone had written code in that particular way, and punishing those who (even if they steal code from their employer) could very easily have written that code on their own, copyright law can fail to focus on truly original, creative work, which typically is associated with new algorithms.
[For those who care, I can give an example from my own experience.  At Computer Corp. of America, I wrote a program that incorporated an afaik new algorithm that let me take a page’s worth of form fields and turn it into a good imitation of a character-at-a-time form update.  Was that patentable?  Probably, and it should have been.  Then I wrote a development tool that allowed users to drive development by user-facing screens, program data, or the functions to be coded in the same general way – “have it your way” programming.  Was that patentable? Probably.  Should it have been?  Probably not:  the basic idea was already out there, I just happened to be the first to do it.]

It's About the Model, Folks

Now let’s take another look at the two examples I cited at the beginning of this post.  In the NPR interview, O’Neil is really complaining that she can’t get a sense of what the program actually does.  But why does she need to see inside a program or an “algorithm” to do that?  Why can’t she simply have access to an abstraction of the program that tells her what the program does in particular cases?
In point of fact, there are plenty of such tools.  They are software design tools, and they are perfectly capable of spitting out a data model that includes outputs for any given input.  So why can’t Ms. O’Neil use one of those? 
The answer, I submit, is that companies developing the software she looks at typically don’t use those design tools, explicitly or implicitly, to create programs.  A partial exception to this is in the case of agile development.  Really good agile development is based on an ongoing conversation with users leading to ongoing refinement of code – not just execs in the developing company and execs in the company you’re selling the software to, but ultimate end users.  And one of the things that a good human resources department and the interviewee want to know is exactly what the criteria are for hiring, and why they are valid.  In other words, they want a model of the program that tells them what they want to know, not dense thickets of code or even of code abstractions (including algorithms).
My other citation seems to go to the opposite extreme:  to assume that automation of a part of the management task using algorithms reflects best management practices automagically, as old hacker jargon would put it.  But we need to verify this, and the best way, again, is to offer a design model, in this case of the business process involved.  Why doesn’t the author realize this?  My guess is that he or she assumes that the developer will somehow look at the program or algorithm and figure this out.  And my guess is that he/she would be wrong, because often the program involves code written by another programmer, about which this programmer knows only the correct inputs to supply, and the algorithms are also often Deep Dark Secrets.
Notice how a probably wrong conception of what an algorithm is has led to attaching great importance to the algorithm involved, and little to the model embodied by the program in which the algorithm occurs.  As a result, O’Neil appears to be pointing the finger of blame at some ongoing complexity that has grown like Topsy, rather than at the company supplying the software for failing to practice good agile development.  Likewise, the other cite’s belief in the magical power of the algorithm has led him/her to ignore the need to focus on the management-process model in order to verify the assumed benefits.  As I said in the beginning, they are focusing on the gnat of the algorithm and ignoring the elephant of the model embodied in the software.

Action Items

So here’s how such misapprehensions play out for vendors on a grand scale (quoting an Isabella Kaminska article excerpted in Prof. deLong’s blog): “You will have heard the narrative.... Automation, algorithms and robotics... means developed countries will soon be able to reshore all production, leading to a productivity boom which leads to only one major downside: the associated loss of millions of middle class jobs as algos and robots displace not just blue collar workers but the middle management and intellectual jobs as well. Except... there’s no quantifiable evidence anything like that is happening yet.”  And why should there be?  In the real world, a new algorithm usually automates nothing (it’s the program using it that does the heavy lifting) and the average algorithm does little except give one software vendor a competitive advantage over others.
Vendors of, and customers for, this type of new software product therefore have an extra burden:  ensuring that these products deliver, as far as possible, only verifiable benefits for the ultimate end user.  This is especially true of products that can have a major negative impact on these end users, such as hiring/firing software and self-driving cars.  In these cases, it appears that there may be legal risks, as well:  A vendor defense of “It’s too complex to explain” may very well not fly when there are some relatively low-cost ways of providing the needed information to the customer or end user, and corporate IT customers are likewise probably not shielded from end user lawsuits by a “We didn’t ask” defense.
Here are some action items that have in the past shown some usefulness in similar cases:
·         Research software design tools, and if possible use them to implement a corporate IT standard of providing documentation at the “user API” level specifying in comprehensible terms the outputs for each class of input and why.
·         Adopt agile development practices that include consideration and documentation of the interests of the ultimate end users.
·         Create an “open user-facing API” for the top level of end-user-critical programs, that allows outside developers to (as an intermediary) understand what’s going on, and as a side-benefit to propose and vet extensions to these programs.  Note that in the case of business-critical algorithms, this trades a slight increase in the risk of reverse engineering for a probable larger increase in customer satisfaction and innovation speedup.
Above all, stop deifying and misusing the word “algorithm.”  It’s a good word for understanding a part of the software development process, when properly used.  When improperly used – well, you’ve seen what I think the consequences are and will be.