Sunday, November 12, 2017

How the Rich Can Pay Only 3 % of Their Actual Income in Taxes – Without Tricks


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
No, it’s not from hiding the money overseas, or tax-saving tricks that skirt the outer edges of US legality.  It’s from the peculiarities of the way the tax code handles capital gains taxes.  And a recent change to the estate laws to allow a “step-up in basis” is a big part of it.  Can you, the average reader, take full advantage of it?  Only if (a) much of your net worth is in stocks (preferably low-expense index funds), and (b) 9% times your total stock value is much greater than your yearly expenses.

What follows is an explanation using a “typical case” approximating the real-world experience of someone I know.

The Idle Rich

Let us suppose that you have a net worth of $10 million, entirely invested in a Vanguard or Fidelity S & P 500 index fund (expense ratio 0.1 %), with little or no work income and expenses of about $250,000 per year.  You reinvest dividends immediately back into the index fund.  Studies suggest that over the long term such an investment increases by 10.85 % per year.  If we subtract the expense ratio from that, you may expect actual income from this investment of about 10.75 % of its value every year, or $1,075,000.  Presto, you have made more than a million dollars this year, without working a day for it.

Now you need to pay income taxes – and here’s where things get complicated.   The money comes to you in two forms:  (1) dividends, which you reinvest, and (2) “capital gains” (all the rest).  My comparison of the S & P 500 total return index (which includes dividend reinvestment) to the S & P 500 index we all see on the financial pages suggests that in the long run, dividends increase your investment by 2.25 % per year ($225,000), leaving 8.5 % ($850,000) from capital gains.  The tax code handles these two cases differently.
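
To make the arithmetic concrete, here is a minimal Python sketch of the first-year income split; the only inputs are the figures already quoted above (10.85 % return, 0.1 % expense ratio, 2.25 % from dividends).

```python
# First-year income split under the assumptions above:
# 10.85% long-run total return, 0.1% expense ratio, 2.25 points of it from dividends.
net_worth = 10_000_000
total_return = 0.1085 - 0.0010                    # 10.75% after expenses
dividend_part = 0.0225                            # long-run dividend contribution
gain_part = total_return - dividend_part          # 8.5% from capital gains

print(f"Total income:  ${net_worth * total_return:,.0f}")   # ~$1,075,000
print(f"Dividends:     ${net_worth * dividend_part:,.0f}")   # ~$225,000 (reinvested)
print(f"Capital gains: ${net_worth * gain_part:,.0f}")       # ~$850,000 (unrealized until sold)
```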

Dividends

Your Federal tax bite from dividends is pretty straightforward:  15 % (let’s assume you live in a state with no dividend or capital gains taxes of its own; note also that at about $25 million in investment wealth the rate kicks up to 20 %).  So on the order of 2.25 percentage points of this year’s tax bite comes from dividends taxed at this 15 % rate, or about 2.25 % of your actual income (about $27,000).

Capital Gains

Capital gains are defined as the amount your stocks in that index fund have appreciated since you first bought them.  To maximize the yearly gain in net worth (for reasons too complicated to explain here), you are going to pay your expenses of $250,000 by selling stocks from your index fund. 

Let’s assume, for the sake of simplicity, that all your stocks are effectively just over 1 year old – as in the case where this is a new inheritance (explained later).  You typically have three choices of what you’re going to tell your index fund manager to sell: “first in, first out [FIFO]”, “average cost [basis]”, or “modified last in, first out [LIFO]” (you won’t necessarily see FIFO and LIFO presented that way).  FIFO really means “first bought, first sold”:  the oldest stocks, with the biggest capital gains, get sold first, so you typically don’t want that.  “Average cost” is, as it sounds, halfway between worst and best in most cases.  LIFO means “last bought, first sold”, and modified LIFO means that when you sell from your portfolio of stocks or index fund you actively arrange not to sell stocks bought less than a year ago – because otherwise your capital gains tax on those shares just about doubles.  In our simple case, all three selling approaches get the same result.
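
To see why the choice of selling method matters once lots have different ages, here is a small illustrative sketch; the lot ages, prices, and share counts are invented for the example, and in the simple all-just-over-a-year-old case above the three methods really do coincide.

```python
# Illustrative only: three tax lots of the same fund, bought at different times.
# Ages, costs, and share counts are invented for the example, not real market data.
lots = [  # (years_held, cost_per_share, shares)
    (5.0, 50.0, 1000),   # oldest lot, biggest unrealized gain
    (1.5, 90.0, 1000),
    (0.5, 105.0, 1000),  # newest lot, held less than a year (short-term)
]
price_now = 110.0
shares_to_sell = 1000

def realized_gain(chosen_lots):
    """Capital gain if we sell shares_to_sell shares drawn from chosen_lots in order."""
    remaining, gain = shares_to_sell, 0.0
    for years, cost, shares in chosen_lots:
        take = min(remaining, shares)
        gain += take * (price_now - cost)
        remaining -= take
        if remaining == 0:
            break
    return gain

fifo = sorted(lots, key=lambda lot: -lot[0])             # oldest (first-bought) lots first
avg_cost = sum(c * s for _, c, s in lots) / sum(s for _, _, s in lots)
average = [(0.0, avg_cost, sum(s for _, _, s in lots))]  # one blended lot at the average cost
modified_lifo = sorted([l for l in lots if l[0] >= 1.0], key=lambda lot: lot[0])
# newest lots first, but skipping anything held under a year (to stay long-term)

print(f"FIFO gain:          ${realized_gain(fifo):,.0f}")
print(f"Average-cost gain:  ${realized_gain(average):,.0f}")
print(f"Modified-LIFO gain: ${realized_gain(modified_lifo):,.0f}")
```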

Your capital gain on the $250,000 of stock you sell for yearly expenses is about 9 % of the sale (0.1075/1.1075).  A Federal tax rate of 15 % on that gain therefore adds roughly another 0.14 % to your tax bite, on top of the 2.25 % from dividend income.

Finally, index funds steadily buy and sell stocks throughout the years on their own hook, in order to reflect stocks entering and leaving the index.  In the case of the S & P 500, a typical year might see 10 stocks “turn over” like this, usually the ones least capitalized, for a rough guesstimate of 1 % of the index’s total value.  That’s a straightforward additional 0.15 %.

Total

Your total tax bite from your actual total income, therefore, is about 2.55 % of $1,075,000, or about $27,400.  However, this now flows into gross income, which is then “adjusted” for various tax breaks and then converted to “taxable income” via the usual standard or itemized deductions.  The typical rich household (married filing jointly) has at least $12,700 from these sources – so the net is at or below $14,700 (more like 1.45 % of actual income!).  Over the long term, this will creep up as the average stock gets “older”, but I will anticipate that discussion below and state that it effectively stays below 5 % for quite a long time.

[Note:  Here I don’t discuss the recently-added Net Investment Income Tax of 3.8 %, which appears to apply only to much richer individuals]

Thomas Piketty in his book Capital in the Twenty-First Century points out that the top 0.1 % of the US income distribution (which roughly corresponds, I believe, to $10 million and up in net worth) has the bulk of their net worth not in land or other assets, but in investments, mainly in stocks.  I conclude, therefore, that we may expect the savvy rich person to pay perhaps 4 % of his or her actual income – not the income declared in tax returns, but with the additional income from untaxed capital gains added back in – in income taxes this year.

In the Long Run, The Rich Are Dead, But They Still Don’t Pay Much Income Tax

There are two caveats that do apply to my scenario, which indeed drive the income tax rate of the rich higher.  However, on closer examination, they don’t really increase the tax rates of the rich that much.  These two caveats are:
1.       The rich don’t really behave like that; and
2.       In the long run, capital gains should approach 100 % of the stock’s value.

The Rich Don’t Really Behave Like That

 The first objection to my scenario under this heading is that the rich don’t invest the way I’ve described:   they invest more in (corporate and state) bonds.  One may also include property, but, again, Piketty notes that this is typically less than 10 % of the holdings of the rich and particularly the very rich.

The answer to this objection is that bonds simply don’t provide anywhere near the long-term return of stocks (more like 1-3 % after inflation), so that a 60-40 stock/bond split works out in practice to more like an 83-17 income split, and an 80-20 stock/bond split works out to more like a 92/8 income split, for a likely maximum of 1.6 % in additional taxes (on much less income!).  Moreover, most investors tend to prefer “tax-free” bonds, on which no Federal tax is due at all – if the investor does this exclusively, the overall tax rate actually decreases, so a better guess of the effect is more like a 0.5 % tax rate increase.
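
For the curious, the arithmetic behind those income splits is simple; here is a quick sketch assuming the ~10.75 % stock return used above and a bond return of about 3 % (the upper end of the 1-3 % range).

```python
# Rough check of how an asset split translates into an income split,
# assuming ~10.75% long-run stock returns and ~3% bond returns.
def income_split(stock_frac, stock_return=0.1075, bond_return=0.03):
    stock_income = stock_frac * stock_return
    bond_income = (1 - stock_frac) * bond_return
    total = stock_income + bond_income
    return stock_income / total, bond_income / total

for stock_frac in (0.60, 0.80):
    s, b = income_split(stock_frac)
    print(f"{stock_frac:.0%}/{1 - stock_frac:.0%} assets -> {s:.0%}/{b:.0%} income")
```

With those assumptions a 60-40 asset split yields roughly an 84-16 income split and an 80-20 asset split roughly 93-7, in line with the figures above.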

Next up is the idea that the rich typically don’t instruct their funds to do modified LIFO (“last bought, first sold”), for other reasons that may or may not be valid (e.g., complicating tax preparation, tax consequences when the market is going steadily down).  In fact, the real-world case I draw on uses “average cost basis” for precisely those reasons. 

When most stocks in the portfolio of the rich person are pretty new, say, in their first two years of ownership (the “short run”), the answer to this objection is that the tax effect of average costing is pretty minimal:  about 1.6 average years of total increase works out to about a 17 % average increase in stock value and a 15/85 gains/no gains split.  With adjustments for the fact that some of these stocks would be sold anyway by the index fund, it works out to about a 40 % increase in capital gains taxes, from about a 0.30 % tax bite to a 0.42 % one – significant, but still leaving us well below 3 %.  Over the long run, this case involves capital gains approaching 100 % of the stock’s value, so I’ll discuss that part of the answer to this particular objection below.
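
The compounding behind those figures, as a minimal sketch (same 10.75 % growth assumption as before):

```python
# Average-cost effect when the average lot is about 1.6 years old, growing ~10.75%/year.
avg_years_held = 1.6
appreciation = 1.1075 ** avg_years_held - 1        # ~17-18% average gain in value
taxable_share = appreciation / (1 + appreciation)  # ~15% of each sale is taxable gain
print(f"Average appreciation: {appreciation:.1%}; taxable share of a sale: {taxable_share:.1%}")
```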

The third objection to my scenario is that most rich people spend more per year on expenses than $250,000.  This is true; and yet the key figure here is actually the ratio of expenses to income.  As long as the ratio of expenses to income (about 0.2) in my scenario is the same as that of the average rich person, our analysis doesn’t alter in the slightest.

And the evidence appears to be, if anything, that as you consider richer and richer people, the ratio of expenses to income goes down.  By the time you reach $100 million, it would take buying a $10 million house every five years to approximate an 80/20 split.  By the time you reach $1 billion, nothing short of a $50-million political investment every two years would do.  And as that ratio dips, the percentage of income that must be paid in capital gains taxes goes down.  Assuming, for example, a $100 million fortune and $1.25 million per year in expenses, we are talking about capital gains taxes cut in half compared to our scenario.  Of course, at that point the deductions have much less effect, but the net effect is still to cut our “real-world” tax bite to well below 2 %.

In the Long Run, Capital Gains Should Approach the Stock’s Value

It would seem reasonable, given my scenario, that as the average stock in the rich person’s portfolio far surpasses its original value, the average ratio of original value to value now would approach zero.  Certainly, I have heard of cases in estates where “generation-skipping trust” stocks, when sold, had a cost basis of 1 or 2 % of the total, and therefore 98 or 99 % of the value of the stock was taxable as capital gains.

Let’s be more concrete.  To a first approximation, the person newly coming into a $10 million fortune in investment stocks is in his or her mid-40s, and has maybe 30 years to live.  What does the capital gains situation look like in the tax returns for his or her 75th and last year, and what therefore is the tax rate?

Under average costing, if we assumed no stocks have been sold in the meantime, the average stock has been in the portfolio for 30 years and has grown by a factor of about 13.3 (1.09 to the 30th power), a gain of roughly 1,230 %.  So about 92-93 % of the portfolio would now seem to be capital gains.
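
The no-sales version of this calculation is only a couple of lines; here it is as a sketch, using the 9 % growth figure above.

```python
# If the whole portfolio simply compounded at 9% for 30 years with nothing ever sold:
factor = 1.09 ** 30                # ~13.3x the original value (a gain of ~1,230%)
gains_share = 1 - 1 / factor       # share of today's value that is capital gains
print(f"Growth factor: {factor:.1f}x; capital-gains share of the portfolio: {gains_share:.0%}")
```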

But, in fact, we know that stocks have been sold in the meantime, for expenses and (by the index fund manager) to keep up with the underlying index – about 21-22 % of income, meaning about 3.5 % of total stocks, year after year.  So, after 30 years, every stock in the portfolio has been sold an average of 1.17 times, and the actual age is more like 12.8 years, with an average gain of 300 %, so the actual percentage of the portfolio which is capital gains is now 75 %.  In turn, that means that the tax bite for capital gains is about 11% of total income and therefore the overall total tax rate has now climbed to 13.25 %.  

But wait, there’s more.  Our rich person is paying this rate in year 30.  Remember, his or her underlying goal is not to minimize the tax rate at any one point in time, but overall taxes throughout the 30-year period.  And that means that we should consider the fact that in the 30th year, he or she is paying taxes on capital gains an average of 15 years late.  To put it in terms of purchasing power, if we assume inflation of 1.5 % per year over that time period (typical of the last 10 years), he or she may be paying an 11 % capital gains bite in tomorrow’s dollars, but that’s the same thing as paying less than 9 % in today’s dollars.  If we assume a more historical 3 % rate of inflation, it’s more like 6.5 % (although, for the rich person, the higher inflation is, the less the “real” income both before and after taxes).  And thus, the “real” overall tax rate is back down to about 8.75 %!
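
Here is that discounting as a sketch; with the assumed 15-year average deferral it lands just under 9 % at 1.5 % inflation and a little above the 6.5 % quoted at 3 % inflation (the exact figure depends on how the deferral is counted).

```python
# An 11%-of-income capital gains bite paid an average of 15 years late,
# deflated into today's dollars at two illustrative inflation rates.
nominal_bite = 0.11
for inflation in (0.015, 0.03):
    real_bite = nominal_bite / (1 + inflation) ** 15
    print(f"At {inflation:.1%} inflation: ~{real_bite:.1%} in today's dollars")
```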

What About When You Die?

Now, it would seem that when the rich person in my scenario dies, the chickens come home to roost – or, to put it another way, most of the capital gains of the rich person are finally paid out in taxes, one way or another.  After 30 years, the final estate has grown to around $133 million (1.09 to the 30th power).  Where a husband and wife are involved, about $122 million of that is subject to the 40 % estate tax, so roughly $48.8 million of stock would need to be sold to pay it, plus enough to cover the 11 % capital gains tax rate on the sold stocks (about $5 million) and miscellaneous fees ($0.5 to 1 million).  So we have net taxes of about $54.5 million, or 41 % of the estate.  Add to this the previous 30 years of capital gains taxes on stock income averaging about $6 million a year, at an average rate of around 3 % (0.03 times 30 times 6, or $5.4 million), and the total tax on capital gains would seem to be more like 45 %.

Except for two things:  Factor 1, the “step-up in basis”, and Factor 2, the effects of inflation.

Factor 1:  Because of the recent law allowing stocks to be reset to show zero capital gains at the point of our rich person’s death, their heir(s) no longer need to pay that extra $5 million in taxes to cover capital gains taxes from selling stocks.  So now we’re back down to about 41 %. 

Factor 2:  We are paying that $55 million in tomorrow’s dollars – dollars 30 years on, to be precise.  Assuming a historical inflation of 3 % per year, those dollars are worth about 41 cents on the dollar in today’s terms.  And that, in turn, means that we are really down to a capital gains rate on actual income of about 16.9 %.
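
A minimal sketch of Factor 2, using the roughly 41 % nominal bite and 3 % inflation figures above:

```python
# Factor 2: the estate-level bite is paid in year-30 dollars; deflate it to today's.
estate = 10_000_000 * 1.09 ** 30     # ~$133 million after 30 years of ~9% growth
nominal_bite = 0.41                  # taxes plus fees as a share of the estate (from above)
deflator = 1 / 1.03 ** 30            # ~0.41: what a year-30 dollar is worth today at 3% inflation
print(f"Estate ~${estate / 1e6:.0f}M; real bite ~{nominal_bite * deflator:.1%} in today's terms")
```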

In other words, after these two considerations, the rich person has really paid the government only about 12 % more, in relative terms, than the 15 % Federal capital gains tax rate.  And the clock is now reset to our original scenario, as the heirs pay less than 3 % of actual income on their next income tax bill on the remaining stocks.

Contrast this, by the way, to the schmoes who earn between the median wage of about $60,000 and $200,000 per year.  Yes, they don’t have estate taxes, but indications are that they pay more than 20 % of real total income in income taxes – and then, if they save a fair amount for their old age, get taxed another few percentage points on their savings from any stock investments they have, which typically yield much less than 9 % per year anyway!

Advice and Horror

To close out, let’s return to the question of whether the average reader can approximate this.  As I see it, to pull this off a reader should do three things:
1.       Start off with sufficient net worth in investments in diversified and low-expense-ratio stocks (at a guesstimate, a minimum of $3-5 million) so that your yearly expenses can be about 2.5 % of that net worth.
2.       If you want to milk the last drop of profit from this scheme, arrange it, if possible, so that modified LIFO is your strategy for all stocks sold.
3.       Make sure that between 1 % and 4 % of your stocks are “churned” – sold and bought – each year while preserving diversification and a low expense ratio.  An index fund like a good S & P 500 one plus paying most of your expenses out of stocks are excellent ways to accomplish this.
At this point, I should remind the reader that the most important thing for him or her is not minimizing taxes, but maximizing net worth after taxes.  Things like low-cost index funds and reinvesting dividends are valuable because they help maximize net worth before taxes, and hence (because they have few or no tax effects per se) net worth after taxes.  As I said in my unpublished book on personal finance, the strategy that aims to maximize income is almost always better than the alternative strategy that tries to minimize taxes.

And finally, I just want to express my personal feeling of horror at the implications of this analysis.  Readers should need no reminding to contrast this situation with income from wage work, which can be taxed at 5 to 10 times the rate for investments when your wage is between the median of about $60,000 a year and, say, $200,000 a year.   And for what?  The rich don’t work harder on average, they spend a much smaller share of their income back into the economy, thereby lowering the income of the rest of us, and if they’re top-level managers they also excessively squeeze wage income for the rest of us, as Piketty documents extensively.  And in the long run, as Piketty also notes, by using some of their spare money to change politically the rules of the economic game in their favor, they raise the likelihood of serious recessions and depressions that harm all of us.

And there’s another apparent effect that really bothers me.  In my scenario, the government sees most capital gains income for the first time, not when the income is gained, but in the reporting for the estate, 30 years from now.  That means that, on average, in real terms, we see the real investment income of the rich 15 years after it occurs, when it has perhaps two-thirds of its actual value.  If this is so, we are seriously, seriously underestimating today’s income of the rich, the degree of income and wealth inequality in this society, and the degree to which this tax code is causing that inequality.

Caveat homo medianus!  Let the average person beware! 

Saturday, December 31, 2016

A Short Look Back at 2016


I have found very little in the last month and a half to add to previous posts.  CO2 continues its alarming rise, to what will in all likelihood be more than 406 ppm (yearly average) by end of year, and while global temperatures have ended their string of monthly records, we are still on course for a 1.2 degree C rise since 1850, about 0.3 degrees of it in the last 2 ½ years.  The big news in the Arctic (and Antarctic) is an unprecedented low in sea ice extent/area, plus record high temps in the Arctic in December – but that continues a smaller trend evident in most of the first half of 2016. 

Meanwhile, on the computing side, relatively little real-world innovation happened this year.  In-memory computing continued its steady rise in both performance and applicability, with smaller companies taking more of a lead as compared to 2015.  While blockchain technology and quantum computing made a big news splash early in the year, careful reading of “use cases” shows that real-world implementations of blockchain are thin on the ground or non-existent, as most companies try to figure out how best to make it work, while quantum computing is clearly far from real-world usefulness as of yet.

The big news in both areas, alas, is therefore the election of Donald Trump as President.  In the climate change area, as I predicted, he is proving to be an absolute disaster, with nominees for at least six posts that are climate change deniers with every incentive to make the American government a hindrance rather than a help in efforts to change “business as usual.” 

In the computing area, we see the spectacle of some large computing firms offering their services in public to Trump, an unprecedented move based on the calculation that while being seen as cooperative may not bring any benefits, failure to act in this way may cause serious problems for the firm.  Thus, we see Silicon Valley execs whose workforces are not at all enthused about Trump acting in meetings with him as if he offers new business opportunities, and IBM’s CEO announcing ways in which IBM technology can aid in achieving his presumed goals. 

It may seem odd to give such prominence to the personality of the President in assessing either climate change or the computing industry.  The fact is, however, that all of the moves I have cited are unprecedented, and derive from Trump’s personality.  To fail to consider this in assessing the likely long-run effects of the “new abnormal” in both the sustainability field and the computing industry is, imho, a failure to be an effective computer industry analyst.  And while no one likes a perpetually downbeat analyst, one that continually predicts rosy outcomes in this type of situation is simply not worth listening to.

I look back on 2016, and I see little that is permanent to celebrate – although the willingness of the media to begin to report on and accept climate change is, however temporary, worth noting.  I wish I could say that there is hope for better things in 2017; but as far as I can see, there isn’t.

Tuesday, November 15, 2016

Climate Change: The News Is Sadder Than You Think


Hopefully, any readers are aware of the likely climate-change implications of the election of Donald Trump to the US Presidency.  These include disengagement from international efforts to slow climate change, making government-driven change much more difficult; deregulation of “carbon pollution”, with predictable effects on the ability of wind and solar to replace rather than supplement oil; and effective barriers to any use of incentives (“carbon tax” or “carbon market”) to drive carbon-emissions reduction.  The effect on climate-change efforts is that China (hopefully) and Europe will have to drive them; which is a bit like trying to pedal a tricycle with one wheel gone.

And that doesn’t even include the likely effects of the new administration’s cuts in funding to NOAA, on whose metrics much of the world depends.  We have seen this before, in miniature, during the HW Bush years. 

But there is more sad news – much more – some of which I have learned very recently.  It concerns – well, let’s just go into the details.

Sea-Level Rise By Century’s End:  3 Feet and Rising


One recent factoid published, iirc, in thinkprogress.org/tagged/climate, is that in the period from late 2014 to early 2016, oceanic water levels rose by 15 mm, i.e., at an annual rate of 10 mm – up from 3 mm per year before that.  Very little of that rise is due to el Nino, which began around Dec.-Jan. of this year and ended around April-May.  Instead, there is a clear connection to the sharp rise in global temperature which began in 2014. 

So let’s put this number (10 mm/year) in perspective.  If we guesstimate the first 14 years of the 21st century at 3 mm/year, then 42 mm are already banked.  To get to 914 mm (3 feet) will therefore require about 87 more years.  There are 86 years from 2015 to 2100.  So at today’s rates, we will reach 3 feet of sea level rise some time around the year 2102.
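
The back-of-the-envelope projection, as a sketch:

```python
# Years to reach 3 feet (914 mm) of sea level rise at the new ~10 mm/year rate.
already_banked_mm = 14 * 3                     # ~42 mm over 2001-2014 at ~3 mm/year
years_needed = (914 - already_banked_mm) / 10  # at 10 mm/year from 2015 on
print(f"About {years_needed:.0f} more years -> roughly the year {2015 + years_needed:.0f}")
```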

In other words, 3 feet of sea level rise by 2100 – the prediction widely disseminated as late as a year ago – is already “baked in”.  And what reason do we have for thinking that matters will stop here?  There is no reason to expect global temperatures to decrease over the medium term, and good reason (atmospheric carbon increases) to expect them to continue to increase.  So we are talking sea level rise of over a foot by 2050, at the very least, and we are wondering if the new “more realistic” estimate of 6-9 feet of sea level rise may be too optimistic.

The Faster Rise of Atmospheric CO2 Isn’t Going Away – and Other Greenhouse Gases Are Following


Let’s start with the CO2 rise that I have been following since early this year.  The “baked in” (yearly average) amount of CO2 has reached, effectively, 405 ppm as of October’s results, about 1 ½ years after it passed 400 ppm permanently.  More alarmingly, the surge caused at least partly by el Nino is not going away.  I have been following CO2 measurements from Mauna Loa for about 5 years, and before this year the 10-year average rate of increase had always been a little more than 2 ppm per year.  The el Nino and its follow-on have added almost 3 ppm in the past year alone, pushing that average rate toward 2.2 ppm per year.  And the new rate of increase shows little sign of stopping, so that we can project an average rate of about 2.4 ppm per year within the next 5 years -- a 20 percent increase.

Global Warming Appears to Follow – Right Now


A second measurement of greenhouse gases in the atmosphere – this one including all greenhouse gases, e.g., methane – has recently been discussed in neven1.typepad.com.  It now stands at over 483 ppm.  Some caution must be used in assessing this figure, since we have no figure for 1850 and the years following.  However, we can make a rough guesstimate, based on the facts that natural methane production would then have been lower than now, man-made methane production almost non-existent, and as of the early 2010s human methane production was approximately equivalent to natural methane production. 

This suggests that at least 2/3 of the gap between 400-405 ppm of carbon and 483 ppm of greenhouse gases as a whole is due to increases in human-caused non-CO2 greenhouse gas emissions.  To put it another way, the human activities that have caused the 125-odd ppm increase in CO2 have also caused at least 2/3 of the 80-odd ppm difference between today’s CO2 atmospheric ppm and today’s total greenhouse gas atmospheric ppm.

This is consistent with Hansen’s thesis that a doubling of carbon ppm in the atmosphere would lead to 4 degrees C global warming, not 2 degrees – the additional warming comes partly from the aforementioned non-CO2 greenhouse gases, and partly from the additional “stored heat” at the Earth’s surface due to changed albedo as snow melts.  The reason for the difference between the two estimates, afaik, is that no one knew just how fast this additional warming would occur, since there is (and was, 55 million and 250 million years ago) clearly a lag time between large atmospheric carbon rises and increased non-CO2 greenhouse gases and “stored heat”. 

In summary, not only is the rise of atmospheric CO2 speeding up, but its follow-on effects appear to have a much shorter lag time than we thought.  And so, there is real reason to fear that the global warming we are afraid of (presently estimated at 1.2-1.3 degrees C above 1850 already) will in the near and intermediate term increase faster than we thought.

The Arctic Sea Melt Resumes


As late as a month ago, it was possible to argue that the ongoing melting away of Arctic sea ice was still on the “pause” it had been on since 2012.  And then, surprisingly, the usual refreezing occurring during October stopped.  It stopped for several weeks.  As a result, the average Arctic sea ice volume year-round reached a record low, and kept going.  And going.  As of the beginning of November, the average is now far below all previous years.

What caused this sudden stoppage?  Apparently, primarily unprecedented October oceanic and atmospheric heat in the areas where refreeze typically occurs in late October.  This is apparently much the same reason that minimum Arctic volume by some measures reached a new low in late September, despite a melting season with weather that in all previous years had resulted in much less melt than usual.

And what will prevent this from happening next year, and the year after?  Nothing, apparently (note that the 2016 el Nino had little or no effect on Arctic sea ice melt).  It now appears that we are still facing a September “melt-out” by the mid-2030s, at best.  I am happy that the direst predictions (2018 and thereabouts) are almost certainly not going to happen; and yet the scientific consensus has gone from “melt-out at 2100 only if we continue business as usual” to “melt-out around 2035 with much less chance of avoidance” in the last 5 years, and I hope against hope, given everything else that’s happening, that the forecast doesn’t slip again.

The Bottom Line:   Agility, Not Flexibility


I find, at the end, that I need to re-emphasize my concern, not just about the first 4 degrees C of temperature rise, but even more so the next, and the next.  And the next after that, the final rise.

I find that a lot of people discount scientific warnings about loss of food production and so on, reasoning that the global market can, as it has done in the past, adapt to these problems at little cost, by using less water-intensive methods of farming, shifting farming production allocations north (and south) as the temperature increases, and handling disaster costs as usual while doing quick fixes on existing facilities to adapt.  In other words, our system is highly flexible; flexible enough to get us through the next 40-50 years, I think, while providing food for the developed world and probably the large majority of humanity.

But, as a systems analyst will tell you, such a flexible system tends to make the inevitable crash far worse.  Patching existing processes (in climate change terms, adaptation) rather than fixing the problem at its root (in climate change terms, mitigation) causes over-investment in the present system that makes changing to a new, needed one far more costly – and therefore, far more likely to deep-six the company, or, in this case, the global market.

At a certain point, the average of national markets going south passes the average of global markets still growing, and then the cycle starts running in reverse:  smaller and smaller markets that can be serviced at higher and higher costs, with food scarcer and scarcer.  The only way to avoid total collapse into a system with inefficient production of the 1/10 of the food necessary for the survival of 9 billion people is mitigation; but the cost to do that is 10 times, 100 times what it was.  The result is brutal military dictatorships where the commander is the main rich person, as has happened so often throughout history.  Today’s rich will suffer less, because they make accommodations with the military; but they will on average suffer severely, by famine and disease. 

An agile system (here I am speaking about the ideal, not today’s usually far-from-agile companies) anticipates this eventuality, and moves far more rapidly towards fundamental change.  It can be done.  But, as of now, we are headed in the opposite direction.

Tuesday, October 25, 2016

The Cult of the Algorithm: Not So Fast, Folks

Sometimes I feel like Emily Litella in the old Saturday Night Live skit, huffing and puffing in offense while everyone wonders what I’m talking about.  That’s particularly true in the case of new uses of the word “algorithm.”  I find this in an interview by NPR of Cathy O’Neil, author of “Weapons of Math Destruction”, where part of the conversation is “We have these algorithms … we don’t know what they are under the hood … They don’t say, oh, I wonder why this algorithm is excluding women”.  I find this in the Fall 2016 Sloan Management Review, where one commenter says “developer ‘managers’ provide feedback to the workers in the form of tweaks to their programs or algorithms … the algorithms themselves are sometimes the managers of human workers.” 
As a once-upon-a-time computer scientist, I object.  I not only object, I assert that this is fuzzy thinking that will lead us to ignore the elephant in the living room of problems in the modeling of work/management/etc. to focus on the gnat on the porch of developer creation of software.
But how can I possibly say that a simple misuse of one computer term can have such large effects?  Well, let’s start by understanding (iirc) what an algorithm is.

The Art of the Algorithm

As I was taught it at Cornell back in the ‘70s (and I majored in Theory of Algorithms and Computing), an algorithm is an abstraction of a particular computing task or function that allows us to identify the best (i.e., usually, the fastest) way of carrying out that task/function, on average, in the generality of cases.  The typical example of an algorithm is one for carrying out a “sort”, whether that means sorting numbers from lowest to highest or sorting words alphabetically (theoretically, they are much the same thing), or any other variant.  In order to create an algorithm, one breaks down the sort into unitary abstract computing operations (e.g., add, multiply, compare), assigns costs to each, and then specifies the steps (do this, then do this).  Usually it turns out that one operation costs more than the others, and so sort algorithms can be reduced to considering the overall number of comparisons as a function of the number of items n, as n increases from one to infinity.
Now consider a particular algorithm for sorting.  It runs like this:  Suppose I have 100 numbers to sort.  Take the first number in line, compare it to all the others, determine that it is the 23rd lowest.  Do the same for the second, third, … 100th number.  At the end, for any n, I will have an ordered, sorted list of numbers, no matter how jumbled the numbers handed to me are.
This is a perfectly valid algorithm.  It is also a bad algorithm.  For every 100 numbers, it requires on the order of 100 squared comparisons, and for any n, it requires on the order of n squared comparisons.  We say that this is an “order of n squared” or O(n**2) algorithm.  But now we know what to look for, so we take a different approach.
Here it is:  We pick one of the 100 numbers as a “pivot” and sweep through the list from both ends, swapping numbers that are on the wrong side of the pivot, stopping when the two sweeps meet.  At that point we have partitioned the list into two buckets, one containing the numbers less than the pivot (all of which are guaranteed to sort before it), and one containing the numbers greater than the pivot (all of which are guaranteed to sort after it).  We repeat the process on each bucket until we have reached buckets containing one number.  On average, there will be O(logarithm to the base two, or “log”, of n) such splits, and each level of splitting performs O(n) comparisons.  So the average number of comparisons in this sorting algorithm is O(n times log n), called O(n log n) for short, which is way better than O(n**2) and explains why the algorithm is now called Quicksort.
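For readers who want to see the idea in running code, here is a minimal Quicksort sketch in Python (real library implementations choose pivots more carefully and sort in place, but the shape of the algorithm is the same):

```python
def quicksort(items):
    """Minimal Quicksort: partition around a pivot, then sort each bucket recursively."""
    if len(items) <= 1:
        return items                        # a bucket of zero or one numbers is already sorted
    pivot = items[len(items) // 2]          # any element will do; the middle is a common choice
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([23, 5, 42, 5, 16, 8, 4]))  # [4, 5, 5, 8, 16, 23, 42]
```

A consistently unlucky pivot degrades this to O(n**2), which is why production versions put real effort into pivot selection.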
Notice one thing about this:  finding a good algorithm for a function or task says absolutely nothing about whether that function or task makes sense in the real world.  What does the heavy lifting in creating a useful new program is more along the lines of a “model”, implicit in the mind of the company or person driving development, or made explicit in the program software actually carrying out the model.   An algorithm doesn’t say “do this”; it says, “if you want to do this, here’s the fastest way to do it.”

Algorithms and the Real World

So why do algorithms matter in the real world?  After all, any newbie programmer can write a program using the Quicksort algorithm, and there are a huge mass of algorithms available for public study in computer-science journals and the like.  The answer, I believe, lies in copyright and patent law.  Here, again, I know somewhat of the subject, because my father was a professor of copyright law and I held some conversations with him as he grappled with how copyright law should deal with computer software, and also because in the ‘70s I did a little research into the possibility of getting a patent on one of my ideas (it was later partially realized by Thinking Machines).
To understand how copyright and patent law can make algorithms matter, imagine that you are Google, 15 or so years ago.  You have a potential competitive advantage in your programs that embody your search engine, but what you would really like is to turn that temporary competitive advantage into a more permanent one, by patenting some of the code (not to mention copyrighting it to prevent disgruntled employees from using it in their next job).  However, patent law requires that this be a significant innovation.  Moreover, if someone just looks at what the program does and figures out how to mimic it with another search engine set of programs (a process called “reverse engineering”), then that does not violate your patent.
However, suppose you come up with a new algorithm?  In that case, you have a much stronger case for the program embodying that algorithm being a significant innovation (because your program is faster and [usually] therefore can handle many more petabytes or thousands of users), and the job of reverse engineering the program becomes much harder, because the new algorithm is your “secret sauce”. 
That means, if you are Google, that your new algorithm becomes the biggest secret of all, the piece of code you are least likely to share with the outside world – outsiders can’t figure out what is going on.  And all the programs written using the new algorithm likewise become much more “impenetrable”, even to many of the developers writing them.  It’s not just a matter of complexity; it’s a matter of preserving some company’s critical success factor.  Meanwhile, you (Google) are seeing if this new algorithm leads to another new algorithm – and that compounds the advantage and secrecy.
Now, let me pause here to note that I really believe that much of this problem is due to the way patent and copyright law adapted to the advent of software.  In the case of patent law, the assumption used to be that patents were on physical objects, and even if it was the idea that was new, the important thing was that the inventor could offer a physical machine or tool to allow people to use the invention.  However, software is “virtual” or “meta” – it can be used to guide many sorts of machines or tools, in many situations; at its best, it is in fact a sort of “Swiss Army knife”.  Patent law has acted as if each program was physical, and therefore what mattered was the things the program did that hadn’t been done before – whereas if the idea is what matters, as it does in software, then a new algorithm or new model should be what is patentable, not “the luck of tackling a new case”.
Likewise, in copyright law, matters were set up so that composers, writers, and the companies that used them had a right to be paid for any use of material that was original – it’s plagiarism that matters.  In software, it’s extremely easy to write a piece of a program that is effectively identical to what someone else has written, and that’s a Good Thing.  By granting copyright to programs that just happened to be the first time someone had written code in that particular way, and punishing those who (even if they steal code from their employer) could very easily have written that code on their own, copyright law can fail to focus on truly original, creative work, which typically is associated with new algorithms.
[For those who care, I can give an example from my own experience.  At Computer Corp. of America, I wrote a program that incorporated an afaik new algorithm that let me take a page’s worth of form fields and turn it into a good imitation of a character-at-a-time form update.  Was that patentable?  Probably, and it should have been.  Then I wrote a development tool that allowed users to drive development by user-facing screens, program data, or the functions to be coded in the same general way – “have it your way” programming.  Was that patentable? Probably.  Should it have been?  Probably not:  the basic idea was already out there, I just happened to be the first to do it.]

It's About the Model, Folks

Now let’s take another look at the two examples I cited at the beginning of this post.  In the NPR interview, O’Neil is really complaining that she can’t get a sense of what the program actually does.  But why does she need to see inside a program or an “algorithm” to do that?  Why can’t she simply have access to an abstraction of the program that tells her what the program does in particular cases?
In point of fact, there are plenty of such tools.  They are software design tools, and they are perfectly capable of spitting out a data model that includes outputs for any given input.  So why can’t Ms. O’Neil use one of those? 
The answer, I submit, is that companies developing the software she looks at typically don’t use those design tools, explicitly or implicitly, to create programs.  A partial exception to this is in the case of agile development.  Really good agile development is based on an ongoing conversation with users leading to ongoing refinement of code – not just execs in the developing company and execs in the company you’re selling the software to, but ultimate end users.  And one of the things that a good human resources department and the interviewee want to know is exactly what the criteria are for hiring, and why they are valid.  In other words, they want a model of the program that tells them what they want to know, not dense thickets of code or even of code abstractions (including algorithms).
My other citation seems to go to the opposite extreme:  to assume that automation of a part of the management task using algorithms reflects best management practices automagically, as old hacker jargon would put it.  But we need to verify this, and the best way, again, is to offer a design model, in this case of the business process involved.  Why doesn’t the author realize this?  My guess is that he or she assumes that the developer will somehow look at the program or algorithm and figure this out.  And my guess is that he/she would be wrong, because often the program involves code written by another programmer, about which this programmer knows only the correct inputs to supply, and the algorithms are also often Deep Dark Secrets.
Notice how a probably wrong conception of what an algorithm is has led to attaching great importance to the algorithm involved, and little to the model embodied by the program in which the algorithm occurs.  As a result, O’Neil appears to be pointing the finger of blame at some ongoing complexity that has grown like Topsy, rather than at the company supplying the software for failing to practice good agile development.  Likewise, the other cite’s belief in the magical power of the algorithm has led him/her to ignore the need to focus on the management-process model in order to verify the assumed benefits.  As I said in the beginning, they are focusing on the gnat of the algorithm and ignoring the elephant of the model embodied in the software.

Action Items

So here’s how such misapprehensions play out for vendors on a grand scale (quote from an Isabella Kaminska article excerpted in Prof. deLong’s blog, delong.typepad.com):  “You will have heard the narrative.... Automation, algorithms and robotics... means developed countries will soon be able to reshore all production, leading to a productivity boom which leads to only one major downside: the associated loss of millions of middle class jobs as algos and robots displace not just blue collar workers but the middle management and intellectual jobs as well. Except... there’s no quantifiable evidence anything like that is happening yet.”  And why should there be?  In the real world, a new algorithm usually automates nothing (it’s the program using it that does the heavy lifting) and the average algorithm does little except give one software vendor a competitive advantage over others.
Vendors of, and customers for, this type of new software product therefore have an extra burden:  ensuring that these products deliver, as far as possible, only verifiable benefits for the ultimate end user.  This is especially true of products that can have a major negative impact on these end users, such as hiring/firing software and self-driving cars.  In these cases, it appears that there may be legal risks, as well:  A vendor defense of “It’s too complex to explain” may very well not fly when there are some relatively low-cost ways of providing the needed information to the customer or end user, and corporate IT customers are likewise probably not shielded from end user lawsuits by a “We didn’t ask” defense.
Here are some action items that have in the past shown some usefulness in similar cases:
·         Research software design tools, and if possible use them to implement a corporate IT standard of providing documentation at the “user API” level specifying in comprehensible terms the outputs for each class of input and why.
·         Adopt agile development practices that include consideration and documentation of the interests of the ultimate end users.
·         Create an “open user-facing API” for the top level of end-user-critical programs, that allows outside developers to (as an intermediary) understand what’s going on, and as a side-benefit to propose and vet extensions to these programs.  Note that in the case of business-critical algorithms, this trades a slight increase in the risk of reverse engineering for a probable larger increase in customer satisfaction and innovation speedup.
Above all, stop deifying and misusing the word “algorithm.”  It’s a good word for understanding a part of the software development process, when properly used.  When improperly used – well, you’ve seen what I think the consequences are and will be.

Wednesday, September 21, 2016

August 16th, 2070: Rising Waters Flood Harvard Cambridge Campus, Harvard Calls For More Study of Problem


A freak nor’easter hit the Boston area yesterday, causing a 15-foot storm surge that overtopped the already precarious walls near the Larz Anderson Bridge.  Along with minor ancillary effects such as the destruction of much of Back Bay, the Boston Financial District, and Cambridgeport, the rampaging flood destroyed the now mostly-unused eastern Harvard Business School and Medical School, as well as the eastern Eliot and Lowell Houses.  The indigent students now housed in Mather House can only reach the dorms in floors 5 and above, by boat, although according to Harvard President Mitt Romney III the weakened structural supports should last at least until the end of the next school year.
A petition was hastily assembled by student protest groups last night, and delivered to President Romney at the western Harvard campus at Mt. Wachusett around midnight, asking for immediate mobilization of school resources to fight climate change, full conversion to solar, and full divestment from fossil-fuel companies. President Romney, whose salary was recently raised to $20 million due to his success in increasing the Harvard endowment by 10% over the last two years, immediately issued a press release stating that he would gather “the best and brightest” among the faculty and administration to do an in-depth study and five-year plan for responding to these developments.  Speaking from the David Koch Memorial Administrative Center, he cautioned that human-caused climate change remained controversial, especially among alumni.  He was seconded by the head of the Medical School, speaking from the David Koch Center for the Study of Migrating Tropical Diseases, as well as the head of the Business School, speaking from the David Koch Free-Market Economics Center. 
President Romney also noted that too controversial a stance might alienate big donors such as the Koch heirs, which in turn allowed indigent students to afford the $100,000 yearly tuition and fees.  He pointed out that as a result of recent necessary tuition increases, and the decrease in the number of students able to afford them from China and India due to the global economic downturn, the endowment was likely to be under stress already, and that any further alienation by alumni might mean a further decrease in the number of non-paying students.  Finally, he noted the temporary difficulties caused by payment of a $50 million severance package for departing President Fiorina.
Asked for a comment early this morning, David Koch professor of environmental science Andrew Wanker said, “Reconfiguring the campus to use solar rather than oil shale is likely to be a slow process, and according to figures released by the Winifred Koch Company, which supplies our present heating and cooling systems, we have a minimal impact on overall CO2 emissions and conversion will be extremely expensive.”  David Koch professor of climate science Jennifer Clinton said, “It’s all too controversial to even try to tackle.  It’s such a relief to consider predictions of further sea rise, instead.  There, I am proud to say, we have clearly established that the rest of the eastern Harvard campus will be underwater sometime within the next 100 years.”
In an unrelated story, US President for Life Donald Trump III, asked to comment on the flooding of the Boston area, stated “Who really cares?  I mean, these universities are full of immigrant terrorists anyway, am I right?  We’ll just deport them, as soon as I bother to get out of bed.”

Monday, September 19, 2016

HTAP: An Important And Useful New Acronym


Earlier this year, participants at the second In-Memory Summit frequently referred to a new marketing term for data processing in the new architectures:  HTAP, or Hybrid Transactional-Analytical Processing.  That is, “transactional” (typically update-heavy) and “analytical” (typically read-heavy) handling of user requests are thought of as loosely coupled, with each database engine somewhat optimized for cross-node, networked operations. 

Now, in the past I have been extremely skeptical of such marketing-driven “new acronym coinage,” as it has typically had underappreciated negative consequences.  There was, for example, the change from “database management system” to “database”, which has caused unending confusion about when one is referring to the system that manages and gives access to the data, and when one is referring to the store of data being accessed.  Likewise, the PC notion of “desktop” has meant that most end users assume that information stored on a PC is just a bunch of files scattered across the top of a desk – even “file cabinet” would be better at getting end users to organize their personal data.  So what do I think about this latest distortion of the previous meaning of “transactional” and “analytical”?

Actually, I’m for it.

Using an Acronym to Drive Database Technology


I like the term for two reasons:

1.       It frees us from confusing and outdated terminology, and

2.       It points us in the direction that database technology should be heading in the near future.

Let’s take the term “transactional”.  Originally, most database operations were heavy on the updates and corresponded to a business transaction that changed the “state” of the business:  a product sale, for example, reflected in the general ledger of business accounting. However, in the early 1990s, pioneers such as Red Brick Warehouse realized that there was a place for databases that specialized in “read” operations, and that functional area corresponded to “rolling up” and publishing financials, or “reporting”.  In the late 1990s, analyzing that reporting data and detecting problems were added to the functions of this separate “read-only” area, resulting in Business Intelligence, or BI (similar to military intelligence) suites with a read-only database at the bottom.  Finally, in the early 2000s, the whole function of digging into the data for insights – “analytics” – expanded in importance to form a separate area that soon came to dominate the “reporting” side of BI. 

So now let’s review the terminology before HTAP.  “Transaction” still meant “an operation on a database,” whether its aim was to record a business transaction, report on business financials, or dig into the data for insights – even though the latter two had little to do with business transactions.  “Analytical”, likewise, referred not to monthly reports but to data-architect data mining – even though those who read quarterly reports were effectively doing an analytical process.  In other words, the old words had pretty much ceased to describe what data processing is really doing these days.

But where the old terminology really falls down is in talking about sensor-driven data processing, such as in the Internet of Things.  There, large quantities of data must be ingested via updates in “almost real time”, and this is a very separate function from the “quick analytics” that must then be performed to figure out what to do about the car in the next lane that is veering toward one, as well as the deeper, less hurried analytics that allows the IoT to do better next time or adapt to changes in traffic patterns.

In HTAP, transactional means “update-heavy”, in the sense of both a business transaction and a sensor feed.  Analytical means not only “read-heavy” but also gaining insight into the data quickly as well as over the long term.  Analytical and transactional, in their new meanings, correspond to both the way data processing is operating right now and the way it will need to operate as Fast Data continues to gain tasks in connection to the IoT.

But there is also the word “hybrid” – and here is a valuable way of thinking about moving IT data processing forward to meet the needs of Fast Data and the IoT.  Present transactional systems operating as a “periodic dump” to a conceptually very separate data warehouse simply are too disconnected from analytical ones.  To deliver rapid analytics for rapid response, users also need “edge analytics” done by a database engine that coordinates with the “edge” transactional system.  Transactional and analytical systems cannot operate in lockstep as part of one engine, because we cannot wait as each technological advance in the transactional side waits for a new revision of the analytical side, or vice versa.  HTAP tells us that we are aiming for a hybrid system, because only that has the flexibility and functionality to handle both Big Data and Fast Data.
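
To make the “hybrid, loosely coupled” idea concrete, here is a deliberately toy sketch (no real database engine, just plain Python) of an update-heavy ingest path feeding a separate quick-analytics rollup; the names (ingest, quick_analytics, the lane-sensor example) are invented for illustration.

```python
from collections import defaultdict, deque

# Toy illustration of the hybrid idea: an update-heavy "transactional" ingest path
# loosely coupled to a read-heavy "analytical" rollup over recent data.
recent = defaultdict(lambda: deque(maxlen=1000))   # per-sensor operational store

def ingest(sensor_id, value):
    """Transactional side: absorb a sensor update in near real time."""
    recent[sensor_id].append(value)

def quick_analytics(sensor_id):
    """Analytical side: a fast rollup (here, a moving average) over recent readings."""
    readings = recent[sensor_id]
    return sum(readings) / len(readings) if readings else None

for reading in (0.4, 0.5, 2.9):                    # e.g., gap readings from a lane sensor
    ingest("lane_sensor_7", reading)
print(quick_analytics("lane_sensor_7"))            # 1.266...
```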

The Bottom Line


I would suggest that IT shops looking to take next steps in IoT or Fast Data try adopting the HTAP mindset.  This would involve asking oneself:

·         To what degree does my IT support both transactional and analytical processing by the new definition, and how clearly separable are they?

·         Does my system for IoT involve separate analytics and operational functions, or loosely-coupled ones (rarely today does it involve “one database fits all”)?

·         How well does my IT presently support “rapid analytics” to complement my sensor-driven analytical system?

If your answer to all three questions puts you in sync with HTAP, congratulations:  you are ahead of the curve.  If, as I expect, in most cases the answers reveal areas for improvement, those improvements should be a part of IoT efforts, rather than trying to patch the old system a little to meet today’s IoT need.  Think HTAP, and recognize the road ahead.