Is AI Getting Boring? And Some Thoughts on the Future of AI.

Author: William Vorhies

Summary:  Things are getting repetitious and that can be boring.  Still, looking at lessons from the 90s it’s clear there are at least one or two decades of important economic advances that will be based on our current AI/ML.  Then some thoughts on where the next really huge breakthrough will come from that will restore our initial excitement.

 

Is AI getting boring?  Is it just me, or do we seem to be doing the same things over and over again?  Let me explain.  If you’ve only been in the data science profession for, say, four or five years, then everything no doubt still seems new and exciting.  But these days we are well into the age of implementation, and we’re definitely on a long plateau in terms of anything transformative.

I hung out my shingle as a data scientist in 2001.  Those were pretty bleak days for the art.  We only had structured data to work with, and the only regular adopters were in direct mail, big finance, and insurance.  Every trip to a new prospect started with a long explanation of what data science is (then called predictive modeling) and why they needed it.  Not that many takers.

But starting with the open source advent of Hadoop in 2008, things really started to bust open.  Not all at once, of course, but now we had unstructured and semi-structured data to work with.  Combined with advances in chips and compute speed thanks to parallelization, things like speech recognition, text processing, image classification, and recommenders became real possibilities.

It took the next eight or nine years to fully develop, but by 2017 speech and text recognition had reached 95% accuracy, beyond the threshold of human performance.  And CNNs were rapidly knocking down the records for image classification.  These days, who isn’t speaking to Alexa or Siri as a primary interface with their devices?

For many of those years I would trek up to San Jose for the annual Strata conference in March and breathlessly report all the breakthroughs.  But in case you didn’t notice, by 2017 all that was through.  In 2017 we abandoned Hadoop for Spark with its ever more integrated not-only-SQL stack, when NoSQL and SQL really did come back together.  And by 2018, after Strata, I had to report that there were no new eye-catching developments anywhere to be seen.

We saw it coming in 2017 and by 2018 it was official.  We’d hit maturity, and now our major efforts were aimed at either making sure all our powerful new techniques worked well together (converged platforms) or making a buck from those massive VC investments in them.

Now we’re in the age of implementation, learning to apply AI/ML for every dollar that it’s worth.  This is actually the most important time because we take our matured toys and apply them with industrial strength to enhance productivity, profits, and the customer experience.

The major advances of the last couple of years are incremental improvements in cloud compute and DNNs, the emergence of automated machine learning (AML), and the maturity of the AI Platform strategy.

Analytic platforms are ever more integrated.  Selecting and tuning the best model, and even data prep, are much less of a chore.  And if you look at the most recent Gartner Magic Quadrant for these platforms, practically everyone is now in the upper right quadrant.  Everyone’s a winner.  Take your pick.
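
To make the automated model selection and tuning concrete, here is a minimal sketch of the kind of search these platforms now run for you behind the scenes.  It uses scikit-learn’s GridSearchCV; the candidate models, parameter grids, and synthetic data are purely illustrative, not a recipe for any particular platform.

```python
# Minimal sketch of automated model selection and tuning.
# Candidate models, parameter grids, and data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": (LogisticRegression(max_iter=1000),
                            {"C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]}),
}

best_name, best_search = None, None
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5)   # cross-validated tuning
    search.fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(best_name, best_search.best_params_, best_search.score(X_test, y_test))
```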

 

Lessons from the 90s

The broad adoption of AI/ML and cloud compute is the most powerful economic driver of the global economy and will likely remain so for at least a decade or two.  I, along with many VCs, have been deeply influenced by Carlota Perez’s short but convincing treatise “Technological Revolutions and Financial Capital”.  She makes the academic and historical case for what we see with our own eyes: the power of these two technologies as the primary drivers of economic growth.

If we look back we can see exactly the same thing occurred in the 90s, when computerization swept across the global economic scene.  At the time we called it ‘automation’, but given what we know now in the age of AI, we should see it as the first widespread application of computers in business.

There are lots of parallels with what’s going on now.  In the early and mid-90s, when I was a director in the consulting practices of the Big 5, we were coming out of a fairly toothless phase of TQM (Total Quality Management) and into the more productive techniques of process improvement.  Still, no computer automation was applied.

In 1993 Thomas Davenport wrote his seminal work “Process Innovation: Reengineering Work through Information Technology” and set us on the road to adding computer automation to everything.

Similar to where we’ve been in AI/ML for the last five years, the methods of reengineering that Davenport espoused required a radical, ground-up reimagining of major processes, followed by a grueling and expensive one to two years of custom development using the then-nascent computer automation techniques.

This was all about breaking new ground where no patterns yet existed and only the richest and bravest companies dared lead the way.  Failures were rampant. 

That sounds a lot like our most recent experience in AI/ML, where the majority of models fail to make it to production.  The only good news is that the financial scale of these failures is measured in man-weeks of time over a few months, instead of armies of programmers spread over 12 to 24 months, as was then the case.

Also similar to today, within the space of a few short years vendors began packaging up reusable bits of these computer-automated processes and selling them across similar industries.  A little up-front configuration and you could reuse the solutions that others had paid for.

More important, and absolutely parallel to today, the vendors’ programmers (in our case, data scientists) maintained and continued to improve the tough bits, so investment in scarce human resources was dramatically reduced, as was project risk.

Initially these reusable programs were aimed at fairly specific processes like finance, HR, and MRP.  But broader configuration options during implementation let these industry- and process-specific programs be used across a wider range of cases.

Customers’ actual experience with that initial setup and configuration was typically terrible.  It took a long time.  It was expensive.  Lots of mistakes were made along the way.  And once you got it up and running, the process had been so expensive, physically grueling, and now so completely integrated into your business that the thought of switching to a competitor’s new and improved platform was almost unthinkable.  Good for the vendors.  Bad for the customers.

I trust you see where this is going.  Eventually these platforms were rolled up into expansive ERPs (enterprise resource planning platforms) now dominated by PeopleSoft, Oracle, SAP, and Workday.

History is our guide.  These are the forces at work in the broader AI/ML and cloud compute market today.  The first in will be difficult to unseat.  The next few years will be all about M&A rollups and the battle for share, not differentiation through newly discovered techniques.

 

Where to from Here?

The ERP adoption model from the 90s ran through the early 00s, when essentially everyone had one.  Curiously, that’s almost exactly the time AI/ML got seriously underway.  There’s another 10 or 20 years of serious adoption here that will be good for business, good for consumers, and good for many of your careers.

To come back to the original theme though, when and where can we expect the next transformative breakthrough in data science?  When can we data scientists really get excited again?

ANNs are not likely to be where it’s at.  I’m not alone in that suspicion and many of our best thinkers are wondering if this architecture can continue to incrementally improve.  Maybe we need a radical rethink.

The problems are well known.  Too much training data.  Too much compute.  Too much cost and time.  And even with techniques like transfer learning, the models don’t adapt well to change and can’t carry what they’ve learned from one domain to another.
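
For reference, conventional transfer learning usually amounts to freezing a pretrained backbone and retraining only a small head on the new task, which is exactly why the learning rarely carries beyond closely related domains.  Here is a rough PyTorch-style sketch, assuming a recent torchvision and a hypothetical 10-class target task.

```python
# Rough sketch of conventional transfer learning: reuse a pretrained
# backbone, freeze its weights, and retrain only a new output layer.
# The 10-class target task is a placeholder assumption.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on ImageNet

for param in backbone.parameters():        # freeze everything learned on the source task
    param.requires_grad = False

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head for the target task

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The training loop over the (much smaller) target dataset would go here:
# for images, labels in target_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(backbone(images), labels)
#     loss.backward()
#     optimizer.step()
```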

Assuming the next great thing has even been imagined yet (perhaps it hasn’t), my bet is on neuromorphic chips.  We need to get past the architecture where every neuron is connected to every other neuron and fires every time.  That’s not true of human brains, so why should it be true of our ANNs?

Also, there’s plenty of evidence and work being done in neuromorphics to use not just the on/off status of a neuron, but also the signal that might be embedded in the string of electrical spikes that occur when our neurons fire.  There is probably useful compute or data in the amplitude and number of those spikes, and in the time lag between them.  We just need to figure it out.
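
To make the contrast concrete, here is a toy leaky integrate-and-fire neuron, the usual starting point in neuromorphic work.  Unlike an artificial neuron that emits a value on every pass, it fires only when its accumulated potential crosses a threshold, so the information lives in when and how often spikes occur.  The constants below are arbitrary, chosen only so the sketch actually spikes.

```python
# Toy leaky integrate-and-fire (LIF) neuron.  It fires only when its
# accumulated potential crosses a threshold, so information is carried
# in spike timing and spike counts, not in a value on every pass.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, threshold=1.0, reset=0.0):
    v = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)      # leak toward rest, integrate the input
        if v >= threshold:               # threshold crossed: emit a spike
            spike_times.append(t * dt)
            v = reset                    # reset after firing
    return spike_times

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.12, size=200)   # noisy input drive
print(lif_neuron(current))                   # spike times encode the signal
```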

Even once we get good neuromorphic chips, I’m not worried about sentient robots in my lifetime.  Our most advanced thinking about neuromorphics still has them learning only from what they observe (training), not inventing the imaginary social structures that humans use to cooperate, like religions, nation states, or limited liability corporations.

There are also some techniques in data science that we’ve just passed by.  I got my start in data science working with genetic algorithms.  Through the first 10 or 15 years of my experience I could get a better model faster every time with an advanced genetic algorithm.

At the time, when ANNs were too slow and too compute-hungry, genetic algorithms were briefly in the ascendance.  But largely due to commercial indifference, and to faster, cheaper compute with parallelization, ANNs made a comeback.  I wouldn’t be too quick to write off techniques like genetic algorithms, which closely mimic nature, as potential pathways to the future.
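
For readers who never crossed paths with them, a genetic algorithm evolves a population of candidate solutions through selection, crossover, and mutation.  A stripped-down sketch follows; the toy fitness function is a stand-in for whatever you would really optimize, such as model hyperparameters.

```python
# Stripped-down genetic algorithm: evolve a population of candidate
# solutions by selection, crossover, and mutation.  The toy fitness
# function stands in for a real objective (e.g. model hyperparameters).
import random

def fitness(genome):
    # Toy objective: prefer genomes whose values sum close to 10.
    return -abs(sum(genome) - 10.0)

def evolve(pop_size=50, genome_len=5, generations=100, mutation_rate=0.1):
    population = [[random.uniform(0, 5) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                  # crossover: splice two parents
            if random.random() < mutation_rate:        # mutation: random perturbation
                child[random.randrange(genome_len)] += random.gauss(0, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```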

Some folks will point at GANs as a way forward.  I don’t see it.  They can create hypothetically new images, and therefore training data, but only to the extent that they’ve been trained on existing real-world objects.  Once again, there’s no potential to transfer that learning to a wholly different object set.

Quantum computing?  Maybe.  My reading in the area so far is that it does the same things we do now (classification, regression, and the like), just a whole lot faster.  Also, I think commercial adoption based on sound financial business cases is further down the road than we think.

Still, there is always reason to hope that you and I will be around for the next big breakthrough innovation.  Something so startling that it will knock our socks off and at the same time make us say, “oh, that’s so obvious, why didn’t I think of that?”

Also, there are some very solid and satisfying careers to be made in making the most of what we’ve got.  That’s still a very worthwhile contribution.

 

 

Other articles by Bill Vorhies.

About the author:  Bill is a Contributing Editor for Data Science Central.  Bill is also President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist since 2001.  His articles have been read more than 2.5 million times.

Bill@DataScienceCentral.com or Bill@Data-Magnum.com
