No Code AI, No Kidding Aye – Part II

Author: Monjima Nandi

Challenges addressed by No Code AI platforms

Building an AI model is challenging on three fundamental counts:

  1. Availability of relevant data in good quantity and quality: The less I rant about it, the better.
  2. Need for multiple skills: Building an effective and monetizable AI model is not the realm of the data scientist alone. It also needs data engineering skills and domain knowledge.
  3. Constant evolution of the ecosystem: New techniques, approaches, methodologies, and tools emerge all the time.

There is no easy way to address the first challenge, at least not yet. So, let us brush it under the carpet for now.

The need for multiple resources with complementary skills is an area where a no-code AI platform can add tremendous value. The average data scientist spends half their time preparing and cleaning the data needed to build models and the other half fine-tuning the model for optimum performance. No Code AI platforms (such as Subex HyperSense) can step in with automated data engineering and ML programming accelerators that go a long way towards reducing the need for a multi-skilled team. What is more, they empower even citizen data scientists to build competent AI models without knowing a programming language or having a background in data engineering. Platforms like HyperSense provide advanced automated data exploration, data preparation, and multi-source data integration capabilities through simple drag-and-drop interfaces, and they combine this with a rich visual representation of the results at every step, so one does not have to wait until the end to discover an error made early on and then go back and rework everything.
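To make that concrete, here is a minimal sketch, in plain Python with pandas, of the kind of multi-source integration and cleaning such platforms automate behind the drag-and-drop interface. The file names and column names are hypothetical, purely for illustration.

```python
# A hypothetical illustration of data preparation steps that a no-code
# platform would automate behind its drag-and-drop interface.
import numpy as np
import pandas as pd

# Multi-source integration: merge two assumed data sources on a shared key
usage = pd.read_csv("usage_records.csv")        # hypothetical file
customers = pd.read_csv("customer_master.csv")  # hypothetical file
df = usage.merge(customers, on="customer_id", how="left")

# Typical automated cleaning: de-duplicate, impute missing values,
# and tame a skewed numeric column before modelling
df = df.drop_duplicates()
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())
df["monthly_spend_log"] = np.log1p(df["monthly_spend"])

print(df.describe())  # a quick automated data-exploration summary
```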

As I briefly touched upon a while back, getting the data ready is one half of the battle won. The plethora of options on the other half is still perplexing: Is it a bird? Is it a plane? Oh no, it is Superman! In our context it would be more like: Is it DBSCAN? Is it a Gaussian Mixture? Oh no, it is K-Means! Feature engineering and experimenting with different algorithms to get optimal results is a specialized skill. It requires an in-depth understanding of the data set, domain knowledge, and the principles behind how various algorithms work. Here again, No Code AI platforms like HyperSense bring significant value to the table. With capabilities like autonomous feature engineering and multi-algorithm trial and benchmarking, I daresay they make building models almost child's play. Please do not get me wrong: I am not for a moment suggesting that these platforms will drive the technical data scientist role to extinction. On the contrary, they will make data scientists more efficient, giving them the superpowers to solve bigger problems in less time while managing and guiding teams of citizen data scientists through the more mundane, yet existentially important, problem statements.
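For the curious, here is a rough sketch of what "multi-algorithm trial and benchmarking" boils down to, written by hand with scikit-learn: fit K-Means, a Gaussian Mixture, and DBSCAN on the same data and compare silhouette scores. This illustrates the idea only and says nothing about how HyperSense implements it internally.

```python
# Benchmark three clustering algorithms on the same synthetic data set
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

candidates = {
    "K-Means": KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X),
    "Gaussian Mixture": GaussianMixture(n_components=4, random_state=42).fit_predict(X),
    "DBSCAN": DBSCAN(eps=0.9, min_samples=5).fit_predict(X),
}

for name, labels in candidates.items():
    # silhouette_score needs at least 2 clusters; DBSCAN labels noise as -1
    n_clusters = len(set(labels) - {-1})
    if n_clusters >= 2:
        print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
    else:
        print(f"{name}: found {n_clusters} cluster(s), score undefined")
```

The winner depends entirely on the shape of your data, which is exactly why automating the trial-and-benchmark loop is so valuable.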

So far, so good. Having brushed one challenge under the carpet and discussed the other, there is one more: the constant evolution of AI techniques, methodologies, tools, and technologies. Today, merely building a model that performs well on a pre-defined set of metrics does not cut ice anymore. It is not enough for a model to be simply accurate. As the AI landscape evolves, the chorus for explainability and accountability in models is reaching a fever pitch. Why did K-Means give you a better result than a Gaussian Mixture? Will you get the same result if a feature is modified or a new one added? Why did the model predict a similar outcome for most customers belonging to a certain ethnicity? Is the model replicating the biases and vagaries present in the historical data set, or those of the person building the model? If decision bias of any sort has crept into a business's day-to-day policies and practices, it is only natural that the data sets you work on will carry those biases, and the model you build will keep persuading you to make decisions with the same biases as before. As an organization striving to disrupt and transform your industry, it is pertinent that you identify and weed out such biases sooner rather than later, before your AI models hit scale and become a wild animal out of its cage.
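One simple way to spot the symptom described above, a model predicting similar outcomes for most members of one group, is to compare its positive-outcome rate across groups. The sketch below uses a tiny made-up dataframe; the column names and the tolerance threshold are illustrative assumptions, not drawn from any platform.

```python
# A minimal demographic-parity style check on a model's binary decisions
import pandas as pd

# Hypothetical predictions, with a sensitive group attribute per record
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "prediction": [1, 1, 1, 0, 0, 1, 0, 1],  # the model's binary decision
})

# Rate of positive outcomes per group
rates = df.groupby("group")["prediction"].mean()
print(rates)

# Flag a large gap between groups as worth investigating
if rates.max() - rates.min() > 0.2:  # assumed tolerance
    print("Warning: outcome rates differ materially across groups")
```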

As No Code AI platforms evolve, model explainability is already being addressed. Platforms like HyperSense give you the option to open up the proverbial 'black box' and peep inside to see why a model behaved the way it did. They give the analyst or the data scientist an opportunity to tinker with advanced settings and fine-tune them to meet the objectives. Model accountability and ethics is a different ball game altogether. It is not restricted to technology alone; it extends to the frailties of human beings as a species. I am sure the evolving AI ecosystem will eventually figure out a way to make the world free of human biases, but hey, where's the fun then? Human biases make the world interesting and despicable in equal measure, and I believe the holy grail for AI will be to strike a balance between the two.
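As one generic example of peeping inside the black box, permutation importance shuffles each feature in turn and measures how much the model's score drops. The sketch below uses scikit-learn on synthetic data; it shows a common explainability technique, not HyperSense's internals.

```python
# Permutation importance: which features actually drive the model's score?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {imp:.3f}")
```

Features whose shuffling barely moves the score contribute little; a model leaning heavily on a proxy for a sensitive attribute would show up here as well.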

Until then, let us empower more and more creative and business stakeholders to explore and unleash the true power of AI using No Code platforms like HyperSense so that the world can be a better place for all life forms.
