By Pierre Cordelle, Senior Associate at Aster
After the first episode, "Software is eating the world," the second opus, "AI transforms all sectors," is now out. But let's not stop at this noisy tagline. AI does not transform all companies. And AI does not transform all sectors as fast as one might think. The main reason is simple: deploying artificial intelligence is resource-intensive and challenging for most businesses. In particular, companies have a hard time attracting and recruiting data-science talent. Lack of speed, lack of budget… startups look much more attractive to many young graduates. Large corporates such as Air France or Renault are trying to spread AI within their organizations by launching agile, well-funded internal entities (a.k.a. digital factories) that mimic start-up culture. One thing is for sure: for those who cannot afford to create such entities, meaning most SMEs and plenty of large corporates that are lagging behind, hiring experienced data-science profiles is a nightmare. And transforming their organization with AI seems a distant dream.
Things could change: computing performance keeps increasing as fast as costs drop, AI remains a field where open source is never far from the state of the art, and AI models require less and less training data. These three factors have enabled the development of platforms that facilitate, or even automate, the creation of machine learning applications: a kind of AI to create AI applications. These so-called Automated Machine Learning (autoML) platforms aim at enabling data scientists, or even any engineer, to build and use AI models.
To understand how AI automation can be enabled, one needs to go back to the basics and understand how AI applications are usually created:
- Step 1. Design and Discovery: identify a business opportunity, estimate the value creation potential, and estimate the feasibility based on the data-sets that are available.
- Step 2. Development of ML Models: clean the data-sets, annotate them if needed, define the output of your application, find the right models and algorithms to estimate the output with the best accuracy.
- Step 3. Deployment of ML Models: create a prototype that turns your algorithm into a usable application, test it, integrate it into the workflows of your customers or employees, and drive the change in behavior.
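To make Steps 2 and 3 concrete, here is a minimal sketch using scikit-learn (the library choice and the toy data-set are assumptions; the article does not name a specific toolkit). Cleaning, modeling, and the deployable artifact collapse into one pipeline object:

```python
# Hypothetical illustration of Steps 2-3 with scikit-learn on a toy data-set.
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 2: clean the data-set and fit a model
model = Pipeline([
    ("impute", SimpleImputer()),       # fill in missing values
    ("scale", StandardScaler()),       # normalize features
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Step 3: the fitted pipeline is the artifact you ship and integrate
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Everything before the estimator is exactly the manual, repetitive work that autoML platforms aim to take off the data scientist's plate.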
Finding, testing, and training a lot of algorithms at Step 2 is time-consuming. And finding the right one within a limited number of iterations is the privilege of a few experts. The promise of autoML is to automate or accelerate these repetitive, costly, low-added-value tasks. This is enabled by a significant increase in computing capabilities and in the availability of algorithms. Additionally, having a single tool covering the last two steps reduces time-consuming data manipulation.
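At its core, this automated search is a loop: try many candidate algorithms, score each by cross-validation, keep the best. A hand-rolled sketch of that idea, again assuming scikit-learn (real autoML platforms add smarter search, hyperparameter tuning, and ensembling on top):

```python
# Naive "autoML in a loop": cross-validate several candidate models, keep the best.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}

# Score every candidate with 5-fold cross-validation
scores = {name: cross_val_score(est, X, y, cv=5).mean()
          for name, est in candidates.items()}

best = max(scores, key=scores.get)
print(f"best model: {best} ({scores[best]:.3f})")
```

Each pass through this loop is exactly the kind of compute-bound, low-judgment work that benefits from the hardware and algorithm-availability trends described below.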
DataRobot, a US-based company that recently raised an impressive $100m Series D round, and H2O.ai are the early leaders in this new category of autoML tools. And the autoML trend is not expected to vanish as fast as it came. Three reasons, and three recommended readings:
- Billions have been invested in start-ups developing specific chips for AI computing in 2017 and 2018, not to mention the endeavors of cloud giants to build their own chips, such as Google with its TPU. Moore's Law may be running out of steam, but the computing capabilities available to tackle AI challenges are still progressing at a very high pace. AutoML platforms that test and learn many algorithms will get faster as hardware performance increases.
- The AI community is mostly built around open-source frameworks provided by tech giants, such as Google with its TensorFlow framework. Open-source frameworks attract contributions and research papers from the community, which accelerate the development of the framework and thus increase user adoption, enabling a virtuous cycle. AutoML platforms will benefit from the wide availability of this intense research activity.
- Transfer Learning technologies reduce data requirements. Models that have been trained on extensive data-sets for a specific task can be a suitable starting point for another problem. Models trained for visual recognition, for instance, are sensitive to shapes, edges, etc., and are thus an interesting common ground for any visual recognition application. By reducing the need to label large data-sets, Transfer Learning lowers the barriers for any autoML platform.
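The transfer-learning idea can be illustrated in miniature with scikit-learn (a deliberately toy set-up, not how production transfer learning is done): pre-train a small network on a "source" task with plenty of labels, freeze its hidden layer as a feature extractor, then train a classifier for a different "target" task from only a handful of labeled examples.

```python
# Toy transfer learning: reuse a network pre-trained on digits 0-4
# as a frozen feature extractor for classifying digits 5-9.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
src, tgt = y < 5, y >= 5   # source task: digits 0-4; target task: digits 5-9

# Pre-train a small network on the label-rich source task.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X[src], y[src])

def features(X):
    # Frozen hidden layer of the pre-trained network (ReLU activations).
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Fine-tune on the target task with only 50 labeled examples.
Xtr, Xte, ytr, yte = train_test_split(X[tgt], y[tgt],
                                      train_size=50, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(features(Xtr), ytr)
acc = clf.score(features(Xte), yte)
print(f"target-task accuracy from 50 labels: {acc:.2f}")
```

The pre-trained hidden layer has already learned stroke-like features that transfer to the unseen digits, which is why a few dozen labels suffice, exactly the effect that lowers the labeling barrier for autoML platforms.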
As the market gets larger, subcategories are emerging: some platforms focus on learning from a specific data type (Clarifai, images), others focus on a specific sector (Owkin, the medical sector).
We expect the creation of industry-specific platforms that would stand out from others in their ability to deploy freshly created AI applications in poor-connectivity environments ("on the edge"), to enable the creation of applications from smaller datasets, and to avoid the black-box effect by pinpointing the root causes of a prediction. One thing is for sure: while these platforms can greatly accelerate the work of data scientists, their simple user interfaces should also widen the target group of users to business analysts and any "dataholic" engineer, solving in some cases the tricky hiring challenge of many companies.
This does not mean that data scientists will become useless. It means that their value creation will, in many companies, refocus on ideation and the design of applications. And it also means that the many companies that may have fallen by the wayside of the AI transformation have an opportunity to get back on track with the leaders.
More info on AI chips from the Hardware Club blog: https://blog.hardwareclub.co/theres-an-investment-frenzy-over-ai-chip-startups-d9b5ea42b5c4
More info on open-sourced AI from Patrick Shafto, Associate Professor @ Rutgers University Newark: https://theconversation.com/why-big-tech-companies-are-open-sourcing-their-ai-systems-54437
More info on Transfer Learning from the Integrate.ai blog: https://medium.com/the-official-integrate-ai-blog/transfer-learning-explained-7d275c1e34e2