Can we rely on artificial intelligence?

Article by Sylvie Gamet, originally published in Forbes France on 11/11/2019

Artificial intelligence (AI) is a field that leaves few people indifferent and stirs up the wildest fantasies, both anxiety-inducing and utopian. So much so that it is difficult to form an opinion. We met Renaud Allioux, co-founder and CTO of Earthcube, a leading French start-up in this field.

Renaud Allioux’s experience is all the more interesting because it belongs to the controversial field of AI for defense, or “DefenseTech”. One immediately thinks of the freedom-curtailing practices coming out of China, killer robots, armies of drones… That is not the case here. For two years, this French start-up has been developing an artificial intelligence solution designed to monitor strategic sites from satellite imagery. The objective is to save clients time in monitoring and analyzing geopolitical situations, for example.

According to Renaud Allioux, to be exploited in industry on a large scale, artificial intelligence must be able to guarantee high reliability. This level of performance is difficult to achieve because many parameters, often very specific ones, are at stake. The role of an AI-based product is to integrate all these parameters and process them through advanced algorithms trained on a large volume of relevant business data, with each algorithm tested on real data sets while guaranteeing a high level of performance. In addition, it is imperative to identify possible biases in the algorithms, whether introduced by the code, the developers, the desired results or the input data. The output the AI then produces must be trustworthy and relevant enough to be useful and to add value for industry. In the field of defense in particular, the margin for error must be minimal, as each decision raises stakes measured in geopolitical tension or human lives.

Earthcube is a start-up that develops AI at this level of performance (less than 10% error in the field, not on simulated or laboratory data), whereas other start-ups most often use open-source modules offered by major market players. According to Mr. Allioux, these modules are better suited to prototypes: as standard components not trained on business data, they produce very unreliable results, subject to uncontrolled bugs and biases that are difficult for non-experts in the field to detect. A simulation showing a 90% performance rate on a laboratory prototype can fall to 25% on real business data sets.
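The gap Mr. Allioux describes between laboratory and real-world performance is a classic effect of distribution shift. As a purely illustrative sketch (the data, classifier and numbers here are invented, not Earthcube's), a model tuned on cleanly separated "lab" data can degrade sharply on messier "real" data drawn from a shifted distribution:

```python
import random

random.seed(0)

def accuracy(model, data):
    """Fraction of (x, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# "Lab" data: the two classes are cleanly separated around x = 0.5.
lab = [(random.uniform(0.0, 0.4), 0) for _ in range(100)] + \
      [(random.uniform(0.6, 1.0), 1) for _ in range(100)]

# "Real" data: the same classes, but shifted and overlapping,
# as operational data rarely matches curated training sets.
real = [(random.uniform(0.3, 0.7), 0) for _ in range(100)] + \
       [(random.uniform(0.5, 0.9), 1) for _ in range(100)]

# A trivial model "trained" (here: hand-tuned) on the lab data only.
model = lambda x: int(x > 0.5)

print(f"lab accuracy:  {accuracy(model, lab):.2f}")   # perfect on lab data
print(f"real accuracy: {accuracy(model, real):.2f}")  # noticeably lower
```

The point is not the specific numbers but the mechanism: a decision boundary that looks flawless on curated data offers no guarantee once the input distribution changes, which is why evaluation on real business data matters.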

Large-scale deployment of AI would, therefore, seem to be a relatively distant prospect.

AI thus proves to be a demanding tool that can deliver major progress in fields such as image recognition, autonomous vehicles, facial recognition and speech synthesis. However, to be relevant and efficient, AI can only demonstrate its full value today when modules are developed for a specific problem. Current attempts to design generic modules therefore do not seem to yield effective applications with any added value beyond buzz. Just as companies have built on particular specialties, and very few are good at doing everything, AI seems to face the same limitations. Even if we imagine a proliferation of specialized AIs, they would then have to be made to work together to obtain more coherent general behavior. Human beings therefore occupy an important place in the choices that are and will be made, choices strongly conditioned by ethical considerations to avoid bias, uncontrolled black boxes and the dangers of determinism. Who will decide that a given result from an AI, or a combination of AIs, is relevant and coherent? On what criteria should we base ourselves, knowing that everyone brings a different point of view to different situations? One need only consider the controversies that poison our society to grasp the complexity of the subject, and its dangers.

Can Europe influence the development of Artificial Intelligence?

There is no doubt that long-term work lies ahead to educate people about these issues and to raise awareness of the ethical dangers. Europe would undoubtedly have much to gain from being at the forefront of regulating these technological advances, as it has shown with the protection of personal data (GDPR). Such regulation must be realistic, certainly curbing freedom-destroying excesses (or worse), but striking a balance so as not to constrain too tightly the developments desirable for the common good.

In addition, Europe is one of the world’s leading providers of AI development experts. While many pursue careers under foreign flags, it is still reassuring to imagine that their outlook remains imbued with the moral values generally accepted in Europe.

To maintain a key role in this new era, the question of financing the programs and companies developing effective AI is also essential. Without the funding needed to deploy them, it will be difficult to shape the game against American or Asian players, let alone impose any form of regulation.

Whereas one might think that start-ups developing artificial intelligence are merely R&D factories serving the interests of larger players, Earthcube demonstrates in exemplary fashion that it is possible to develop a specialized, efficient artificial intelligence in two years and to interest leading players to the point of generating revenue and validating its business model. The start-up now has about forty employees and should grow to 100 in 2020, with an international roll-out planned. To finance its growth, it has raised €7 million and plans another fundraising round in the short to medium term. Its monthly recurring revenue is expected to grow by a factor of 8 to 10 by the end of 2019, compared with 2018.
