Blessing of dimensionality at the edge and geometry of few-shot learning
Affiliations: University of Leicester
Lobachevsky University, Russia
St Petersburg State Electrotechnical University, Russia
University College London
Northeastern University, China
Norwegian University of Science and Technology, Norway
University of Derby
Abstract: In this paper we present theory and algorithms that enable classes of Artificial Intelligence (AI) systems to continuously and incrementally improve over time (more specifically, to remove classification errors) with a priori quantifiable guarantees. This is distinct from state-of-the-art machine learning, AI, and software approaches. The theory enables the construction of few-shot AI correction algorithms and provides conditions justifying their successful application. A further feature of the approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples, while at classification time the computational complexity is bounded by a few inner product calculations. Moreover, the implementation is shown to be highly scalable. This makes it viable for deployment in applications where computational power and memory are limited, such as embedded environments, and it enables fast on-line optimisation using improved training samples. The approach is based on concentration of measure effects and stochastic separation theorems, and is illustrated with an example on the identification of faulty processes in Computer Numerical Control (CNC) milling and with a case study on the adaptive removal of false positives in an industrial video surveillance and analytics system.
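The abstract's claim that classification-time cost reduces to a few inner products can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper builds correctors from stochastic separation theorems, whereas here a simple centred linear functional (a hypothetical `corrector_fires` test with synthetic Gaussian data) shows why, in high dimension, a single error sample is typically separable from the rest of the data by one inner-product threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional "legacy" data: samples the deployed system already handles.
n, d = 1000, 200
X = rng.standard_normal((n, d))

# A single labelled error sample to be corrected (the few-shot setting).
x_err = rng.standard_normal(d)

# Build a linear functional pointing from the data mean towards the error.
mu = X.mean(axis=0)
w = x_err - mu
w /= np.linalg.norm(w)

# Threshold midway between the legacy cloud's largest projection and the
# error sample's projection onto w.
proj = (X - mu) @ w
theta = (proj.max() + (x_err - mu) @ w) / 2.0

def corrector_fires(x):
    # One inner product per query: flag x for corrected handling if it lies
    # on the error side of the separating hyperplane.
    return (x - mu) @ w > theta

# The corrector catches the error sample while leaving the legacy data
# essentially untouched, which is the "blessing of dimensionality" at work.
false_positive_rate = np.mean([corrector_fires(x) for x in X])
```

Training here is a single pass over the data (linear in the number of samples), and each query costs one inner product plus a comparison, matching the complexity profile described in the abstract.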
Citation: Tyukin, I.Y., Gorban, A.N., McEwan, A.A., Meshkinfamfard, S. and Tang, L., 2021. Blessing of dimensionality at the edge and geometry of few-shot learning. Information Sciences, 564, pp. 124-143.