%0 Journal Article
%1 magnussen2023continuous
%A Magnussen, Birk Martin
%A Stern, Claudius
%A Sick, Bernhard
%D 2023
%I ThinkMind
%J International Journal On Advances in Intelligent Systems
%K imported itegpub isac-www machine_learning neural_nets continuous_kernel irregularly_sampled_data reflection_spectroscopy
%N 3&4
%P 43--50
%T Continuous Feature Networks: A Novel Method to Process Irregularly and Inconsistently Sampled Data With Position-Dependent Features
%U http://www.thinkmind.org/index.php?view=article&articleid=intsys_v16_n34_2023_3
%V 16
%X Continuous kernels are a recent development in convolutional neural networks. Such kernels are used to process data sampled at different resolutions as well as irregularly and inconsistently sampled data. Convolutional neural networks have the property of translational invariance (i.e., features are detected regardless of their position in the measurement domain), which is unsuitable if the position of detected features is relevant for the prediction task. However, the ability of continuous kernels to process irregularly sampled data remains desirable. This article introduces the continuous feature network, a novel method that utilizes continuous kernels to detect global features at absolute positions in the data domain. Through a use case in processing multiple spatially resolved reflection spectroscopy measurements, which are sampled irregularly and inconsistently, we show that the proposed method can process such data directly, without the additional preprocessing or augmentation that comparable methods require. In addition, we show that the proposed method achieves higher prediction accuracy than a comparable network on a dataset with position-dependent features. Furthermore, the proposed method is more robust to missing data than a benchmark network that uses data interpolation, allowing it to adapt to sensors with failed individual light emitters or detectors without retraining. The article shows how these capabilities stem from the continuous kernels used and how the number of trainable kernels affects the model. Finally, the article proposes using the continuous feature network as the basis for an interpretable model suitable for explainable AI.
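
As a rough illustration of the generic continuous-kernel idea the abstract describes, the sketch below computes one global, position-dependent feature from irregularly sampled data: a small MLP maps each absolute sample position to a kernel weight, and the feature is the weighted sum of the sample values. The class name, layer sizes, and weighted-sum readout are assumptions made for illustration; the paper's actual continuous feature network architecture may differ.

import torch
import torch.nn as nn

class ContinuousFeature(nn.Module):
    # Hypothetical sketch of one global feature over irregular samples.
    # The kernel is a function of absolute position, so the feature is
    # position-dependent rather than translation-invariant.
    def __init__(self, hidden: int = 32):
        super().__init__()
        # MLP mapping an absolute 1-D sample position to a kernel weight
        self.kernel = nn.Sequential(
            nn.Linear(1, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, positions: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # positions, values: (batch, n_samples); n_samples may vary
        # between calls, since the kernel MLP is evaluated pointwise.
        w = self.kernel(positions.unsqueeze(-1)).squeeze(-1)  # (batch, n_samples)
        return (w * values).sum(dim=-1)                       # (batch,)

Because the kernel is evaluated at whatever positions a measurement provides, the same trained module can be applied to a different or reduced set of sample positions without interpolation, which is consistent with the robustness to missing data described in the abstract.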