Generalized framework for liquid neural network upon sequential and non-sequential tasks

Abstract

This paper introduces a Generalized Liquid Neural Network (GLNN) framework, a novel neural network design that handles both sequential and non-sequential tasks. By leveraging the Runge-Kutta DOPRI method, the GLNN enables dynamic simulation of complex systems across diverse fields. Our research demonstrates the framework's capabilities through three key applications. In predicting damped sinusoidal trajectories, the GLNN outperforms the neural ODE by approximately 46.03% and the conventional LNN by 57.88%. In modelling non-linear RLC circuits, the framework achieves a 20% improvement in precision. Finally, in medical diagnosis through Optical Coherence Tomography (OCT) image analysis, our approach achieves an F1 score of 0.98, surpassing the classical LNN by 10%. These advancements mark a significant shift, opening new possibilities for neural networks in complex system modelling and healthcare diagnostics. This research advances the field by introducing a versatile and reliable neural network architecture.
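The sketch below illustrates the mechanism the abstract names: a liquid time-constant style hidden state whose dynamics are integrated with a Dormand-Prince (DOPRI) adaptive Runge-Kutta solver, here SciPy's RK45, driven by a damped sinusoidal input like the trajectory task above. It is a minimal illustration only; the layer sizes, weights, and the specific update rule are assumptions for demonstration and do not reproduce the paper's exact GLNN formulation.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 8                                  # illustrative sizes
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent weights
U = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
b = np.zeros(n_hidden)                                 # bias
tau = 1.0                                              # base time constant
A = np.ones(n_hidden)                                  # target (reversal) state

def input_signal(t):
    # Damped sinusoid, echoing the trajectory-prediction task in the abstract
    return np.array([np.exp(-0.1 * t) * np.sin(t),
                     np.exp(-0.1 * t) * np.cos(t)])

def ltc_rhs(t, x):
    # Liquid time-constant dynamics: the gate f modulates both decay and drive
    f = np.tanh(W @ x + U @ input_signal(t) + b)
    return -(1.0 / tau + f) * x + f * A

x0 = np.zeros(n_hidden)
sol = solve_ivp(ltc_rhs, (0.0, 10.0), x0,
                method="RK45",          # RK45 = Dormand-Prince 5(4), i.e. DOPRI
                dense_output=True, rtol=1e-6, atol=1e-8)
print(sol.y[:, -1])                     # hidden state after 10 time units

In practice the weights W, U, and b would be trained end to end (for example by backpropagating through the solver or with the adjoint method); here they are fixed random values so the example stays self-contained.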

Keywords

Generalized Liquid Neural Network (GLNN), neural ordinary differential equations (ODEs), Runge-Kutta DOPRI 5 method, non-sequential task processing, Optical Coherence Tomography (OCT) image classification

Link to Publisher Version (URL)

https://doi.org/10.3390/math12162525
