Recurrent neural network
Recurrent neural networks (RNNs) are a form of neural network that recognizes patterns in sequential data by maintaining a contextual memory of previous inputs. They have been applied to many types of sequential data, including text, speech, video, music, genetic sequences and even clinical events.
RNNs can be contrasted with simple 'feed forward' neural networks. Once a feed forward network is trained, each input is processed independently: running multiple inputs through it in sequence does not change how any individual input is handled. Recurrent neural networks instead contain 'loops' (self loops, feedback loops and backward connections) that feed the network's previous state back in alongside the next input, so that information received earlier in a sequence provides context for what follows.
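The difference can be sketched in a few lines of Python; the weights, layer sizes and function names below are illustrative assumptions for this sketch rather than any particular library's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, chosen arbitrarily for this sketch.
input_size, hidden_size = 4, 8

# Feed forward layer: the output depends only on the current input x.
W_ff = rng.normal(size=(hidden_size, input_size))
b_ff = np.zeros(hidden_size)

def feedforward_step(x):
    return np.tanh(W_ff @ x + b_ff)

# Recurrent layer: the output also depends on the previous hidden
# state h_prev, which is fed back at every step -- this is the 'loop'.
W_xh = rng.normal(size=(hidden_size, input_size))
W_hh = rng.normal(size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

x = rng.normal(size=input_size)
h0 = np.zeros(hidden_size)
print(feedforward_step(x).shape, rnn_step(x, h0).shape)  # (8,) (8,)
```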
Recurrent neural networks in radiology
A recurrent network's ability to 'remember' previous inputs is known as its 'memory'; this memory is a powerful tool when assessing data that requires context.
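As a rough illustration of this memory, using the same assumed recurrence as the sketch above, two sequences that end with an identical input but differ earlier on yield different hidden states, because the earlier context is carried forward:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8
W_xh = rng.normal(size=(hidden_size, input_size))
W_hh = rng.normal(size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # Same recurrence as in the sketch above.
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

def encode(sequence):
    # Run the recurrence over a whole sequence; the final hidden
    # state summarizes everything seen so far.
    h = np.zeros(hidden_size)
    for x in sequence:
        h = rnn_step(x, h)
    return h

# Both sequences end with the same input, but their earlier inputs
# differ, so their final hidden states differ (almost surely).
last = rng.normal(size=input_size)
seq_a = [rng.normal(size=input_size), last]
seq_b = [rng.normal(size=input_size), last]
print(np.allclose(encode(seq_a), encode(seq_b)))  # False
```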
Certain types of imaging are sequential in nature, such as ultrasound video. In theory, even individual 2D images can be treated as sequential patterns if they are split into a sequence of smaller images (see the sketch below). However, RNNs are currently more widely used in areas of radiology related to language. Their most common use in many radiology practices is in the speech recognition software packages that radiologists use to create and transcribe reports. RNNs have also improved the annotation of disease from electronic health record radiology reports, and show potential for generating text reports to accompany abnormality detection algorithms.
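One possible way to turn a 2D image into such a sequence, purely as an illustration (the patch size and row-by-row ordering here are arbitrary choices, not a standard recipe), is to cut it into flattened patches that could then be fed to a recurrent step one at a time:

```python
import numpy as np

def image_to_patch_sequence(image, patch_size=8):
    # Split the image into non-overlapping square patches, flatten
    # each patch, and read them in a fixed order (row by row).
    h, w = image.shape
    patches = []
    for r in range(0, h - h % patch_size, patch_size):
        for c in range(0, w - w % patch_size, patch_size):
            patch = image[r:r + patch_size, c:c + patch_size]
            patches.append(patch.reshape(-1))
    return patches  # each element could be one input x_t to an RNN step

image = np.zeros((64, 64))               # placeholder 64 x 64 image
sequence = image_to_patch_sequence(image)
print(len(sequence), sequence[0].shape)  # 64 patches, each of length 64
```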