Randomly connected recurrent neural networks (RNNs) serve as a parsimonious model of cortical dynamics and can be used to model memory, decision making, and cognition. Machine-learning-based variants of RNNs have recently gained popularity due to their utility in a wide array of applications, including speech recognition, medical outcome prediction, handwriting recognition, and robot control. However, the methods used in these applications are either too far removed from biological RNNs or otherwise not biologically plausible. A biologically plausible method could both provide insight into how biological RNNs function and improve performance in artificial systems. Here I describe my contributions toward biologically plausible algorithms for RNNs. I begin with an extension of balanced network theory, which provides a parsimonious description of neural dynamics. Existing RNN algorithms rely on simplified neuron models because more complex, realistic neuron models are difficult to train. I show that accounting for the spatially dependent connectivity observed in real cortical networks increases the reliability of the reservoir network, allowing it to work in realistic spiking networks. Finally, I develop a biologically inspired RNN algorithm that resolves the issue of unrealistic supervision found in most existing algorithms.
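
As background for the balanced network extension mentioned above, the following is a minimal statement of the classical balance condition (van Vreeswijk and Sompolinsky, 1996), not the extension developed here; the notation ($K$ inputs per neuron, synaptic strengths $J_E$, $J_I$ under the standard $1/\sqrt{K}$ scaling, and population rates $r_E$, $r_I$) is illustrative and not taken from the abstract itself:

\[
\mu \;=\; \sqrt{K}\,\bigl(J_E\, r_E \;-\; J_I\, r_I\bigr) \;=\; O(1)
\quad \text{as } K \to \infty
\;\;\Longrightarrow\;\;
J_E\, r_E \;\approx\; J_I\, r_I .
\]

That is, the large excitatory and inhibitory inputs cancel at leading order, leaving $O(1)$ fluctuations that drive the irregular, asynchronous firing observed in cortex.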