Intro
Deep connections between computer science and neuroscience
Many neuroscientific hypotheses are articulated in terms of types and features of neurons
Connecting networks to spikes through Hawkes processes
Weighted Hawkes process dynamics
Reasoning about the posterior distribution of networks and latent variables given spikes
Inferring a spatial map from spike trains alone (latent variables: locations; adjacency: distance-dependent; weights: non-negative)
Application to real primate retina data: inferring locations
Using hypotheses of neural computation to guide probabilistic modeling
Specifying the parameters and dependencies
"Recurrent" switching linear dynamical system (rSLDS)
Recurrent dependencies carve up continuous space
Hierarchical SLDS shares parameters across worms while allowing individual variability
Inferred discrete states segment continuous trajectories into simple loops
Hierarchical rSLDS is an automatic alternative for state segmentation
Denoising observed activity and predicting unobserved activity
Bayesian inference
Standard SLDS admits block conditional updates
Recurrent dependencies break conjugacy
Outline
Computational challenges at all stages of the scientific process
Expanding the frontier of flexible and interpretable machine learning for science
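As a companion to the Hawkes-process slides above, here is a minimal sketch of the conditional intensity of a weighted multivariate Hawkes process with exponential kernels. This is an illustrative assumption, not the talk's actual model: the function name `hawkes_intensity`, the binary adjacency matrix `A`, the non-negative weight matrix `W`, and the shared timescale `tau` are all hypothetical placeholders. The intensity of neuron n is lambda_n(t) = mu_n + sum over past spikes (s, m) of A[m,n] * W[m,n] * exp(-(t - s)/tau), which is how the adjacency and weights connect network structure to spikes.

```python
import numpy as np

def hawkes_intensity(t, spikes, mu, A, W, tau=1.0):
    """Conditional intensity of each neuron at time t (illustrative sketch).

    spikes: list of (time, neuron_index) pairs.
    mu:     background rates, shape (N,).
    A, W:   (N, N) binary adjacency and non-negative weight matrices;
            entry [m, n] governs the influence of neuron m on neuron n.
    """
    lam = mu.astype(float).copy()
    for s, m in spikes:
        if s < t:  # only spikes before t contribute
            # Row m of A * W gives neuron m's (gated, weighted) influence
            # on every neuron, decayed exponentially since the spike.
            lam += A[m] * W[m] * np.exp(-(t - s) / tau)
    return lam

# Toy example: two neurons, neuron 0 excites neuron 1 with weight 2.
mu = np.array([0.5, 0.5])
A = np.array([[0, 1], [0, 0]])
W = np.array([[0.0, 2.0], [0.0, 0.0]])
spikes = [(0.0, 0), (0.2, 0)]
lam = hawkes_intensity(1.0, spikes, mu, A, W)
print(lam)  # neuron 1's rate is elevated by neuron 0's earlier spikes
```

In the distance-dependent model sketched in the outline, `A[m, n]` would itself be a latent Bernoulli variable whose probability decays with the distance between the inferred locations of neurons m and n.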