1 Introduction

Building predictive cognitive models of the world is often regarded as the essence of intelligence. It is one of the first skills that we develop as infants. We use these models to enhance our ability to learn more complex tasks, such as navigating or manipulating objects [50].

Unlike in humans, developing prediction models that allow autonomous vehicles to anticipate the future remains hugely challenging. Road agents have to make reliable decisions based on forward simulation to understand how relevant parts of the scene will evolve. Modelling the future is incredibly difficult for several reasons: natural-scene data is rich in details, most of which are irrelevant for the driving task; dynamic agents have complex temporal dynamics, often governed by unobservable variables; and the future is inherently uncertain, as multiple futures might arise from a unique and deterministic past.

Current approaches to autonomous driving individually model each dynamic agent by producing hand-crafted behaviours, such as trajectory forecasting, to feed into a decision making module [8]. This largely assumes independence between agents and fails to model multi-agent interaction. Most works that holistically reason about the temporal scene are limited to simple, often simulated environments or use low dimensional input images that do not have the visual complexity of real world driving scenes [49]. Some approaches tackle this problem by making simplifying assumptions to the motion model or the stochasticity of the world [8, 42]. Others avoid explicitly predicting the future scene but rather rely on an implicit representation or Q-function (in the case of model-free reinforcement learning) in order to choose an action [28, 34, 37].

Real world future scenarios are difficult to model because of the stochasticity and the partial observability of the world. Our work addresses this by encoding the future state into a low-dimensional future distribution. We then allow the model to have a privileged view of the future through the future distribution at training time. As we cannot use the future at test time, we train a present distribution (using only the current state) to match the future distribution through a Kullback-Leibler (KL) divergence loss. We can then sample from the present distribution during inference, when we do not have access to the future. We observe that this paradigm allows the model to learn accurate and diverse probabilistic future prediction outputs.

In order to predict the future we need to first encode video into a motion representation. Unlike advances in 2D convolutional architectures [27, 62], learning spatio-temporal features is more challenging due to the higher dimensionality of video data and the complexity of modelling dynamics. State-of-the-art architectures [63, 66] decompose 3D filters into spatial and temporal convolutions in order to learn more efficiently. The model we propose further breaks down convolutions into many space-time combinations and context aggregation modules, stacking them together in a more complex hierarchical representation. We show that the learnt representation is able to jointly predict ego-motion and motion of other dynamic agents. By explicitly modelling these dynamics we can capture the essential features for representing causal effects for driving. Ultimately we use this motion-aware and future-aware representation to improve an autonomous vehicle control policy.

Our main contributions are threefold. Firstly, we present a novel deep learning framework for future video prediction. Secondly, we demonstrate that our probabilistic model is able to generate visually diverse and plausible futures. Thirdly, we show our future prediction representation substantially improves a learned autonomous driving policy.

2 Related Work

This work falls in the intersection of learning scene representation from video, probabilistic modelling of the ambiguity inherent in real-world driving data, and using the learnt representation for control.

Temporal Representations. Current state-of-the-art temporal representations from video use recurrent neural networks [55, 56], separable 3D convolutions [26, 30, 61, 63, 65], or 3D Inception modules [7, 66]. In particular, the separable 3D Inception (S3D) architecture [66], which improves on the Inception 3D module (I3D) introduced by Carreira et al.  [7], shows the best trade-off between model complexity and speed, both at training and inference time. Adding optical flow as a complementary input modality has been consistently shown to improve performance [5, 19, 57, 58], in particular using flow for representation warping to align features over time [22, 68]. We propose a new spatio-temporal architecture that can learn hierarchically more complex features with a novel 3D convolutional structure incorporating both local and global space and time context.

Visual Prediction. Most works for learning dynamics from video fall under the framework of model-based reinforcement learning [17, 21, 33, 43] or unsupervised feature learning [15, 59], regressing either directly in pixel space [32, 46, 51] or in a learned feature space [20, 31]. For the purpose of creating good representations for driving scenes, directly predicting in the high-dimensional space of image pixels is unnecessary, as some details about the appearance of the world are irrelevant for planning and control. Our approach is similar to that of Luc et al. [45], which trains a model to predict future semantic segmentation using pseudo-ground truth labels generated from a teacher model. However, our model predicts a more complete scene representation with segmentation, depth, and flow, and is probabilistic in order to model the uncertainty of the future.

Multi-modality of Future Prediction. Modelling uncertainty is important given the stochastic nature of real-world data [35]. Lee et al.  [41], Bhattacharyya et al.  [4] and Rhinehart et al.  [52] forecast the behaviour of other dynamic agents in the scene in a probabilistic multi-modal way. We distinguish ourselves from this line of work as their approach does not consider the task of video forecasting, but rather trajectory forecasting, and they do not study how useful the representations learnt are for robot control. Kurutach et al.  [39] propose generating multi-modal futures with adversarial training, however spatio-temporal discriminator networks are known to suffer from mode collapse [23].

Our variational approach is similar to Kohl et al. [38], although their application domain does not involve modelling dynamics. Furthermore, while Kohl et al. [38] use multi-modal training data, i.e. multiple output labels are provided for a given input, we learn directly from real-world driving data, where we can only observe one future reality, and show that we generate diverse and plausible futures. Most importantly, previous variational video generation methods [16, 40] were restricted to single-frame image generation and to low-resolution (\(64\times 64\)) datasets that are either simulated (Moving MNIST [59]) or have static scenes and limited dynamics (KTH actions [54], Robot Pushing dataset [18]). Our new framework for future prediction generates entire video sequences on complex real-world urban driving data with ego-motion and complex interactions.

Learning a Control Policy. The representation learned from dynamics models could be used to generate imagined experience to train a policy in a model-based reinforcement learning setting [24, 25] or to run shooting methods for planning [11]. Instead we follow the approaches of Bojarski et al.  [6], Codevilla et al.  [13] and Amini et al.  [1] and learn a policy which predicts longitudinal and lateral control of an autonomous vehicle using Conditional Imitation Learning, as this approach has been shown to be immediately transferable to the real world.

Fig. 1.

Our architecture has five modules: Perception, Dynamics, Present/Future Distributions, Future Prediction and Control. The Perception module learns scene representation features, \(x_t\), from input images. The Dynamics model builds on these scene features to produce a spatio-temporal representation, \(z_t\), with our proposed Temporal Block module, \(\mathcal {T}\). Together with a noise vector, \(\eta _t\), sampled from a future distribution, \(\mathcal {F}\), at training time, or the present distribution, \(\mathcal {P}\), at inference time, this representation predicts future video scene representation (segmentation, depth and optical flow) with a convolutional recurrent model, \(\mathcal {G}\), and decoders, \(\mathcal {D}\). Lastly, we learn a Control policy, \(\mathcal {C}\), from the spatio-temporal representation, \(z_t\).

3 Model Architecture

Our model learns a spatio-temporal feature to jointly predict future scene representation (semantic segmentation, depth, optical flow) and train a driving policy. The architecture contains five components: Perception, an image scene understanding model, Dynamics, which learns a spatio-temporal representation, Present/Future Distributions, our probabilistic framework, Future Prediction, which predicts future video scene representation, and Control, which trains a driving policy using expert driving demonstrations. Figure 1 gives an overview of the model and further details are described in this section and Appendix A.

3.1 Perception

The perception component of our system contains two modules: the encoder of a scene understanding model that was trained on single image frames to reconstruct semantic segmentation and depth [36], and the encoder of a flow network [60], trained to predict optical flow. The combined perception features \(x_t \in \mathbb {R}^{C\times H \times W}\) form the input to the dynamics model. These models can also be used as a teacher to distill the information from the future, giving pseudo-ground truth labels for segmentation, depth and flow \(\{s_t, d_t, f_t\}\). See Subsect. 4.1 for more details on the teacher model.

3.2 Dynamics

Learning a temporal representation from video is extremely challenging because of the high dimensionality of the data, the stochasticity and complexity of natural scenes, and the partial observability of the environment. To train 3D convolutional filters from a sequence of raw RGB images, a large amount of data, memory and compute is required. We instead learn spatio-temporal features with a temporal model that operates on perception encodings, which constitute a more powerful and compact representation compared to RGB images.

The dynamics model \(\mathcal {Y}\) takes a history of perception features \((x_{t-T+1}:x_t)\) with temporal context T and encodes it into a dynamics feature \(z_t\):

$$\begin{aligned} z_t = \mathcal {Y}(x_{t-T+1}:x_t) \end{aligned}$$
(1)

Temporal Block. We propose a spatio-temporal module, named Temporal Block, to learn hierarchically more complex temporal features as follows:

  • Decomposing the filters: instead of systematically using full 3D filters \((k_t, k_s, k_s)\), with \(k_t\) the time kernel dimension and \(k_s\) the spatial kernel dimension, we apply four parallel 3D convolutions with kernel sizes: \((1, k_s, k_s)\) (spatial features), \((k_t, 1, k_s)\) (horizontal motion), \((k_t, k_s, 1)\) (vertical motion), and \((k_t, k_s, k_s)\) (complete motion). All convolutions are preceded by a (1, 1, 1) convolution to compress the channel dimension.

  • Global spatio-temporal context: in order to learn contextual features, we additionally use three spatio-temporal average pooling layers at: full spatial size \((k_t, H, W)\) (H and W are respectively the height and width of the perception features \(x_t\)), half size \((k_t, \frac{H}{2}, \frac{W}{2})\) and quarter size \((k_t, \frac{H}{4}, \frac{W}{4})\), followed by bilinear upsampling to the original spatial dimension (HW) and a (1, 1, 1) convolution.

Figure 2 illustrates the architecture of the Temporal Block; a minimal implementation sketch follows the figure caption. By stacking multiple temporal blocks, the network learns a representation that incorporates increasingly more temporal, spatial and global context. We also increase the number of channels by a constant \(\alpha \) after each temporal block, since each block must represent the content of the \(k_t\) previous features.

Fig. 2.

A Temporal Block, our proposed spatio-temporal module. From a four-dimensional input \(z_{in} \in \mathbb {R}^{C\times T \times H \times W}\), our module learns both local and global spatio-temporal features. The local head learns all possible configurations of 3D convolutions with filters: \((1, k_s, k_s)\) (spatial features), \((k_t, 1, k_s)\) (horizontal motion), \((k_t, k_s, 1)\) (vertical motion), and \((k_t, k_s, k_s)\) (complete motion). The global head learns global spatio-temporal features with a 3D average pooling at full, half and quarter size, followed by a (1, 1, 1) convolution and upsampling to the original spatial dimension \(H\times W\). The local and global features are then concatenated and combined in a final (1, 1, 1) 3D convolution.
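As an illustration, the following is a minimal PyTorch-style sketch of a Temporal Block. The channel widths, the odd kernel sizes with symmetric padding, the ReLU placement and the use of adaptive average pooling are our own simplifying assumptions, not the exact implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalBlock(nn.Module):
    # Minimal sketch of the Temporal Block of Fig. 2. Channel widths, odd
    # kernel sizes and the adaptive pooling are illustrative assumptions.
    def __init__(self, in_channels, out_channels, kt=3, ks=3):
        super().__init__()
        mid = max(in_channels // 2, 1)  # (1, 1, 1) convolution compresses channels

        def branch(kernel):
            padding = tuple(k // 2 for k in kernel)
            return nn.Sequential(
                nn.Conv3d(in_channels, mid, kernel_size=1),
                nn.Conv3d(mid, mid, kernel_size=kernel, padding=padding),
                nn.ReLU(inplace=True))

        # Local head: every space-time decomposition of the full 3D filter.
        self.local = nn.ModuleList([
            branch((1, ks, ks)),   # spatial features
            branch((kt, 1, ks)),   # horizontal motion
            branch((kt, ks, 1)),   # vertical motion
            branch((kt, ks, ks)),  # complete motion
        ])
        # Global head: average pooling at full, half and quarter spatial
        # size, each followed by a (1, 1, 1) convolution.
        self.global_convs = nn.ModuleList(
            [nn.Conv3d(in_channels, mid, kernel_size=1) for _ in range(3)])
        self.fuse = nn.Conv3d(7 * mid, out_channels, kernel_size=1)

    def forward(self, z):                              # z: (B, C, T, H, W)
        _, _, t, h, w = z.shape
        features = [b(z) for b in self.local]
        for size, conv in zip([1, 2, 4], self.global_convs):
            pooled = conv(F.adaptive_avg_pool3d(z, (t, size, size)))
            features.append(F.interpolate(
                pooled, size=(t, h, w), mode='trilinear', align_corners=False))
        return self.fuse(torch.cat(features, dim=1))

Stacking several such blocks, with the channel width grown by \(\alpha \) after each one, yields the dynamics feature \(z_t\) of Eq. (1).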

3.3 Future Prediction

We train a future prediction model that unrolls the dynamics feature, which is a compact scene representation of the past context, into predictions about the state of the world in the future. The future prediction model is a convolutional recurrent network \(\mathcal {G}\) which creates future features \(g_t^{t+i}\) that become the inputs of individual decoders \(\mathcal {D}_s, \mathcal {D}_d, \mathcal {D}_f\), which decode these features to predicted segmentation \(\hat{s}_t^{t+i}\), depth \(\hat{d}_t^{t+i}\), and flow \(\hat{f}_t^{t+i}\) values in pixel space. We introduce a second time superscript, i.e. \(g_t^{t+i}\) represents the prediction about the world at time \(t+i\) given the dynamics features at time t. Also note that \(g_t^{t} \triangleq z_t\).

The structure of the convolutional recurrent network \(\mathcal {G}\) is the following: a convolutional GRU [2] followed by three spatial residual layers, repeated D times, similarly to Clark et al.  [12]. For deterministic inference, its input is \(u_t^{t+i} = \mathbf {0}\), and its initial hidden state is \(z_t\), the dynamics feature. The future prediction component of our network computes the following, for \(i \in \{1,.., N_f\}\), with \(N_f\) the number of predicted future frames:

$$\begin{aligned} g_t^{t+i}&= \mathcal {G}(u_t^{t+i}, g_t^{t+i-1}) \end{aligned}$$
(2)
$$\begin{aligned} \hat{s}_t^{t+i}&= \mathcal {D}_s(g_t^{t+i}) \end{aligned}$$
(3)
$$\begin{aligned} \hat{d}_t^{t+i}&= \mathcal {D}_d(g_t^{t+i}) \end{aligned}$$
(4)
$$\begin{aligned} \hat{f}_t^{t+i}&= \mathcal {D}_f(g_t^{t+i}) \end{aligned}$$
(5)
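The rollout of Eqs. (2)-(5) can be written as a simple loop. In this sketch, core stands for the convolutional recurrent network \(\mathcal {G}\) and decoders for \(\mathcal {D}_s, \mathcal {D}_d, \mathcal {D}_f\); both are passed in as placeholder callables rather than defined here:

import torch

def predict_future(core, decoders, z_t, u, n_future):
    # Autoregressive rollout of Eqs. (2)-(5). `core` plays the role of the
    # convolutional recurrent network G, initialised with the dynamics
    # feature z_t (g_t^t := z_t); `u` is the conditioning input: zeros for
    # deterministic inference, or the spatially broadcast sample eta_t.
    g = z_t
    outputs = {name: [] for name in decoders}          # 'segmentation', 'depth', 'flow'
    for _ in range(n_future):
        g = core(u, g)                                 # Eq. (2)
        for name, decoder in decoders.items():
            outputs[name].append(decoder(g))           # Eqs. (3)-(5)
    return {name: torch.stack(frames, dim=1) for name, frames in outputs.items()}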

3.4 Present and Future Distributions

From a unique past in the real-world, many futures are possible, but in reality we only observe one future. Consequently, modelling multi-modal futures from deterministic video training data is extremely challenging. We adopt a conditional variational approach and model two probability distributions: a present distribution P, that represents what could happen given the past context, and a future distribution F, that represents what actually happened in that particular observation. This allows us to learn a multi-modal distribution from the input data while conditioning the model to learn from the specific observed future from within this distribution.

The present and the future distributions are diagonal Gaussians, and can therefore be fully characterised by their mean and standard deviation. We parameterise both distributions with a neural network, respectively \(\mathcal {P}\) and \(\mathcal {F}\).

Present Distribution. The input of the network \(\mathcal {P}\) is \(z_t \in \mathbb {R}^{C_d \times H \times W}\), which represents the past context of the last T frames (T is the time receptive field of our dynamics module). The present network contains two downsampling convolutional layers, an average pooling layer and a fully connected layer to map the features to the desired latent dimension L. The output of the network is the parametrisation of the present distribution: \((\mu _{t,\text {present}}, \sigma _{t,\text {present}}) \in \mathbb {R}^L \times \mathbb {R}^L\).

Future Distribution. \(\mathcal {F}\) is not only conditioned on the past \(z_t\), but also on the future corresponding to the training sequence. Since we are predicting \(N_f\) steps in the future, the input of \(\mathcal {F}\) has to contain information about future frames \((t+1, ..., t+N_f)\). This is achieved using the learned dynamics features \(\{z_{t + j}\}_{j \in J}\), with J the set of indices such that \(\{z_{t + j}\}_{j \in J}\) covers all future frames \((t+1, ..., t+N_f)\), as well as \(z_t\). Formally, if we want to cover \(N_f\) frames with features that have a receptive field of T, then:

$$J = \{nT ~ |~ 0 \le n \le \lfloor N_f / T \rfloor \} \cup \{N_f\}.$$

For example, with \(N_f = 10\) future frames and a receptive field \(T = 5\), \(J = \{0, 5, 10\}\), i.e. the features \(z_t\), \(z_{t+5}\) and \(z_{t+10}\).

The architecture of the future network is similar to the present network: for each input dynamics feature \(z_{t+j} \in \mathbb {R}^{C_d \times H \times W}\), with \(j \in J\), we apply two downsampling convolutional layers and an average pooling layer. The resulting features are concatenated, and a fully-connected layer outputs the parametrisation of the future distribution:

$$(\mu _{t,\text {future}}, \sigma _{t,\text {future}}) \in \mathbb {R}^L \times \mathbb {R}^L.$$
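A sketch of how both distribution networks and the index set J might be implemented is given below. The layer widths, the log-sigma parameterisation and the exact pooling are assumptions; the same encoder class serves as \(\mathcal {P}\) (one input feature) and \(\mathcal {F}\) (one input feature per element of J):

import torch
import torch.nn as nn

def future_indices(n_future, receptive_field):
    # J = {nT | 0 <= n <= floor(N_f / T)} union {N_f}
    return sorted({n * receptive_field
                   for n in range(n_future // receptive_field + 1)} | {n_future})

class DistributionEncoder(nn.Module):
    # Sketch of the present/future networks P and F: two downsampling
    # convolutions, average pooling, and a fully connected layer outputting
    # (mu, log_sigma) of an L-dimensional diagonal Gaussian. Widths assumed.
    def __init__(self, in_channels, latent_dim, n_inputs=1):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, in_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(n_inputs * in_channels, 2 * latent_dim)

    def forward(self, dynamics_features):   # list of z_{t+j}, each (B, C_d, H, W)
        pooled = [self.encode(z).flatten(1) for z in dynamics_features]
        mu, log_sigma = self.fc(torch.cat(pooled, dim=1)).chunk(2, dim=1)
        return mu, log_sigma

For instance, the present network would be DistributionEncoder(C_d, L) applied to [z_t], while the future network would use n_inputs = len(J) and the features [z_{t+j} for j in future_indices(N_f, T)].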

Probabilistic Future Prediction. During training, we sample from the future distribution a vector \(\eta _t \sim \mathcal {N}(\mu _{t,\text {future}}, \sigma _{t,\text {future}}^2)\) that conditions the predicted future perception outputs (semantic segmentation, depth, optical flow) on the observed future. As we want our prediction to be consistent in both space and time, we broadcast spatially \(\eta _t \in \mathbb {R}^L\) to \(\mathbb {R}^{L\times H \times W}\), and use the same sample throughout the future generation as an input to the GRU to condition the future: for \(i \in \{1,..,N_f\}\), input \(u_t^{t+i} = \eta _t\).

We encourage the present distribution P to match the future distribution F with a mode-covering KL loss:

$$\begin{aligned} L_{\text {probabilistic}} = D_\text {KL}(F(\cdot | z_t,..., z_{t+N_f}) ~ || ~P(\cdot | z_t)) \end{aligned}$$
(6)

As the future is multimodal, different futures might arise from a unique past context \(z_t\). Each of these futures will be captured by the future distribution F, which will pull the present distribution P towards it. Since our training data is extremely diverse, it naturally contains multimodalities. Even though the past context (the sequence of images \((i_1, ..., i_t)\)) from two different training sequences is never identical, the dynamics network learns a more abstract spatio-temporal representation that ignores irrelevant details of the scene (such as vehicle colour, weather or road material) and maps similar past contexts to a similar \(z_t\). In this process, the present distribution learns to cover all the possible modes contained in the future.

During inference, we sample a vector \(\eta _t\) from the present distribution \(\eta _t \sim \mathcal {N}(\mu _{t,\text {present}}, \sigma _{t,\text {present}}^2)\), where each sample corresponds to a different future.
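A sketch of the sampling and of the KL loss of Eq. (6), assuming the log-sigma parameterisation of the previous sketch; the reparameterised sample and the reduction over the batch are our own choices:

import torch
import torch.distributions as D

def sample_and_kl(mu_present, log_sigma_present, mu_future, log_sigma_future,
                  height, width, training=True):
    # Build the diagonal Gaussians P and F, draw eta_t from F at training
    # time (from P at inference), broadcast it spatially to (B, L, H, W),
    # and compute the mode-covering KL(F || P) of Eq. (6).
    present = D.Normal(mu_present, log_sigma_present.exp())
    future = D.Normal(mu_future, log_sigma_future.exp())
    eta = (future if training else present).rsample()                # (B, L)
    eta = eta[:, :, None, None].expand(-1, -1, height, width)
    kl = D.kl_divergence(future, present).sum(dim=-1).mean()         # L_probabilistic
    return eta, kl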

3.5 Control

From this rich spatio-temporal representation \(z_t\) explicitly trained to predict the future, we train a control model \(\mathcal {C}\) to output a four dimensional vector consisting of estimated speed \(\hat{v}\), acceleration \(\hat{\dot{v}}\), steering angle \(\hat{\theta }\) and angular velocity \(\hat{\dot{\theta }}\):

$$\begin{aligned} \hat{c}_t = \{\hat{v}_t, \hat{\dot{v}}_t, \hat{\theta }_t, \hat{\dot{\theta }}_t\} = \mathcal {C}(z_t) \end{aligned}$$
(7)

\(\mathcal {C}\) compresses \(z_t \in \mathbb {R}^{C_d \times H \times W}\) with strided convolutional layers, then stacks several fully connected layers, compressing at each stage, to regress the four dimensional output.
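A minimal sketch of \(\mathcal {C}\) follows; the number of layers and their widths are not specified in the text and are chosen here purely for illustration:

import torch.nn as nn

class ControlPolicy(nn.Module):
    # Sketch of the control module C (Eq. (7)): strided convolutions compress
    # z_t, then fully connected layers regress (v, v_dot, theta, theta_dot).
    def __init__(self, in_channels):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(inplace=True),
            nn.Linear(32, 4))                 # v, v_dot, theta, theta_dot

    def forward(self, z_t):
        return self.head(self.compress(z_t))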

3.6 Losses

Future Prediction. The future prediction loss at timestep t is the weighted sum of the future segmentation, depth and optical flow losses. Let the segmentation loss at the future timestep \(t+i\) be \(L_s^{t+i}\). We use a top-k cross-entropy loss [64] between the network output \(\hat{s}_t^{t+i}\) and the pseudo-ground truth label \(s_{t+i}\). \(L_s\) is computed by summing these individual terms over the future horizon \(N_f\) with a discount factor \(0< \gamma _f < 1\):

$$\begin{aligned} L_s = \sum _{i=0}^{N_f-1} \gamma _f^i L_s^{t+i} \end{aligned}$$
(8)

For depth, \(L_d^{t+i}\) is the scale-invariant depth loss [44] between \(\hat{d}_t^{t+i}\) and \(d_{t+i}\), and \(L_d\) is the corresponding discounted sum. For flow, we use a Huber loss between \(\hat{f}_t^{t+i}\) and \(f_{t+i}\), discounted in the same way to give \(L_f\). We weight the summed losses by factors \(\lambda _s, \lambda _d, \lambda _f\) to obtain the future prediction loss \(L_{\text {future-pred}}\):

$$\begin{aligned} L_{\text {future-pred}} = \lambda _s L_{s} + \lambda _d L_{d} + \lambda _f L_{f} \end{aligned}$$
(9)
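Assuming the per-frame loss terms have already been computed, the discounted sums of Eq. (8) and the weighted combination of Eq. (9) reduce to a few lines (default hyper-parameter values taken from Sect. 3.6):

def discounted_sum(per_frame_losses, gamma_f=0.6):
    # Eq. (8): per_frame_losses[i] is the loss for future frame t+i.
    return sum(gamma_f ** i * loss for i, loss in enumerate(per_frame_losses))

def future_prediction_loss(seg_losses, depth_losses, flow_losses,
                           lambda_s=1.0, lambda_d=1.0, lambda_f=0.5, gamma_f=0.6):
    # Eq. (9): weighted combination of the discounted segmentation, depth
    # and flow losses.
    return (lambda_s * discounted_sum(seg_losses, gamma_f)
            + lambda_d * discounted_sum(depth_losses, gamma_f)
            + lambda_f * discounted_sum(flow_losses, gamma_f))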

Control. We use imitation learning, regressing to the expert’s true control actions \(\{v, \theta \}\) to generate a control loss \(L_c\). For both speed and steering, we have access to the expert actions.

We compare the expert's speed and steering at future time-steps, up to \(N_c\) frames ahead, with a linear extrapolation of the policy's predicted speed and steering:

$$\begin{aligned} L_c = \sum _{i=0}^{N_c-1} \gamma _c^i&\left( \left( v_{t+i} - \left( \hat{v}_t + i \hat{\dot{v}}_t\right) \right) ^2 + \right. \nonumber \\&\left. \left( \theta _{t+i} - \left( \hat{\theta }_t + i\hat{\dot{\theta }}_t\right) \right) ^2 \right) \end{aligned}$$
(10)

where \(0< \gamma _c < 1\) is the control discount factor, which penalises speed and steering errors less the further they are into the future.
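Eq. (10) linearly extrapolates the single-step control prediction; a direct transcription (with \(\gamma _c = 0.7\) as in Sect. 3.6):

def control_loss(v_expert, theta_expert, v_hat, v_dot_hat,
                 theta_hat, theta_dot_hat, gamma_c=0.7):
    # Eq. (10): v_expert and theta_expert are the expert speed and steering
    # over the next N_c frames; the policy output at time t is extrapolated
    # linearly as v_hat + i * v_dot_hat and theta_hat + i * theta_dot_hat.
    loss = 0.0
    for i, (v_i, theta_i) in enumerate(zip(v_expert, theta_expert)):
        loss += gamma_c ** i * ((v_i - (v_hat + i * v_dot_hat)) ** 2
                                + (theta_i - (theta_hat + i * theta_dot_hat)) ** 2)
    return loss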

Total Loss. The final loss L can be decomposed into the future prediction loss (\(L_{\text {future-pred}}\)), the probabilistic loss (\(L_{\text {probabilistic}}\)), and the control loss (\(L_c\)).

$$\begin{aligned} L = \lambda _{fp}{L_\text {future-pred}} + \lambda _c L_c + \lambda _p L_{\text {probabilistic}} \end{aligned}$$
(11)

In all experiments we use \(\gamma _f=0.6\), \(\lambda _s = 1.0\), \(\lambda _d =1.0\), \(\lambda _f = 0.5\), \(\lambda _{fp}=1\), \(\lambda _p=0.005\), \(\gamma _c=0.7\), \(\lambda _c=1.0\).

4 Experiments

We have collected driving data in a densely populated, urban environment, representative of most European cities, using multiple drivers over a span of six months. For the purpose of this work, only the front-facing camera images \(i_t\) and the measurements of speed and steering \(c_t\) have been used to train our model, all sampled at 5 Hz.

4.1 Training Data

Perception. We first pretrain the scene understanding encoder on a number of heterogeneous datasets to predict semantic segmentation and depth: CityScapes [14], Mapillary Vistas [48], ApolloScape [29] and Berkeley Deep Drive [67]. The optical flow network is a pretrained PWC-Net from [60]. The decoders of these networks are used for generating pseudo-ground truth segmentation and depth labels to train our dynamics and future prediction modules.

Dynamics and Control. The dynamics and control modules are trained using 30 hours of driving data from the urban driving dataset we collected and described above. We address the inherent dataset bias by sampling data uniformly across lateral and longitudinal dimensions. First, the data is split into a histogram of bins by steering, and subsequently by speed. We found that weighting each data point proportionally to the width of the bin it belongs to avoids the need for alternative approaches such as data augmentation.
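One way to realise this weighting, binning steering and speed independently with equal-frequency bins, is sketched below; the bin count and the independence of the two binnings are our own assumptions, not necessarily the procedure used for the dataset:

import numpy as np

def balancing_weights(steering, speed, n_bins=20):
    # Weight each sample proportionally to the widths of the steering and
    # speed bins it falls into, so that rare manoeuvres (sharp turns, high
    # speeds) are sampled as often as common ones.
    def width_per_sample(values):
        values = np.asarray(values, dtype=float)
        edges = np.unique(np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1)))
        if len(edges) < 2:
            return np.ones_like(values)
        widths = np.maximum(np.diff(edges), 1e-6)
        idx = np.clip(np.digitize(values, edges) - 1, 0, len(widths) - 1)
        return widths[idx]
    weights = width_per_sample(steering) * width_per_sample(speed)
    return weights / weights.sum()          # sampling probabilities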

4.2 Metrics

We report standard metrics for measuring the quality of segmentation, depth and flow: respectively intersection-over-union, scale-invariant logarithmic error, and average end-point error. For ease of comparison, in addition to the individual metrics, we report a unified perception metric \(\mathcal {M}_{\text {perception}}\), defined as the mean improvement of the segmentation, depth and flow metrics with respect to the Repeat Frame baseline (which repeats the perception outputs of the current frame):

$$\begin{aligned} \mathcal {M}_{\text {perception}} = \frac{1}{3} (\text {seg}_{\%\text {increase}} + \text {depth}_{\%\text {decrease}} + \text {flow}_{\%\text {decrease}}) \end{aligned}$$
(12)
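Written out, Eq. (12) compares each model against the Repeat Frame baseline; a direct transcription (returning a fraction; multiply by 100 for the percentages reported in the tables):

def unified_perception_metric(seg_iou, depth_error, flow_error,
                              seg_iou_baseline, depth_error_baseline,
                              flow_error_baseline):
    # Eq. (12): mean relative improvement over the Repeat Frame baseline.
    # Segmentation improves when IoU increases; depth and flow improve when
    # their errors decrease.
    seg_increase = (seg_iou - seg_iou_baseline) / seg_iou_baseline
    depth_decrease = (depth_error_baseline - depth_error) / depth_error_baseline
    flow_decrease = (flow_error_baseline - flow_error) / flow_error_baseline
    return (seg_increase + depth_decrease + flow_decrease) / 3.0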

Inspired by the energy functions used in [3, 53], we additionally report a diversity distance metric \(\text {(DDM)}\) between the ground truth future Y and samples from the predicted present distribution P:

$$\begin{aligned} \text {DDM}(Y, P)=\min _{S} \big [ d(Y,S) \big ] - \mathbb {E} \big [ d(S,S') \big ] \end{aligned}$$
(13)

where d is an error metric and S, \(S'\), are independent samples from the present distribution P. This metric measures performance both in terms of accuracy, by looking at the minimum error of the samples, as well as the diversity of the predictions by taking the expectation of the distance between N samples. The distance d is the scale-invariant logarithmic error for depth, the average end-point error for flow, and for segmentation \( d(x, y) = 1 - \text {IoU}(x, y)\).
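Given a set of sampled futures and a per-modality distance d, Eq. (13) can be computed as below; using all ordered pairs of samples for the diversity term is our own choice:

import numpy as np

def diversity_distance_metric(ground_truth, samples, distance):
    # Eq. (13): accuracy term (closest sample to the ground truth future)
    # minus diversity term (expected pairwise distance between samples).
    accuracy = min(distance(ground_truth, s) for s in samples)
    pairwise = [distance(s_a, s_b)
                for i, s_a in enumerate(samples)
                for j, s_b in enumerate(samples) if i != j]
    return accuracy - float(np.mean(pairwise))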

To measure control performance, we report mean absolute error of speed and steering outputs, balanced by steering histogram bins.

5 Results

We first compare our proposed spatio-temporal module to previous state-of-the-art architectures and show that our module achieves the best performance on future prediction metrics. Then we demonstrate that modelling the future in a probabilistic manner further improves performance. And finally, we show that our probabilistic future prediction representation substantially improves a learned driving policy. All the reported results are evaluated on test routes with no overlap with the training data.

5.1 Spatio-Temporal Representation

We analyse the quality of the spatio-temporal representation our temporal model learns by evaluating future prediction of semantic segmentation, depth, and optical flow, two seconds in the future. Several architectures have been created to learn features from video, with the most successful modules being: the Convolutional GRU [2], the 3D Residual Convolution [26] and the Separable 3D Inception block [66].

We also compare our model to two baselines: Repeat frame (repeating the perception outputs of the current frame at time t for each future frame \(t+i\), with \(i=1,...,N_f\)), and Static (without a temporal model). As shown in the deterministic section of Table 1, every temporal model architecture improves over the Repeat frame baseline, in contrast to the model without any temporal context (Static), which performs notably worse. This is because it is too difficult to forecast how the future will evolve from a single image.

Table 1. Perception performance metrics for two seconds of future prediction on the collected urban driving data. We measure semantic segmentation with mean IoU, depth with scale-invariant logarithmic error, and flow with average end-point error. \(\mathcal {M}_{\text {perception}}\) shows overall performance; we observe our model outperforms all baselines.

Further, we observe that our proposed Temporal Block module outperforms all pre-existing spatio-temporal architectures on all three future perception metrics: semantic segmentation, depth and flow. There are two reasons for this. The first is that learning 3D filters is hard; as demonstrated by the Separable 3D convolution [66] (i.e. the succession of a \((1, k_s, k_s)\) spatial filter and a \((k_t, 1, 1)\) time filter), decomposing into two subtasks helps the network learn more efficiently. In the same spirit, we decompose the spatio-temporal convolutions into all combinations of space-time convolutions: \((1, k_s, k_s)\), \((k_t, 1, k_s)\), \((k_t, k_s, 1)\), \((k_t, k_s, k_s)\), and by stacking these temporal blocks together, the network can learn a hierarchically more complex representation of the scene. The second reason is that we incorporate global context in our features. By pooling the features spatially and temporally at different scales, each individual feature map also has information about the global scene context, which helps in ambiguous situations. Appendix A.3 contains an ablation study of the different components of the Temporal Block.

5.2 Probabilistic Future

Since the future is inherently uncertain, the deterministic model receives a chaotic learning signal: its predictions are penalised against the ground truth future, which represents only one of all the possible outcomes. Therefore, if the network predicts a plausible future, but one that did not match the given training sequence, it will be heavily penalised. On the other hand, the probabilistic model has a very clean learning signal, as the future distribution conditions the network to generate the correct future. The present distribution is encouraged to match the future distribution during training, and therefore has to capture all the modes of the future.

During inference, each sample \(\eta _t \sim \mathcal {N}(\mu _{t,\text {present}}, \sigma _{t,\text {present}}^2)\) from the present distribution should give a different outcome, with \(p(\eta _t | \mu _{t,\text {present}}, \sigma _{t,\text {present}}^2)\) indicating the relative likelihood of a given scenario. Our probabilistic model should be accurate, that is to say at least one of the generated futures should match the ground truth future. It should also be diverse: the generated samples should capture the diversity of the possible futures with the correct probability. Next, we analyse quantitatively and qualitatively whether our model generates diverse and accurate futures.

Table 2. Diversity Distance Metric for various temporal models evaluated on the urban driving data, demonstrating that our model produces the most accurate and diverse distribution.

Table 1 shows that every temporal architecture performs better when trained in a probabilistic way, with our model benefiting the most (from 13.6% to 20.0%) in future prediction metrics. Table 2 shows that our model also outperforms other temporal representations on the diversity distance metric (DDM) described in Subsect. 4.2, which measures both the accuracy and the diversity of the distribution.

Perhaps the most striking result is that our model can predict diverse and plausible futures from a single sequence of past frames sampled at 5 Hz, corresponding to one second of past context and two seconds of future prediction. In Fig. 3 and Fig. 4 we show qualitative examples of our video scene understanding future prediction in real-world urban driving scenes. We sample from the present distribution, \(\eta _{t, j} \sim \mathcal {N}(\mu _{t,\text {present}}, \sigma _{t,\text {present}}^2)\), to demonstrate multi-modality.

Fig. 3.

Predicted futures from our model while driving through an urban intersection. From left, we show the actual past and future video sequence and labelled semantic segmentation. Using four different noise vectors, \(\eta \), we observe the model imagining different driving manoeuvres at an intersection: being stationary, driving straight, taking a left or a right turn. We show both predicted semantic segmentation and entropy (uncertainty) for each future. This example demonstrates that our model is able to learn a probabilistic embedding, capable of predicting multi-modal and plausible futures.

Fig. 4.

Predicted futures from our model while driving through a busy urban scene. From left, we show actual past and future video sequence and labelled semantic segmentation, depth and optical flow. Using two different noise vectors, \(\eta \), we observe the model imagining either stopping in traffic or continuing in motion. This illustrates our model’s efficacy at jointly predicting holistic future behaviour of our own vehicle and other dynamic agents in the scene across all modalities.

Further, our framework can automatically infer which scenes are unusual or unexpected and where the model is uncertain of the future, by computing the differential entropy of the present distribution. Simple scenes (e.g. one-way streets) will tend to have a low entropy, corresponding to an almost deterministic future: any latent code sampled from the present distribution will correspond to the same future. Conversely, complex scenes (e.g. intersections, roundabouts) will be associated with a high entropy: different samples from the present distribution will correspond to different futures, effectively modelling the stochasticity of the future.
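Since the present distribution is a diagonal Gaussian, its differential entropy has a closed form; a sketch assuming the log-sigma parameterisation used in the earlier sketches:

import math

def present_entropy(log_sigma):
    # Differential entropy of the L-dimensional diagonal Gaussian present
    # distribution: H = (L / 2) * log(2 * pi * e) + sum_l log(sigma_l).
    # High entropy flags scenes where many futures are plausible.
    latent_dim = len(log_sigma)
    return 0.5 * latent_dim * math.log(2 * math.pi * math.e) + sum(log_sigma)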

Finally, to allow reproducibility, we evaluate our future prediction framework on Cityscapes [14] and report future semantic segmentation performance in Table 3. We compare our predictions, at resolution \(256\times 512\), to the ground truth segmentation at 5 and 10 frames in the future. Qualitative examples on Cityscapes can be found in Appendix C.

Table 3. Future semantic segmentation performance on Cityscapes at \(i=5\) and \(i=10\) frames in the future (corresponding to respectively 0.29 s and 0.59 s).

5.3 Driving Policy

We study the influence of the learned temporal representation on driving performance. Our baseline is the control policy learned from a single frame.

First, we compare this baseline with a model that was trained to directly optimise control, without being supervised with future scene prediction. It shows only a slight improvement over the static baseline, hinting that it is difficult to learn an effective temporal representation using only the control error as a learning signal.

Table 4. Evaluation of the driving policy. The policy is learned from temporal features explicitly trained to predict the future. We observe a significant performance improvement over non-temporal and non-future-aware baselines.

All deterministic models trained with the future prediction loss outperform the baseline, and, more interestingly, the temporal representation's ability to better predict the future (shown by \(\mathcal {M}_{\text {perception}}\)) directly translates into a control performance gain: our best deterministic model achieves, respectively, a 27% and 38% improvement over the baseline for steering and speed.

Finally, all probabilistic models perform better than their deterministic counterparts, further demonstrating that modelling the uncertainty of the future produces a more effective spatio-temporal representation. Our probabilistic model achieves the best performance, with a 33% steering and 46% speed improvement over the baseline.

6 Conclusions

This work is the first to propose a deep learning model capable of probabilistic future prediction of ego-motion, the static scene and other dynamic agents. We observe large performance improvements due to our proposed temporal video encoding architecture and probabilistic modelling of the present and future distributions. This initial work leaves many future directions to explore: leveraging known priors and structure in the latent representation, conditioning the control policy on future prediction, and applying our future prediction architecture to model-based reinforcement learning.