A Graph-based U-Net Model for Predicting Traffic in unseen Cities
Luca Hermes, Barbara Hammer, Andrew Melnik, Riza Velioglu, Markus Vieth, Malte Schilling
2022-02-11

Abstract: Accurate traffic prediction is a key ingredient to enable traffic management, like rerouting cars to reduce road congestion or regulating traffic via dynamic speed limits to maintain a steady flow. A way to represent traffic data is in the form of temporally changing heatmaps visualizing attributes of traffic, such as speed and volume. In recent works, U-Net models have shown SOTA performance on traffic forecasting from heatmaps. We propose to combine the U-Net architecture with graph layers, which improves spatial generalization to unseen road networks compared to a vanilla U-Net. In particular, we specialize existing graph operations to be sensitive to geographical topology and generalize pooling and upsampling operations to be applicable to graphs.

I. INTRODUCTION

In this work, we present our results from participating in the Traffic4Cast Challenge 2021. The competition aims to predict the development of traffic volume and speed up to one hour into the future. This task is challenging due to the stochastic nature of moving cars and complex spatio-temporal dependencies. This year's Traffic4Cast challenge was subdivided into two challenges with a focus on temporal and spatial transfer, respectively. The exact settings are described in detail in Sec. II.

Our approach to this problem is inspired by the well-known U-Net architecture [Ronneberger et al., 2015], as used frequently in the scope of the competition. Although U-Net models have been shown to perform well, transfer to unseen cities has been difficult for such convolution-based approaches. Martin et al. [2020] provided empirical evidence that graph-based models generalize better to unseen cities, as these allow leveraging prior knowledge about the street network. Thus, instead of relying on visual convolutions (CNN), we apply graph neural networks (GNN) to integrate local traffic information using a road graph. A drawback of GNN-based approaches is limited control over the receptive field. Expanding the receptive field to larger areas of the graph requires deeper models; however, Zhou et al. [2020] have shown that the performance of GNNs drops beyond a certain depth. We therefore adapted the visual pooling and upsampling operations in a way that they can account for long-range spatial relations between different areas of the road graph. To enable reproducibility, the code will be made publicly available at https://github.com/LucaHermes/graph-UNet-traffic-prediction.

II. CHALLENGE DETAILS AND DATA

The traffic data is given as a two-dimensional, heat-map-like image representation of size 495 × 436 pixels with eight channels. The channels of the pixel values correspond to directional traffic speed and volume information, binned into four discrete directions (north-east, south-east, south-west, and north-west). The data was sampled at a five-minute interval and was gathered from a fleet of probe vehicles. The measurements are mapped onto a pixel grid using GPS information to represent traffic movies, as shown in Fig. 1. Data was collected over two years (2019 and 2020) in ten different cities around the world.
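To make this frame layout concrete, the following sketch splits a single frame into per-direction volume and speed components and normalizes it to [0, 1]. The concrete channel ordering (direction-major, with one volume and one speed channel per direction) is an assumption made for illustration; only the set of four directions is stated above.

```python
import numpy as np

H, W = 495, 436
DIRECTIONS = ["NE", "SE", "SW", "NW"]  # channel order assumed for illustration

# Dummy frame standing in for one five-minute bin of a traffic movie.
frame = np.random.randint(0, 256, size=(H, W, 8), dtype=np.uint8)

# Interpret the 8 channels as 4 direction bins x (volume, speed) and normalize.
per_direction = frame.reshape(H, W, 4, 2).astype(np.float32) / 255.0
volume, speed = per_direction[..., 0], per_direction[..., 1]   # each (H, W, 4)

for d, name in enumerate(DIRECTIONS):
    print(f"{name}: mean volume {volume[..., d].mean():.3f}, "
          f"mean speed {speed[..., d].mean():.3f}")
```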
The traffic situation between 2019 and 2020 is subject to a temporal shift caused by the ongoing COVID-19 pandemic, which poses a particular challenge for the transfer of models. In addition to the traffic movies, a road graph was generated from high-resolution images of the respective street network. The nodes in the graph correspond to pixels that belong to a street, and an edge exists between two nodes if the corresponding pixels are adjacent and both belong to a street. Furthermore, static street map images with a resolution similar to the traffic movies are included for each city. These street maps are one-channel images, where the pixel intensities roughly correspond to street size.

This year's challenge is divided into a core challenge and an extended challenge. The core challenge puts a focus on temporal generalization regarding the domain shift caused by COVID, while the extended challenge puts a focus on spatial generalization to unseen cities. Participants were invited to compete in both independently. Four different subsets are derived from the data (s. Tab. I) and define the two challenges. Specifically, both challenges use the same training dataset (subsets C1 and C2), but different test sets. The evaluation of the core challenge focuses on generalization from pre-COVID training data of 2019 to the test set C3, recorded during COVID in 2020. The extended challenge focuses on generalization across cities. Model evaluation for this challenge uses the test set C4, which consists of data from two cities that were excluded from the training set. The task in both challenges is to predict the traffic 5, 10, 15, 30, 45, and 60 minutes into the future.

Our architecture is based on the classical U-Net model [Ronneberger et al., 2015], which originally relies on a series of two-dimensional visual convolutions. Thus, intermediate feature maps depend not only on the traffic data but also on city-specific empty areas in the traffic movies. As the road graph provides more specific topological information than just a regular pixel grid, we generalize this model to graphs by applying GNN layers instead of visual convolutions (CNN). As a consequence, empty areas are excluded from computations. Thereby, these areas cannot affect the downstream latent representations, which we assume is beneficial for cross-city generalization.

In contrast to CNNs, GNNs cannot capture the geographical topology of a node neighborhood due to their permutation-invariant accumulation functions. For example, a regular GNN cannot distinguish whether a neighboring node lies to the north or the south. Invariance to geographical topologies seems like a major drawback, as the traffic features are directional. We propose a straightforward method to mitigate this drawback and specialize a GNN to be sensitive to these geographical neighborhood topologies (s. Sec. III-B). As the edges in the road graph only ever connect nodes that are also adjacent in the image space, the graph diameter (maximum distance between pixels on the graph) is d ≥ 495 + 436 = 931. A single-layer GNN can only explore a 1-hop neighborhood; thus, long-range relations between nodes could only be exploited using a very deep GNN. However, it has been shown by Zhou et al. [2020] that common GNNs do not scale well with model depth and tend to oversmooth in such cases. To still allow information exchange over the whole graph, we include a global state vector in our GNN layers. This vector can be understood as being adjacent to every node in the graph.
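Both the directional specialization and the pooling and upsampling operations introduced below rely on the road graph and on the 2D pixel position of every node. The challenge provides this graph pre-built; the following is only a conceptual sketch of how it could be derived from a binary street mask, assuming 8-connectivity for "adjacent" pixels.

```python
import numpy as np

def build_road_graph(street_mask: np.ndarray):
    """Derive a road graph from a binary street mask of shape (H, W).

    Nodes are street pixels; an edge connects two nodes whose pixels are
    adjacent (8-neighbourhood assumed here). Returns the per-node pixel
    positions and an (E, 2) edge list of (sender, receiver) node indices.
    """
    h, w = street_mask.shape
    ys, xs = np.nonzero(street_mask)
    node_pos = np.stack([ys, xs], axis=1)            # (N, 2) pixel coordinates
    node_id = -np.ones((h, w), dtype=np.int64)
    node_id[ys, xs] = np.arange(len(ys))

    edges = []
    # Checking four of the eight neighbours suffices; reverse edges are added explicitly.
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        y2, x2 = ys + dy, xs + dx
        inside = (y2 >= 0) & (y2 < h) & (x2 >= 0) & (x2 < w)
        y1, x1, y2, x2 = ys[inside], xs[inside], y2[inside], x2[inside]
        street = street_mask[y2, x2] > 0
        src = node_id[y1[street], x1[street]]
        dst = node_id[y2[street], x2[street]]
        edges.append(np.stack([src, dst], axis=1))
        edges.append(np.stack([dst, src], axis=1))
    return node_pos, np.concatenate(edges, axis=0)
```

The returned node_pos array keeps the pixel location of every node, which is exactly the information used by the operations described next.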
Furthermore, we leverage the unique position of each node in the 2D pixel grid to design down- and upsampling operations (s. Sec. III-C), and thus to expand the receptive field of the GNN. Following the U-Net schematic depicted in Fig. 2a, we arrange these operations in a downsampling and an upsampling branch that are additionally connected via skip-connections. The GNNs in the downsampling branch consist of a single layer, whereas the GNNs in the upsampling branch consist of two layers. We will first describe the different types of features used in our layers and then the computations inside the layers.

The input to our model is composed of the given road graph and the static street map images. The road graph is extended by edge feature vectors $e_k \in E$ and one global feature vector $u$. The node features $v_i \in V$ correspond to the pixel values of the traffic movies, where the individual frames are concatenated into a single vector per node. Speed and volume information is scaled down to values between 0 and 1, i.e. divided by 255. To initialize edge features, a two-layer CNN generates a feature map with eight channels from the normalized static street map $S \in \mathbb{R}^{1 \times h \times w \times 1}$. This results in an additional set of node features $\tilde{V}$; concatenating the features of the sender and receiver node of each edge yields the edge features

$e_k = [\tilde{v}_{s_k} \,\|\, \tilde{v}_{r_k}]$,

where $\tilde{v}_{s_k}$ and $\tilde{v}_{r_k}$ are the features of the sender and receiver node of edge $k$, respectively, and $[\cdot \,\|\, \cdot]$ denotes concatenation. The global state is computed by summing up the node features $V$ and scaling them by a constant $\lambda = 10^{-5}$. This factor has to be included, as the sum over all nodes can be large, and scaling by $\lambda$ empirically produced suitable numbers. Next, time and weekday information is encoded in $t_{enc}$ and $d_{enc}$, respectively, and concatenated to the global state vector. The time $t$ is encoded as a 2D position on the unit circle, where the 24-hour interval corresponds to one full revolution, and the weekday is one-hot encoded.

As the basis of our graph layer, we use the full GN block proposed by Battaglia et al. [2018], as shown in Fig. 2b. It is composed of parameterized update functions $\phi$ that are applied to update node, edge, and global graph features, as well as unparameterized functions $\rho$ that accumulate sets of features. These functions are implemented as

$\phi(a_1, \ldots, a_n) = \mathrm{ReLU}\big(W\,[\|_{\forall a_i}\, a_i] + b\big)$,

where $\|_{\forall a_i}$ denotes vector concatenation over all input vectors $a_i$, $W$ is the weight matrix, and $b$ is the bias vector. ReLU [Agarap, 2018] is used as the activation function. Fig. 3 visualizes the computations conceptually.

We adapt the full GN block to make the computations sensitive to the local neighborhood topology (adaptations highlighted in Fig. 2b). Specifically, we split the road graph into four subgraphs $g \in G$. As shown in Fig. 4, the subgraphs each contain the edges directed into one of the four quadrants north-west, north-east, south-east, and south-west. We choose these four subgraphs as they reflect the partitioning of the directional traffic speed and volume information given in the node features. Note that each subgraph uses the same node features. For each subgraph $g$, separate edge transformations $\phi^e_g$ compute the updated edge features $e'_{g,k}$ for each edge $k$ in subgraph $g$. Then, the updated node features $v'_i$ are computed by concatenating the edge features of the four subgraphs and, for each node $v_i \in V$, accumulating the incident edges. Finally, the global state vector is updated using the accumulated node and edge features, as well as the prior global state vector, as inputs. The update functions are

$e'_{g,k} = \phi^e_g\big(e_{g,k},\, v_{s_k},\, v_{r_k},\, u\big)$,
$v'_i = \phi^v\big(\big[\|_{g \in G}\; \rho^{e \to v}\big(\{e'_{g,k}\}_{k \in \mathcal{N}_i}\big)\big],\, v_i,\, u\big)$,
$u' = \phi^u\big(\rho^{e \to u}(E'),\, \rho^{v \to u}(V'),\, u\big)$,

where $v_{s_k}$ and $v_{r_k}$ are the sender and receiver node of edge $k$, respectively, and $\mathcal{N}_i$ denotes the neighborhood of node $i$. The outputs of the graph layer are the new state vectors for nodes $v'_i$, edges $e'_k$, and the global features $u'$.
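The following is a minimal sketch of the directional edge partitioning described above. Node positions are the pixel coordinates provided by the road graph; the handling of purely horizontal or vertical edges (ties) is an assumption made for illustration.

```python
import numpy as np

def split_edges_by_quadrant(node_pos, edge_index):
    """Partition edges into the four directional subgraphs (sketch).

    node_pos:   (N, 2) array of (row, col) pixel coordinates
    edge_index: (E, 2) array of (sender, receiver) node indices
    """
    src, dst = edge_index[:, 0], edge_index[:, 1]
    d_row = node_pos[dst, 0] - node_pos[src, 0]   # > 0: edge points south (down)
    d_col = node_pos[dst, 1] - node_pos[src, 1]   # > 0: edge points east (right)
    north, east = d_row <= 0, d_col >= 0          # tie-breaking is an assumption
    return {
        "NE": edge_index[north & east],
        "SE": edge_index[~north & east],
        "SW": edge_index[~north & ~east],
        "NW": edge_index[north & ~east],
    }
```

Each of the four edge sets then receives its own edge update function φ^e_g, while all four subgraphs share the same node features.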
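A compact sketch of how such a direction-sensitive layer could be implemented, here in PyTorch. The class, parameter names, and hidden size are ours, and the aggregation functions ρ are instantiated as a sum over incoming edges and a mean over the graph purely for illustration; the text above does not commit to these exact choices.

```python
import torch
import torch.nn as nn

class DirectionalGNLayer(nn.Module):
    """Sketch of an adapted GN block with four direction-specific edge MLPs."""

    def __init__(self, node_dim, edge_dim, global_dim, hidden=32):
        super().__init__()
        # One edge update function per directional subgraph (NE, SE, SW, NW).
        self.phi_e = nn.ModuleList([
            nn.Sequential(nn.Linear(edge_dim + 2 * node_dim + global_dim, hidden), nn.ReLU())
            for _ in range(4)
        ])
        self.phi_v = nn.Sequential(nn.Linear(4 * hidden + node_dim + global_dim, hidden), nn.ReLU())
        self.phi_u = nn.Sequential(nn.Linear(hidden + hidden + global_dim, hidden), nn.ReLU())

    def forward(self, v, u, subgraph_edges, subgraph_feats):
        # v: (N, node_dim), u: (global_dim,)
        # subgraph_edges: list of 4 (E_g, 2) long tensors (sender, receiver)
        # subgraph_feats: list of 4 (E_g, edge_dim) edge feature tensors
        n = v.shape[0]
        per_dir_agg, per_dir_edges = [], []
        for g in range(4):
            send, recv = subgraph_edges[g][:, 0], subgraph_edges[g][:, 1]
            inputs = torch.cat(
                [subgraph_feats[g], v[send], v[recv], u.expand(len(send), -1)], dim=-1)
            e_new = self.phi_e[g](inputs)                       # (E_g, hidden)
            per_dir_edges.append(e_new)
            # Sum incoming edge features per receiving node (aggregation is assumed).
            agg = torch.zeros(n, e_new.shape[-1]).index_add_(0, recv, e_new)
            per_dir_agg.append(agg)
        # Node update from the concatenated per-direction aggregates.
        v_new = self.phi_v(torch.cat(per_dir_agg + [v, u.expand(n, -1)], dim=-1))
        # Global update from accumulated edge and node features plus the old state.
        e_all = torch.cat(per_dir_edges, dim=0)
        u_new = self.phi_u(torch.cat([e_all.mean(0), v_new.mean(0), u], dim=-1))
        return v_new, per_dir_edges, u_new
```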
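For completeness, the initial global state and the time and weekday encodings described earlier in this section could be computed as follows; the sin/cos ordering and the weekday indexing are illustrative assumptions.

```python
import numpy as np

LAMBDA = 1e-5  # scaling constant for the node-feature sum, as stated above

def encode_time(hour_of_day: float) -> np.ndarray:
    """Encode the time of day as a 2D point on the unit circle (24 h = one revolution)."""
    angle = 2.0 * np.pi * hour_of_day / 24.0
    return np.array([np.sin(angle), np.cos(angle)], dtype=np.float32)

def encode_weekday(weekday: int) -> np.ndarray:
    """One-hot encode the weekday (0 = Monday, ..., 6 = Sunday; indexing assumed)."""
    one_hot = np.zeros(7, dtype=np.float32)
    one_hot[weekday] = 1.0
    return one_hot

def initial_global_state(node_features: np.ndarray, hour: float, weekday: int):
    """Scaled node-feature sum, concatenated with time and weekday encodings."""
    u = LAMBDA * node_features.sum(axis=0)
    return np.concatenate([u, encode_time(hour), encode_weekday(weekday)])
```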
While down- and upsampling operations on graphs are not straightforward in general, in the current setting we can exploit the topological information that locates nodes in the pixel grid. Thus, existing visual pooling and upsampling methods can be adapted to work on the street graph. Our downsampling operation directly corresponds to regular max-pooling with a kernel size and a stride of 2. Specifically, we partition the set of nodes V according to their position in the 2D grid such that each partition contains the nodes in a 2 × 2 window, and take the feature-wise maximum. Hence, each partition is condensed into a new node, resulting in a new set of nodes V'. An edge connects two new nodes u' and v' if any two nodes in the corresponding pooling windows were connected by an edge. If multiple such connections are present, the feature-wise maximum of the corresponding edge features is taken.

The upsampling operation relies on a given input graph G_i and a target graph structure G_t. Fig. 6 visualizes our upsampling method; the node features in the upsampled graph are generated via a graph layer using the upsampling graph (middle). First, n_t zero-initialized nodes are introduced to the input graph, where n_t denotes the number of nodes in G_t. Second, the positions of the input nodes are scaled by a factor of two. Then, we partition the nodes by their position in the two-dimensional space such that each partition contains the nodes in a 2 × 2 window, as in the downsampling case. Next, edges are created by connecting nodes in G_i to all nodes in G_t that are in the same partition. This creates an upsampling graph as shown in Fig. 6 (middle), effectively connecting G_i with G_t. Our adapted GNN is applied to the upsampling graph to propagate information from G_i to G_t. This results in new node features for the upsampled graph. Note that the GNN uses the same edge partitioning as described in Sec. III-B. Hence, the operation is sensitive to the relative neighborhood topology and will therefore produce different values in the receiving nodes, even if the sending node is the same.

We train our model for 800k steps on the provided training data with a standard MSE loss using the Adam optimizer [Kingma and Ba, 2017]. The learning rate schedule contains 2k steps of warm-up. After warm-up, the learning rate is 0.002 and decays exponentially at a rate of 0.98 every 100 steps. The minimal learning rate is 0.0002. At each step, we sample valid starting frames for the seed sequences (model input), which are frames that correspond to a time between 00:00 and 22:00. We average the gradients of 16 successive samples to update the model parameters, which effectively corresponds to a batch size of 16.
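At a five-minute frame spacing, the six prediction horizons translate into fixed frame offsets relative to the last seed frame. A small sketch of the sampling; the number of twelve seed frames (one hour of input) is an assumption made for illustration.

```python
FRAME_MINUTES = 5
HORIZONS_MIN = [5, 10, 15, 30, 45, 60]
TARGET_OFFSETS = [m // FRAME_MINUTES for m in HORIZONS_MIN]   # [1, 2, 3, 6, 9, 12]

def sample_example(movie, start, num_seed_frames=12):
    """Slice one training example from a (T, H, W, 8) traffic movie (sketch).

    `num_seed_frames=12` is an assumption; `start` must be a valid seed start,
    i.e. correspond to a time between 00:00 and 22:00.
    """
    seed_end = start + num_seed_frames
    seed = movie[start:seed_end]                                   # model input
    targets = movie[[seed_end - 1 + o for o in TARGET_OFFSETS]]    # 6 target frames
    return seed, targets
```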
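The learning rate schedule can be written down directly; the linear shape of the warm-up and the staircase form of the decay are assumptions, the remaining constants are the ones stated above.

```python
def learning_rate(step, base_lr=0.002, warmup_steps=2000,
                  decay_rate=0.98, decay_every=100, min_lr=0.0002):
    """Warm-up for 2k steps, then exponential decay by 0.98 every 100 steps,
    clipped at a minimum learning rate of 2e-4 (warm-up shape is assumed)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    decayed = base_lr * decay_rate ** ((step - warmup_steps) // decay_every)
    return max(decayed, min_lr)

# Averaging the gradients of 16 successive samples can be implemented via
# gradient accumulation: back-propagate each sample, divide the accumulated
# gradients by 16, then take one optimizer step and reset the gradients.
```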
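Referring back to the downsampling operation of Sec. III-C, the following sketches the 2 × 2 graph max-pooling on pixel-positioned nodes. It assumes float node features and only re-wires the edges; carrying pooled edge features is omitted here.

```python
import numpy as np

def graph_max_pool_2x2(node_pos, node_feats, edge_index):
    """2x2 max-pooling on a pixel-positioned graph (sketch).

    Nodes falling into the same 2x2 window are merged into one node whose
    features are the element-wise maximum; two pooled nodes are connected
    if any of their members were connected in the input graph.
    """
    win = node_pos // 2                                  # window index per node
    new_pos, inverse = np.unique(win, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    num_new = len(new_pos)

    # Feature-wise maximum per window (assumes float node features).
    new_feats = np.full((num_new, node_feats.shape[1]), -np.inf, dtype=node_feats.dtype)
    np.maximum.at(new_feats, inverse, node_feats)

    # Re-wire edges to the pooled nodes; drop self-loops and duplicates.
    new_edges = inverse[edge_index]                      # (E, 2)
    new_edges = new_edges[new_edges[:, 0] != new_edges[:, 1]]
    new_edges = np.unique(new_edges, axis=0)
    return new_pos, new_feats, new_edges, inverse
```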
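Similarly, the bipartite upsampling graph connecting G_i and G_t could be constructed as follows. Only the wiring is sketched here; the node features of G_t are then produced by one pass of the direction-sensitive layer over this graph, as described above.

```python
import numpy as np

def build_upsampling_graph(input_pos, target_pos):
    """Bipartite upsampling graph between input graph G_i and target graph G_t (sketch).

    After scaling the input positions by two, an input node and a target node
    are connected whenever they fall into the same 2x2 window of the fine grid,
    i.e. whenever target_pos // 2 equals the (unscaled) input position.
    """
    # Map every occupied window to the (single) input node that represents it.
    window_to_input = {tuple(p): i for i, p in enumerate(input_pos)}

    senders, receivers = [], []
    for t, q in enumerate(target_pos):
        i = window_to_input.get(tuple(q // 2))
        if i is not None:
            senders.append(i)       # information flows from G_i ...
            receivers.append(t)     # ... into the zero-initialized G_t nodes
    return np.array([senders, receivers]).T              # (E, 2) edge list
```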
We submitted the exact same model to both competitions. Hence, we consider the temporal generalization problem from the core challenge as a kind of spatial generalization problem as well. This is possible as data from 2020 is included in the training set, just for different cities than those used in the evaluation. We evaluate our model using two different evaluation datasets. As the challenge test set is not openly available, we split the dataset and use a portion of the original training data only for model evaluation. The first evaluation set is a small fraction of the training set (≈ 0.4%) obtained by selecting specific points in time. Specifically, for each day in April 2019 (30 days), we sample the seed data at every full hour between 00:00 and 22:00 for each of the eight given cities. Additionally, for exactly the same samples, we flip the data vertically and horizontally. This means that we flip the static images as well as the dynamic traffic movies, and also rearrange the data channels accordingly. This results in a second dataset (denoted as mirrored) that is used to evaluate spatial generalization to new cities. The evaluation metric corresponds to the MSE between prediction and target image, both scaled by 255 to match the original scale of the given data.

Fig. 7 shows predictions and ground truth images for Berlin. The color values are log-scaled. Fig. 8 shows the MSE by city (left) and the MSE by time (right) for both evaluation sets. The average performance is also included (left plot; dashed lines). A high variation of performance across cities is observable. Furthermore, our model consistently performs better than average on the cities Melbourne and Barcelona, but consistently worse on the cities Istanbul, Moscow, and Berlin. The right plot in Fig. 8 shows that samples drawn at different times of day cause variation in performance. Overall, predictions for samples drawn at nighttime, between 01:00 and 04:00, show a smaller error compared to predictions for samples drawn during daytime. This is probably due to the overall traffic activity, which is lower at nighttime. The difference in performance measured on the two evaluation sets is very small, which nicely demonstrates the ability of the model to generalize to novel spatial situations.

Table II shows an excerpt of the leaderboard of the competition:

TABLE II: Leaderboard excerpt (MSE on the core and extended test sets; the relative score is the ratio of the core to the extended score).
Model                                        Core      Extended   Rel. score
Graph-ResNet [Martin et al., 2020]           51.714    61.461     0.841
Vanilla U-Net [Ronneberger et al., 2015]     51.283    -          -
Hybrid U-Net (ours)                          50.521    60.222     0.839
U-Net + multi-task (T) [Lu, 2021]            48.422    -          -
U-Net + multi-task (ST) [Lu, 2021]           -         59.586     -
U-Net Ensemble (T) [Choi, 2019]              48.494    -          -
U-Net Ensemble (ST) [Choi, 2019]             -         59.559     -

The naive average corresponds to a model that calculates the average over the input frames and outputs the result for all future frames. The Graph-ResNet by Martin et al. [2020] was used as a more sophisticated baseline by the competition hosts. Its graph-creation method was changed from using the dynamic data to build the graph to using the high-resolution street maps. The Graph-ResNet was trained for a single epoch and only on traffic data from Berlin. We ranked seventh in the core challenge and fourth in the extended challenge. In both, our model outperformed the baselines significantly. A comparison of performance between the two challenges is difficult, as different cities are used. However, we can compare the score ratio of our model to that of the baselines (s. relative score in Tab. II). The Graph-ResNet has a relative score of 0.841, whereas our proposed model has a relative score of 0.839, which suggests better spatial generalization.
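As an aside on the mirrored evaluation set introduced above: flipping the frames also requires permuting the directional channels consistently. A sketch, assuming the same illustrative channel layout as before ((volume, speed) per direction in the order NE, SE, SW, NW) and that the vertical and horizontal flips are applied jointly.

```python
import numpy as np

DIRECTIONS = ["NE", "SE", "SW", "NW"]          # assumed channel order
# Flipping both axes maps each heading to its opposite: NE<->SW, SE<->NW.
FLIPPED_DIRECTION = {"NE": "SW", "SE": "NW", "SW": "NE", "NW": "SE"}

def mirror_frame(frame):
    """Flip a (H, W, 8) frame vertically and horizontally and permute its channels."""
    flipped = frame[::-1, ::-1, :]             # flip rows (north-south) and columns (east-west)
    out = np.empty_like(flipped)
    for d, name in enumerate(DIRECTIONS):
        d_new = DIRECTIONS.index(FLIPPED_DIRECTION[name])
        out[..., 2 * d_new: 2 * d_new + 2] = flipped[..., 2 * d: 2 * d + 2]
    return out
```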
To assess the effectiveness of the directional subgraphing, we train a model that is applied to the complete input graph instead of the four subgraphs. This is done simply by removing our modifications from the full GN block. Thereby, the graph operations become locally permutation invariant instead of being sensitive to the neighborhood topology. We refer to this simplified version of our model as Graph U-Net. Table III shows the MSE measured on the evaluation dataset consisting of cities from the training set and on the mirrored evaluation dataset (denoted as MSE*). It can be observed that our Hybrid U-Net consistently outperforms the Graph U-Net on the evaluation dataset, whereas the results measured on the mirrored data are very similar for the two models. To quantify spatial generalization, we compute the ratio between MSE and MSE* (denoted as rel. MSE). A relative MSE of 1.0 means that the model produces exactly the same error on the mirrored cities as on the known cities. This would indicate that the performance is not impacted by the flipping of the data, which corresponds to perfect spatial generalization. The average rel. MSE measured for the Graph U-Net is consistently larger than the rel. MSE of our proposed Hybrid U-Net model. This indicates that the Graph U-Net generalizes better to new cities than our Hybrid U-Net model. Table III also shows the results for a vanilla U-Net, which consists of 8 consecutive downsampling and upsampling blocks, respectively. Its MSE measured on the evaluation set is very close to that of our proposed model, but on average slightly worse. In contrast, the MSE* measured on the flipped cities is significantly worse, which is evidence for worse spatial generalization capabilities. This is further supported by the considerably worse relative MSE found for the vanilla U-Net.

The given problems in the traffic forecast challenge have been approached following two different routes: either using a visual model to process whole frames of the traffic movies, or using a GNN and processing only pixels that actually depict a road. Intuitively, a graph-based approach leverages prior knowledge of the underlying structure of the street network, which should provide better generalization and transfer. Additionally, areas without a street are excluded from the graph and therefore do not explicitly contribute to the prediction. This is intuitively beneficial as these areas do not contain traffic information. This principle has already been demonstrated by Martin et al. [2020]. As a drawback, however, a purely graph-based approach loses information on directionality, which is crucial for the traffic forecasting challenge as the data is provided in such a format. Here, we introduced a U-Net architecture with graph layers that we adapted to be sensitive to the geographical neighborhood topology by splitting the road graph into four direction-dependent subgraphs. Furthermore, we utilize the 2D node positions for graph down- and upsampling, which effectively expands the receptive field and allows inference based on a larger portion of the road network. Although we have shown that the approach works in general, we have not yet performed extensive hyperparameter tuning. Investigating the difference in performance between the different cities, refining the proposed up- and downsampling layers, and scaling the complexity of the model are promising directions to explore in future work.
REFERENCES

O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015.
Martin et al. Graph-ResNets for short-term traffic forecasts in almost unknown cities. 2020.
Zhou et al. Effective training strategies for deep graph neural networks. CoRR, 2020.
P. W. Battaglia et al. Relational inductive biases, deep learning, and graph networks. 2018.
A. F. Agarap. Deep learning using rectified linear units (ReLU). ArXiv, 2018.
D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. 2017.
Lu. Learning to transfer for traffic forecasting via multi-task learning. CoRR, abs/2111.15542, 2021.
Choi. Traffic map prediction using UNet based deep convolutional neural network. CoRR, abs/1912.05288, 2019.