Vessel trajectory prediction using AIS data plays an important role in maritime navigation warning and safety. A key property of trajectory prediction is multimodality, which arises from the uncertainty of vessel behavior. However, complex trajectory modes are difficult to learn from noisy, low-dimensional AIS data. In this paper, we propose a new method for multimodal vessel trajectory prediction, called Multimodal Vessel Trajectory Prediction via Modes Distribution Modeling (VT-MDM). This approach addresses the above challenges by introducing an additional latent representation that characterizes complex trajectory modes independently. Specifically, we introduce an additional latent vector as the encoding of the trajectory modes; it is randomly sampled from a multivariate Gaussian distribution to generate multiple predicted trajectories. To enable this Gaussian distribution to capture the vessel trajectory modes, we use adversarial learning to force all of its realizations to generate realistic predicted trajectories. Furthermore, we encourage the mapping between the latent mode vectors and the predicted trajectories to be invertible and smooth, which drives VT-MDM to produce genuinely multimodal predictions that vary gradually with the latent code. Experiments on a real AIS dataset show that our method achieves multimodal trajectory prediction with high accuracy.
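To make the sampling mechanism concrete, here is a minimal illustrative sketch in PyTorch: each draw of a Gaussian latent code is decoded, together with the encoded AIS history, into one candidate future trajectory. All names, shapes, and dimensions (`TrajectoryGenerator`, `obs_dim`, `pred_len`, and so on) are assumptions for illustration, not the authors' implementation.

```python
# Sketch: sample a Gaussian latent mode code, decode each sample into
# one predicted trajectory. Illustrative only; not the VT-MDM codebase.
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    def __init__(self, obs_dim=4, hidden_dim=64, latent_dim=8, pred_len=12):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Decode [history encoding ; latent mode code] into a trajectory.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim + latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, pred_len * 2),  # (lat, lon) per step
        )
        self.latent_dim = latent_dim
        self.pred_len = pred_len

    def forward(self, obs_traj, num_modes=5):
        # obs_traj: (batch, obs_len, obs_dim) observed AIS features.
        _, h = self.encoder(obs_traj)              # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        preds = []
        for _ in range(num_modes):
            # Each Gaussian sample z encodes one trajectory mode.
            z = torch.randn(h.size(0), self.latent_dim, device=h.device)
            out = self.decoder(torch.cat([h, z], dim=-1))
            preds.append(out.view(-1, self.pred_len, 2))
        return torch.stack(preds, dim=1)           # (batch, modes, pred_len, 2)

gen = TrajectoryGenerator()
obs = torch.randn(16, 8, 4)         # 16 histories, 8 steps, 4 features each
futures = gen(obs, num_modes=5)     # (16, 5, 12, 2) candidate trajectories
```

In training, the adversarial learning described above would score every sampled trajectory with a discriminator so that all realizations of the Gaussian yield realistic predictions, while the invertibility and smoothness constraints on the mapping from the latent code to the trajectory keep the modes distinct and gradually varying.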
Out-of-Distribution Generalization of Federated Learning via Implicit Invariant Relationships
Yaming Guo, Kai Guo, Xiaofeng Cao, and 2 more authors
Out-of-distribution generalization is challenging for non-participating clients of federated learning under distribution shifts. A proven strategy is to exploit invariant relationships between input and target variables, which hold equally well for non-participating clients. However, invariant relationships are usually learned explicitly from data, representations, or distributions, which violates the federated principles of privacy preservation and limited communication. In this paper, we propose FedIIR, which implicitly learns invariant relationships from model parameters for out-of-distribution generalization, adhering to the above principles. Specifically, we use prediction disagreement to quantify invariant relationships and reduce it implicitly through inter-client gradient alignment. Theoretically, we characterize the range of non-participating clients to which FedIIR is expected to generalize and present convergence results for FedIIR in the massively distributed setting with limited communication. Extensive experiments show that FedIIR significantly outperforms relevant baselines in terms of out-of-distribution generalization for federated learning.
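A hedged sketch of one plausible form of inter-client gradient alignment follows: the penalty measures how far each client's gradient deviates from the average gradient, so minimizing it pushes clients' gradients to agree. The penalty form, function names, and the way it enters the training objective are assumptions for illustration, not the exact FedIIR procedure.

```python
# Sketch of inter-client gradient alignment; illustrative, not FedIIR's
# exact rule.
import torch

def client_grad(model, loss_fn, x, y):
    """Flattened gradient of one client's empirical loss."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def alignment_penalty(per_client_grads):
    """Mean squared deviation of client gradients from their average."""
    grads = torch.stack(per_client_grads)          # (clients, params)
    mean_grad = grads.mean(dim=0, keepdim=True)
    return ((grads - mean_grad) ** 2).sum(dim=1).mean()

model = torch.nn.Linear(10, 2)
loss_fn = torch.nn.CrossEntropyLoss()
# Two hypothetical clients with different local data distributions.
grads = [client_grad(model, loss_fn, torch.randn(8, 10),
                     torch.randint(0, 2, (8,)))
         for _ in range(2)]
# Illustrative objective: empirical risk + lam * alignment_penalty(grads).
print(alignment_penalty(grads).item())
```

In a real federated deployment, such a penalty would have to be computed from quantities clients already communicate during aggregation (e.g., gradients), so that the privacy-preservation and limited-communication principles discussed above still hold.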