Consider a set of time series X = {x1, x2, · · · , xN}, where each instance xi has dimensions T × F, with T denoting the sequence length and F the feature dimension. To enhance feature extraction and capture dependencies across sequences, an attention mechanism Att(xi; θ) is integrated into the dilated CNN module. Concurrently, a Generative Adversarial Network (GAN) augments the dataset X with synthetic samples x̃i = G(zi; θ), where zi ∼ P(z), yielding the enriched set X̃ = X ∪ {x̃1, x̃2, · · · , x̃M} for improved model generalization. From the augmented data, we obtain representation vectors ri = {ri,1, ri,2, · · · , ri,T} for each series, where ri,t corresponds to timestamp t.
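The following is a minimal sketch of this pipeline in PyTorch, assuming a causal-style dilated Conv1d stack, a multi-head self-attention layer standing in for Att(xi; θ), and a simple MLP generator for G(zi; θ); all layer sizes, names, and hyperparameters are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

T, F, Z_DIM = 64, 8, 32  # sequence length, feature dim, noise dim (illustrative)


class DilatedAttnEncoder(nn.Module):
    """Dilated CNN encoder with an attention layer Att(x_i; theta) on top."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        # Stack of dilated 1-D convolutions; the dilation doubles per layer
        # to enlarge the receptive field over the sequence.
        self.convs = nn.ModuleList([
            nn.Conv1d(feat_dim if i == 0 else hidden, hidden,
                      kernel_size=3, padding=2 ** i, dilation=2 ** i)
            for i in range(3)
        ])
        # Self-attention over timestamps to capture dependencies across the sequence.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, F) -> Conv1d expects (batch, F, T)
        h = x.transpose(1, 2)
        for conv in self.convs:
            h = torch.relu(conv(h))
        h = h.transpose(1, 2)          # (batch, T, hidden)
        r, _ = self.attn(h, h, h)      # r_{i,t} for every timestamp t
        return r                       # (batch, T, hidden)


class Generator(nn.Module):
    """GAN generator G(z; theta) producing synthetic series x~_i of shape (T, F)."""

    def __init__(self, z_dim: int, seq_len: int, feat_dim: int):
        super().__init__()
        self.seq_len, self.feat_dim = seq_len, feat_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, seq_len * feat_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, self.seq_len, self.feat_dim)


if __name__ == "__main__":
    X = torch.randn(16, T, F)              # real set X: N = 16 series
    z = torch.randn(8, Z_DIM)              # z_i ~ P(z)
    X_syn = Generator(Z_DIM, T, F)(z)      # synthetic samples x~_i, M = 8
    X_aug = torch.cat([X, X_syn], dim=0)   # augmented set X~ = X ∪ {x~_i}
    R = DilatedAttnEncoder(F)(X_aug)       # representations r_{i,t}
    print(R.shape)                         # torch.Size([24, 64, 64])
```

With kernel size 3, setting padding equal to the dilation keeps the output length at T, so each series retains one representation vector per timestamp.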