Course Material to Learn GIN
Learn GIN, an MP-GNN as powerful as the WL graph isomorphism test. Find out why GIN cannot classify CSL graphs and discover the code and paper to learn more.
Xavier Bresson
Course material to learn GIN, an MP-GNN as powerful as the WL graph isomorphism test.
— Xavier Bresson (@xbresson) June 9, 2023
Slides: https://t.co/8KbWGdCuUy
Code: https://t.co/Tz43z2H5J5
Paper: https://t.co/2pk8habijJ
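For concreteness, here is a minimal sketch of the GIN update rule, h_v' = MLP((1 + eps) * h_v + sum of neighbor features h_u), written in PyTorch. The dense adjacency matrix and layer name are assumptions for illustration; see the linked code for the actual course implementation.

```python
# Minimal sketch of one GIN layer (Xu et al., "How Powerful are Graph
# Neural Networks?"). Assumes a dense float adjacency matrix A of shape
# (N, N) and node features h of shape (N, d); names are illustrative.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, h, A):
        # Injective sum aggregation: (1 + eps) * h_v + sum_{u in N(v)} h_u
        return self.mlp((1 + self.eps) * h + A @ h)
```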
A student asked me why GIN cannot classify CSL graphs. pic.twitter.com/xz1FeFLZuu
CSL graphs are cycle graphs with skip links. These highly symmetric graphs can only be distinguished with techniques strictly more powerful than the WL test. pic.twitter.com/9ZILMNI1Pl
— Xavier Bresson (@xbresson) June 9, 2023
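For readers who want to reproduce the setup, a CSL(N, s) graph is an N-node cycle where every node i also gets a skip link to node (i + s) mod N. The helper below is a hypothetical sketch built on networkx, not the course code.

```python
# Hypothetical helper: CSL(n, s) is an n-node cycle where each node i also
# gets a skip link to node (i + s) mod n, so every node has degree 4.
import networkx as nx

def csl_graph(n, skip):
    g = nx.cycle_graph(n)
    g.add_edges_from((i, (i + skip) % n) for i in range(n))
    return g

g1 = csl_graph(41, 2)  # CSL(41, 2)
g2 = csl_graph(41, 3)  # CSL(41, 3): same size and degrees, non-isomorphic
```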
GIN is provably at most as powerful as WL. It is thus unable to distinguish CSL graphs with simple node features, as shown in the code above.
— Xavier Bresson (@xbresson) June 9, 2023
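One self-contained way to verify this claim (my sketch, not the linked notebook) is to run 1-WL color refinement with constant initial colors on two non-isomorphic CSL graphs: since both graphs are 4-regular, every node keeps the same color at every iteration, so the color multisets never differ.

```python
# Sketch: 1-WL color refinement with constant initial node features cannot
# separate two non-isomorphic CSL graphs, mirroring GIN's failure.
import networkx as nx

def csl_graph(n, skip):  # same hypothetical helper as above
    g = nx.cycle_graph(n)
    g.add_edges_from((i, (i + skip) % n) for i in range(n))
    return g

def wl_color_multiset(g, iters=3):
    colors = {v: 0 for v in g}  # constant "simple" node features
    for _ in range(iters):
        # New color = hash of (own color, sorted multiset of neighbor colors)
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in g[v]))))
                  for v in g}
    return sorted(colors.values())

# Non-isomorphic, yet WL assigns identical color multisets to both.
assert wl_color_multiset(csl_graph(41, 2)) == wl_color_multiset(csl_graph(41, 3))
```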
This sounds like bad news, as most MP-GNNs are not even as powerful as GIN in the WL sense -- so is the MP-GNN paradigm doomed to be unexpressive and useless?
— Xavier Bresson (@xbresson) June 9, 2023
No! @brunofmr et al. showed that the WL expressivity of GIN can be boosted by providing a node positional encoding. Importantly, all MP-GNNs can also be boosted to go beyond WL expressivity (an experimental observation, no proof).
— Xavier Bresson (@xbresson) June 9, 2023
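The wiring behind "providing a node positional encoding" is simple: concatenate the PE to the input node features before the first message-passing layer. A one-function sketch, with shapes assumed for illustration:

```python
# Sketch: inject a node PE by concatenating it to the input node features
# before the first message-passing layer of any MP-GNN.
import torch

def add_pe(x, pe):
    # x: (N, d) node features, pe: (N, k) positional encoding
    return torch.cat([x, pe], dim=-1)  # (N, d + k) input to the MP-GNN
```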
Then, which node PE to choose? A natural choice is the node index, but there are N! possible permutations, and the network can only see a few during training, so generalization is limited (27% success).
— Xavier Bresson (@xbresson) June 9, 2023
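A hypothetical sketch of this index PE: one-hot node indices under a random permutation, so each training pass exposes the network to just one of the N! possible orderings.

```python
# Sketch: node-index PE as one-hot vectors under a random node ordering.
# With N nodes there are N! orderings; training sees only a tiny fraction.
import torch

def node_index_pe(num_nodes):
    perm = torch.randperm(num_nodes)   # one of N! possible orderings
    return torch.eye(num_nodes)[perm]  # (N, N) one-hot index features
```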
We proposed Laplacian eigenvectors as node PE in 2020, but there is a sign ambiguity with 2^K possible flips, where K is the number of eigenvectors. However, K is typically small, so the network can learn this invariance, with good empirical performance and 100% generalization success on CSL.
— Xavier Bresson (@xbresson) June 9, 2023
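Here is a sketch of that Laplacian-eigenvector PE with random sign flipping at training time. The dense-adjacency input and function name are assumptions; the course code linked above uses a graph library.

```python
# Sketch: Laplacian-eigenvector PE. Take the K eigenvectors of the symmetric
# normalized Laplacian after the trivial one, and randomly flip each
# eigenvector's sign during training so the net learns the 2^K invariance.
import torch

def laplacian_pe(A, k, training=True):
    # A: dense float adjacency matrix of shape (N, N), assumed for illustration
    deg = A.sum(dim=1)
    d = deg.clamp(min=1).pow(-0.5)               # guard against isolated nodes
    L = torch.eye(A.size(0)) - d[:, None] * A * d[None, :]
    _, eigvecs = torch.linalg.eigh(L)            # eigenvalues in ascending order
    pe = eigvecs[:, 1:k + 1]                     # skip the first (trivial) eigenvector
    if training:
        signs = torch.randint(0, 2, (1, k)) * 2 - 1  # random +/-1 per eigenvector
        pe = pe * signs
    return pe                                    # (N, k) node PE
```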
A last observation on the expressive power of GIN. The WL expressivity proof is an existence result -- it is guaranteed that there exist parameters that make GIN as powerful as WL. But there is *no* guarantee that SGD will find these parameters.
— Xavier Bresson (@xbresson) June 9, 2023
This caveat holds for GIN and most expressive (graph) neural networks. Except for very rare cases like the NTK regime and deep linear networks, it is not known how to prove that SGD converges to specific solutions for any kind of network!
— Xavier Bresson (@xbresson) June 9, 2023