Learning on Graphs Conference 2023 compressed

This page contains summaries of all LoG 2023 accepted papers, generated by the compressor, my personal LLM-based summarization project.


Spectral Subgraph Localization

Ama Bembua Bainson, Judith Hermanns, Petros Petsinis, Niklas Aavad, Casper Dam Larsen, Tiarnan Swayne, Amit Boyarski, Davide Mottin, Alex M. Bronstein, Panagiotis Karras

https://openreview.net/forum?id=zrOMpghV0M

Keywords: spectral methods; subgraph localization; subgraph isomorphism; optimization

Compressor summary: The paper proposes a method that localizes a query graph Q within a larger graph G by aligning their Laplacian spectra, improving stability via bagging strategies while deferring the exact node-correspondence task.
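
To make the spectral idea concrete, here is a minimal sketch (not the authors' optimization scheme) that scores candidate node sets by how closely their induced subgraph's Laplacian spectrum matches the query's; the toy graph, query, and candidate sets are illustrative assumptions.

```python
# Illustrative spectrum-based subgraph scoring, not the paper's algorithm:
# candidate node sets whose induced subgraph's Laplacian spectrum is
# closest to the query's score best.
import numpy as np
import networkx as nx

def laplacian_spectrum(G):
    # Eigenvalues of the combinatorial Laplacian L = D - A, sorted ascending.
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(L))

def spectral_score(G, nodes, Q):
    # L2 distance between the query spectrum and the spectrum of the
    # subgraph of G induced by `nodes` (assumes len(nodes) == len(Q)).
    sub = G.subgraph(nodes)
    return np.linalg.norm(laplacian_spectrum(sub) - laplacian_spectrum(Q))

G = nx.karate_club_graph()
Q = nx.cycle_graph(5)  # toy query graph
candidates = [list(range(i, i + 5)) for i in range(0, 25, 5)]
best = min(candidates, key=lambda c: spectral_score(G, c, Q))
print("best candidate:", best)
```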


GwAC: GNNs with Asynchronous Communication

Lukas Faber, Roger Wattenhofer

https://openreview.net/forum?id=zffXH0sEJP

Keywords: GNNs, Weisfeiler-Lehman, Oversmoothing, Underreaching

Compressor summary: The paper introduces an asynchronous communication framework for graph neural networks that preserves their expressiveness and improves performance on several graph learning tasks.


GSCAN: Graph Stability Clustering for Applications with Noise using Edge-Aware Excess-of-Mass

Etzion Harari, Naphtali Abudarham, Roee Litman

https://openreview.net/forum?id=xazYC6pGO5

Keywords: Graph, Clustering, GNN, Unsupervised, Stability

Compressor summary: The paper proposes GSCAN, a graph clustering method based on node features and graph structure that maximizes cluster stability, resists outliers, and works well with Graph Neural Networks (GNN).


Latent Space Representations of Neural Algorithmic Reasoners

Vladimir V Mirjanic, Razvan Pascanu, Petar Veličković

https://openreview.net/forum?id=tRP0Ydz5nN

Keywords: machine learning, graph neural networks, neural algorithmic reasoning, latent spaces, algorithms

Compressor summary: The paper analyzes the latent-space structure of Graph Neural Networks (GNNs) trained to execute classical algorithms and proposes two improvements, a softmax aggregator and latent-space decay, to handle loss of resolution and out-of-range values.
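
The softmax aggregator mentioned above can be illustrated with a small sketch; the temperature parameter and array shapes here are assumptions, not the paper's exact formulation. As the temperature shrinks, the aggregation approaches an element-wise max while staying differentiable.

```python
# Illustrative softmax aggregator: a smooth stand-in for hard max aggregation.
import numpy as np

def softmax_aggregate(messages, temperature=1.0):
    # messages: (num_neighbors, dim). As temperature -> 0 this approaches
    # an element-wise max over neighbors; larger temperatures give a
    # softer, fully differentiable blend.
    w = np.exp(messages / temperature)
    w = w / w.sum(axis=0, keepdims=True)   # per-dimension softmax weights
    return (w * messages).sum(axis=0)

msgs = np.array([[0.1, 2.0], [1.5, 0.3], [0.9, 1.0]])
print(softmax_aggregate(msgs, temperature=0.1))  # close to column-wise max
print(msgs.max(axis=0))
```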


Topological Graph Signal Compression

Guillermo Bernardez, Lev Telyatnikov, Eduard Alarcon, Albert Cabellos-Aparicio, Pere Barlet-Ros, Pietro Lio

https://openreview.net/forum?id=rqp8NfM7Tn

Keywords: Topological Deep Learning, Graph Neural Networks, Compression

Compressor summary: The paper introduces a new Topological Deep Learning method for compressing signals over graphs by inferring higher-order structures and passing messages within them, achieving significant improvements in reconstructing temporal link signals from real-world networks.


Where Did the Gap Go? Reassessing the Long-Range Graph Benchmark

Jan Tönshoff, Martin Ritzert, Eran Rosenbluth, Martin Grohe

https://openreview.net/forum?id=rIUjwxc5lj

Keywords: Graph Neural Networks, Message Passing, Graph Transformers, Long-Range Graph Benchmark

Compressor summary: Graph Transformers and message-passing GNNs perform similarly on long-range interaction tasks when properly optimized, and the reassessment uncovers issues in LRGB's datasets and evaluation metrics.


Representing Edge Flows on Graphs via Sparse Cell Complexes

Josef Hoppe, Michael T Schaub

https://openreview.net/forum?id=qix189lq5D

Keywords: graph signal processing, topological signal processing, cell complexes, topology inference

Compressor summary: The paper proposes a method to obtain sparse and interpretable representations of edge flows in graphs using cellular complexes and an efficient approximation algorithm for the resulting flow representation learning problem.


Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets

Jiale Yan, Hiroaki Ito, Ángel López García-Arias, Yasuyuki Okoshi, Hikari Otsuka, Kazushi Kawamura, Thiem Van Chu, Masato Motomura

https://openreview.net/forum?id=oLrNolMbO8

Keywords: Graph neural networks, Lottery ticket hypothesis, Recurrent neural networks, Pruning

Compressor summary: This paper explores subnetworks in graph neural networks (GNNs) using pruning methods and shows that sparse GNNs can achieve competitive performance and high memory efficiency.


PyTorch Geometric Signed Directed: A Software Package on Graph Neural Networks for Signed and Directed Graphs

Yixuan He, Xitong Zhang, Junjie Huang, Benedek Rozemberczki, Mihai Cucuringu, Gesine Reinert

https://openreview.net/forum?id=mni7vnYmvY

Keywords: graph neural networks, open-source library, signed networks, directed networks, machine learning

Compressor summary: The paper introduces PyTorch Geometric Signed Directed (PyGSD), a software package for graph neural networks on signed and directed networks, with easy-to-use models, data, metrics, and evaluation methods.


SURF: A Generalization Benchmark for GNNs Predicting Fluid Dynamics

Stefan Künzli, Florian Grötschla, Joël Mathys, Roger Wattenhofer

https://openreview.net/forum?id=lf22LaheVr

Keywords: generalization, fluid dynamics, benchmark, physics simulation

Compressor summary: SURF is a benchmark for testing how well learned graph-based fluid simulators generalize, providing performance and generalization metrics for comparing different models.


Transformers over Directed Acyclic Graphs

Yuankai Luo, Veronika Thost, Lei Shi

https://openreview.net/forum?id=kkOSWva0Fx

Keywords: Graph Neural Networks, Transformers, Graph Classification, Node Classification, Scalability

Compressor summary: The paper proposes efficient attention and positional encoding adaptations for transformer models on directed acyclic graphs (DAGs) that improve their performance over other methods.
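
One plausible reading of DAG-restricted attention is masking attention to each node's ancestors, descendants, and itself; the sketch below illustrates only that masking ingredient, and the toy DAG and random logits are assumptions (the paper's positional encodings are omitted).

```python
# Hedged sketch of DAG-restricted attention: each node attends only to
# nodes it is comparable with under reachability, plus itself.
import numpy as np
import networkx as nx

G = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
n = G.number_of_nodes()
reach = np.zeros((n, n), dtype=bool)
for u in G:
    for v in nx.descendants(G, u):
        reach[u, v] = True
mask = reach | reach.T | np.eye(n, dtype=bool)   # ancestors, descendants, self

logits = np.random.randn(n, n)                   # toy attention logits
logits[~mask] = -np.inf                          # drop incomparable pairs
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(attn.round(2))                             # nodes 1 and 2 ignore each other
```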


Geometric instability of graph neural networks on large graphs

Borun Shi, Emily Morris, Haotian Shen, Weiling Du, Muhammad Hamza Sajjad

https://openreview.net/forum?id=kQHZfyL2XM

Keywords: embedding instability, geometric instability, large graphs, graph neural networks

Compressor summary: The paper proposes a new measure, the Graph Gram Index, for quantifying geometric instability of graph neural network embeddings; it is invariant to various transformations and applicable to large graphs.


FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training

Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang

https://openreview.net/forum?id=iyJjEkU0Ve

Keywords: GNN, Performance, Data loading

Compressor summary: FreshGNN is a framework that trains GNN models faster and more efficiently by reusing a historical cache of node embeddings instead of recomputing them from scratch every time.
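
A toy version of the general idea, reusing cached node embeddings until they go stale, might look like the following; the staleness policy and class names are assumptions, not FreshGNN's actual cache design.

```python
# Toy historical-embedding cache: reuse a node's stored embedding while
# it is "fresh", recompute once it goes stale.
import numpy as np

class EmbeddingCache:
    def __init__(self, max_staleness=3):
        self.store = {}           # node_id -> (embedding, age)
        self.max_staleness = max_staleness

    def get(self, node_id, recompute):
        entry = self.store.get(node_id)
        if entry is not None and entry[1] < self.max_staleness:
            emb, age = entry
            self.store[node_id] = (emb, age + 1)
            return emb            # cache hit: skip recomputation
        emb = recompute(node_id)  # cache miss or stale: recompute
        self.store[node_id] = (emb, 0)
        return emb

cache = EmbeddingCache(max_staleness=2)
recompute = lambda n: np.random.randn(8)  # stand-in for a GNN forward pass
h = cache.get(42, recompute)              # first call computes, later calls reuse
```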


Point-wise Activations and Steerable Convolutional Networks

Marco Pacini, Xiaowen Dong, Bruno Lepri, Gabriele Santin

https://openreview.net/forum?id=gsJPYzdA0S

Keywords: Deep Learning, Equivariant Neural Networks, Steerable Neural Networks

Compressor summary: The paper investigates when point-wise activations can be used in equivariant neural networks and shows their limitations, highlighting the need for more research on better activation functions.


Meta-Path Learning for Multi-relational Graph Neural Networks

Francesco Ferrini, Antonio Longa, Andrea Passerini, Manfred Jaeger

https://openreview.net/forum?id=gW9ZmT9hAe

Keywords: Graph Neural Network, Meta-path, Knowledge graph

Compressor summary: The paper proposes a new method for learning informative relations in graph neural networks using a scoring function and a small set of meta-paths, which performs better than existing approaches on various datasets.


Three Revisits to Node-Level Graph Anomaly Detection: Outliers, Message Passing and Hyperbolic Neural Networks

Jing Gu, Dongmian Zou

https://openreview.net/forum?id=fNsU9gi1Fy

Keywords: anomaly detection, graph, message passing, hyperbolic neural networks

Compressor summary: The paper proposes new methods and comparisons for detecting abnormal instances in complex networks using deep learning techniques and hyperbolic neural networks.


HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers

Maciej Besta, Afonso Claudino Catarino, Lukas Gianinazzi, Nils Blach, Piotr Nyczyk, Hubert Niewiadomski, Torsten Hoefler

https://openreview.net/forum?id=edAX8h5mdA

Keywords: Dynamic Graph Representation Learning, Higher-Order Graph Representation Learning, Transformer, Block-Recurrent Transformer

Compressor summary: The paper proposes HOT, a model that uses higher-order graph structures to improve dynamic link prediction accuracy while minimizing memory usage by imposing hierarchy on the attention matrix of a Transformer.


Enhancing Molecular Property Prediction with Auxiliary Learning and Task-Specific Adaptation

Vishal Dey, Xia Ning

https://openreview.net/forum?id=eR7wBTSF2u

Keywords: Graph Neural Networks, Auxiliary Learning, Molecular Property Prediction, Transfer Learning, Adaptation

Compressor summary: The authors propose methods to adapt pretrained Graph Neural Networks (GNNs) for molecular property prediction tasks by jointly training them with multiple auxiliary tasks, which can improve generalization and suggest future research directions.


Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples

Adrianna Janik, Luca Costabello

https://openreview.net/forum?id=eOwYHXDaHn

Keywords: explainable ML, link prediction, knowledge graph embeddings

Compressor summary: The article proposes an example-based method to generate explanations for link predictions in Knowledge Graph Embedding models using the latent space representation of nodes and edges.


Generalized Reasoning with Graph Neural Networks by Relational Bayesian Network Encodings

Raffaele Pojer, Andrea Passerini, Manfred Jaeger

https://openreview.net/forum?id=dxhasYAMQ4

Keywords: Graph neural networks, statistical relational learning, relational Bayesian networks, neuro-symbolic integration, explanation

Compressor summary: The paper proposes embedding graph neural networks in a statistical relational learning framework for generating models that support various queries and provide explanations for graph data.


Non-Isotropic Persistent Homology: Leveraging the Metric Dependency of PH

Vincent Peter Grande, Michael T Schaub

https://openreview.net/forum?id=cewQK9Sjvh

Keywords: Topology, Point Clouds, Geometry Processing, Persistent Homology, Metric Spaces, Simplicial Complexes, Optimal Transport

Compressor summary: The paper proposes using different distance functions for persistent homology analysis to reveal more topological and geometrical information from point clouds.


Transferable Hypergraph Neural Networks via Spectral Similarity

Mikhail Hayhoe, Hans Matthew Riess, Michael M. Zavlanos, Victor Preciado, Alejandro Ribeiro

https://openreview.net/forum?id=cHuii4NOB9

Keywords: hypergraphs, graph neural networks, graph signal processing, spectral graph theory, hypergraph Laplacian, graph diffusion

Compressor summary: The paper introduces HENNs, neural networks that process signals on hypergraphs via graph neural networks, and demonstrates their effectiveness in transferring knowledge across multiple graph representations.


Asynchronous Algorithmic Alignment with Cocycles

Andrew Joseph Dudzik, Tamara von Glehn, Razvan Pascanu, Petar Veličković

https://openreview.net/forum?id=ba4bbZ4KoF

Keywords: algorithmic reasoning, graph neural networks, category theory, bellman-ford, commutative monoids, idempotence, cocycles, monoid homomorphisms, dynamic programming

Compressor summary: The paper proposes a way to improve neural algorithmic reasoners by separating node state update and message function invocation in graph neural networks, enabling asynchronous computation and reducing irrelevant data transmission.


Mitigating Over-smoothing and Over-squashing using Augmentations of Forman-Ricci Curvature

Lukas Fesser, Melanie Weber

https://openreview.net/forum?id=bKTkZMRtfC

Keywords: Graph Neural Networks, Discrete Curvature, Over-smoothing, Over-squashing

Compressor summary: The paper introduces a new rewiring technique for graph neural networks that uses scalable Augmented Forman-Ricci curvature to characterize and mitigate over-smoothing and over-squashing effects, achieving state-of-the-art performance with reduced computational cost.
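
The curvature in question is commonly computed, for an edge (u, v) of an unweighted graph, as AF3(u, v) = 4 − deg(u) − deg(v) + 3·(number of triangles containing the edge); here is a hedged sketch of that quantity only, leaving out the paper's rewiring procedure.

```python
# Sketch of the 3-cycle-augmented Forman-Ricci curvature of an edge.
import networkx as nx

def augmented_forman_curvature(G, u, v):
    # Triangles through (u, v) = common neighbors of u and v.
    triangles = len(set(G.neighbors(u)) & set(G.neighbors(v)))
    return 4 - G.degree(u) - G.degree(v) + 3 * triangles

G = nx.karate_club_graph()
# Very negative edges act as bottlenecks (over-squashing candidates);
# very positive ones sit in dense clusters (over-smoothing candidates).
curv = {e: augmented_forman_curvature(G, *e) for e in G.edges()}
print("most negative:", min(curv, key=curv.get), "most positive:", max(curv, key=curv.get))
```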


Extending Graph Neural Networks with Global Features

Andrei Dragos Brasoveanu, Fabian Jogl, Pascal Welke, Maximilian Thiessen

https://openreview.net/forum?id=aisVQy6R2k

Keywords: graph neural networks, message passing neural networks, expressivity, topological index

Compressor summary: The authors propose a method to enhance message passing graph neural networks by incorporating global graph features, which they show can improve predictive performance on molecular benchmark datasets.


Demystifying Oversmoothing in Attention-Based Graph Neural Networks

Xinyi Wu, Amir Ajorlou, Zihui Wu, Ali Jadbabaie

https://openreview.net/forum?id=aTw3Mu2VA2

Keywords: graph neural networks, oversmoothing, dynamical systems, representation power, theory

Compressor summary: This paper studies how the graph attention mechanism in Graph Neural Networks affects oversmoothing and expressive power, using tools from matrix theory and showing that it cannot prevent oversmoothing.


Interaction Models and Generalized Score Matching for Compositional Data

Shiqing Yu, Mathias Drton, Ali Shojaie

https://openreview.net/forum?id=aRUhkrf0W4

Keywords: Compositional data, Graphical model, High-dimensional statistics, Interaction, Sparsity

Compressor summary: The authors propose a class of exponential family models for compositional data with pairwise interactions and develop estimation methods using generalized score matching.


Will More Expressive Graph Neural Networks do Better on Generative Tasks?

Xiandong Zou, Xiangyu Zhao, Pietro Lio, Yiren Zhao

https://openreview.net/forum?id=aBL9SfWVJb

Keywords: Graph Neural Networks, Expressiveness, Graph Generative Models, De-novo Molecular Design

Compressor summary: The authors explore different Graph Neural Networks (GNNs) for improving molecular graph generation tasks and compare their performance with various generative models and metrics.


How Faithful are Self-Explainable GNNs?

Marc Christiansen, Lea Villadsen, Zhiqiang Zhong, Stefano Teso, Davide Mottin

https://openreview.net/forum?id=ZS2t7ZSh8E

Keywords: deep learning, explainability, graph neural networks, self-explainable models, concepts

Compressor summary: Self-explainable graph neural networks aim to provide faithful explanations of their reasoning on graph data; the paper examines how well they fulfill this goal and how the evaluation of these models can be improved.


Inferring dynamic regulatory interaction graphs from time series data with perturbations

Dhananjay Bhaskar, Daniel Sumner Magruder, Edward De Brouwer, Aarthi Venkat, Frederik Wenkel, Matheo Morales, Guy Wolf, Smita Krishnaswamy

https://openreview.net/forum?id=ZObhwMbBA9

Keywords: regulatory network inference, graph ODE, attention, dynamics

Compressor summary: RiTINI is a novel method for inferring dynamic interaction graphs in complex systems using space-and-time graph attentions and graph neural ODEs, outperforming previous methods on various simulations.


Explaining Unfairness in GNN-based Recommendation

Ludovico Boratto, Francesco Fabbri, Gianni Fenu, Mirko Marras, Giacomo Medda

https://openreview.net/forum?id=YuOwqCnPIc

Keywords: Recommender Systems, User Fairness, Explanation, Graph Neural Networks, Counterfactual Reasoning

Compressor summary: The paper proposes a new algorithm that uses counterfactual methods to explain user unfairness in graph neural network-based recommendation systems by perturbing the graph structure.


Intrinsically Motivated Graph Exploration Using Network Theories of Human Curiosity

Shubhankar Prashant Patankar, Mathieu Ouellet, Juan Cervino, Alejandro Ribeiro, Kieran A. Murphy, Danielle Bassett

https://openreview.net/forum?id=XJpQnN4JNE

Keywords: intrinsic motivations, human curiosity, reinforcement learning, graph neural networks

Compressor summary: The paper proposes a new method for exploring graph-structured data based on human curiosity theories, which improves reinforcement learning and recommender system performance.


RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks

Jiaxing Zhang, Zhuomin Chen, Hao Mei, Dongsheng Luo, Hua Wei

https://openreview.net/forum?id=WZUH0fMbzb

Keywords: Graph Neural Networks, Graph Explanation, Graph Regression

Compressor summary: The paper proposes XAIG-R, an explanation method for graph regression models that addresses distribution shift and continuous decision boundaries, combining an information-bottleneck objective, a mix-up framework, and contrastive learning to support various GNNs.


WL meet VC

Christopher Morris, Floris Geerts, Jan Tönshoff, Martin Grohe

https://openreview.net/forum?id=WYWU9aZmkX

Keywords: GNNs, generalization, expressivity, Weisfeiler-Leman, VC dimension

Compressor summary: The paper explores how graph neural networks' generalization performance can be analyzed using Vapnik-Chervonenkis dimension theory, linking it to the Weisfeiler-Leman algorithm and deriving upper bounds based on the number of colors or distinguishable graphs.
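
Since the paper's bounds are phrased in terms of Weisfeiler-Leman colorings, a minimal 1-WL color refinement is sketched below as a reference point; the uniform initial coloring and stability check are standard choices made here for brevity.

```python
# Minimal 1-WL color refinement; the number of stable colors is the kind
# of quantity the paper's VC-dimension bounds depend on.
import networkx as nx

def wl_colors(G):
    colors = {v: 0 for v in G}  # uniform initial coloring
    for _ in range(G.number_of_nodes()):
        # New color = own color plus the multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in G.neighbors(v))))
            for v in G
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: relabel[signatures[v]] for v in G}
        if new_colors == colors:  # refinement has stabilized
            break
        colors = new_colors
    return colors

G = nx.cycle_graph(6)
print(len(set(wl_colors(G).values())), "stable color(s)")  # 1 for a cycle
```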


EMP: Effective Multidimensional Persistence for Graph Representation Learning

Yuzhou Chen, Ignacio Segovia-Dominguez, Cuneyt Gurcan Akcora, Zhiwei Zhen, Murat Kantarcioglu, Yulia Gel, Baris Coskunuzer

https://openreview.net/forum?id=WScCJnX4ek

Keywords: multiparameter persistence, persistent homology, topological data analysis, graph classification, graph representation learning

Compressor summary: The Effective Multidimensional Persistence framework allows for the simultaneous analysis of data using multiple scale parameters, providing a more comprehensive and expressive summary that improves performance in graph classification tasks.


The Self-loop Paradox: Investigating the Impact of Self-Loops on Graph Neural Networks

Moritz Lampert, Ingo Scholtes

https://openreview.net/forum?id=Urf6G7rk8A

Keywords: GNNs, Message Passing, Self-loops, Node Classification, Graph Ensembles

Compressor summary: The paper studies how the presence of self-loops in graph neural networks affects information flow, especially in odd vs even layers, and calls this effect the "self-loop paradox".
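
A toy illustration of the walk-counting intuition (not the paper's analysis): in a k-layer GNN, the information a node receives about itself corresponds to closed walks of length k, so one can compare diag(A^k) without self-loops against diag((A+I)^k) with them on a bipartite graph, where odd-length closed walks vanish.

```python
# Closed-walk counts with and without self-loops on a bipartite graph:
# diag(A^k) is zero for odd k (no odd closed walks), diag((A+I)^k) is not.
import numpy as np
import networkx as nx

G = nx.cycle_graph(6)                     # bipartite: no odd cycles
A = nx.to_numpy_array(G)
I = np.eye(len(A))
for k in (1, 2, 3):
    print(k,
          np.diag(np.linalg.matrix_power(A, k))[0],       # 0 for odd k
          np.diag(np.linalg.matrix_power(A + I, k))[0])   # > 0 for all k
```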


Over-squashing in Riemannian Graph Neural Networks

Julia Balla

https://openreview.net/forum?id=UUnYi0yLcM

Keywords: Graph Neural Network, Over-squashing, Riemannian Manifold, Graph Embedding

Compressor summary: The paper explores using Riemannian manifolds with variable curvature to reduce over-squashing in graph neural networks by preserving the geometry of the graph's topology.


Beyond Erdos-Renyi: Generalization in Algorithmic Reasoning on Graphs

Dobrik Georgiev Georgiev, Pietro Lio, Jakub Bachurski, Junhua Chen, Tunan Shi, Lorenzo Giusti

https://openreview.net/forum?id=TTxQAkg9QG

Keywords: graph neural networks, algorithmic reasoning

Compressor summary: This study explores how well neural algorithmic reasoning generalizes to different graph distributions, finding that selecting source distributions based on Tree Mover's Distance can help.


Edge Directionality Improves Learning on Heterophilic Graphs

Emanuele Rossi, Bertrand Charpentier, Francesco Di Giovanni, Fabrizio Frasca, Stephan Günnemann, Michael M. Bronstein

https://openreview.net/forum?id=T4LRbAMWFn

Keywords: graph neural networks, directed graphs, heterophily, node classification, graphs, geometric deep learning

Compressor summary: The paper introduces Dir-GNN, a framework for deep learning on directed graphs that leverages edge directionality information to improve performance on heterophilic datasets.


Dynamic Hyper-graph Regularised Non-negative Matrix Factorisation

Nasr Ullah Khan, Luke Dickens

https://openreview.net/forum?id=SFFs9AtGSi

Keywords: Dynamic link prediction, dynamic graphs, hyper-graphs, graph regularization, non-negative matrix factorization, graph machine learning, time series analysis

Compressor summary: The paper shows that dynamic hypergraph methods leveraging recent observations predict relationships between entities more accurately than traditional dynamic uni-graph approaches.


A Simple Latent Variable Model for Graph Learning and Inference

Manfred Jaeger, Antonio Longa, Steve Azzolin, Oliver Schulte, Andrea Passerini

https://openreview.net/forum?id=S9jem2KZVr

Keywords: Stochastic block model, graphon, latent variable model, generative models

Compressor summary: The histogram AHK model is a simple and versatile probabilistic latent variable model for graphs that can handle complex predictive inference and graph generation, and it generalizes both graphons and stochastic block models.


Sampling Networks from Modular Compression of Network Flows

Christopher Blöcker, Jelena Smiljanić, Martin Rosvall, Ingo Scholtes

https://openreview.net/forum?id=Pz5UCXAoV6

Keywords: flow community, network model, benchmark

Compressor summary: The authors propose a new method to generate networks based on dynamic processes that captures structural characteristics like degree distribution and community structure.


SALSA-CLRS: A Sparse and Scalable Benchmark for Algorithmic Reasoning

Julian Minder, Florian Grötschla, Joël Mathys, Roger Wattenhofer

https://openreview.net/forum?id=PRapGjDGFQ

Keywords: algorithmic reasoning, benchmark, generalisation, scalability

Compressor summary: The paper describes SALSA-CLRS, an extension of the CLRS algorithmic learning benchmark focusing on scalability and sparseness, with adapted and new problems drawn from distributed and randomized algorithms.


Geodesic Distributions Reveal How Heterophily and Bottlenecks Limit the Expressive Power of Message Passing Neural Networks

Jonathan Rubin, Sahil Loomba, Nick S. Jones

https://openreview.net/forum?id=PEVln6psEH

Keywords: message passing neural networks, expressive power, statistical graph ensembles, graph geodesic length distribution, graph bottlenecks, heterophily

Compressor summary: The paper proposes a statistical approach to analyse how heterophily and bottlenecking limit the expressive power of MPNNs in node classification, introduces the concept of "homophilic bottlenecking", and derives bounds on it using random graphs.


KGEx: Explaining Knowledge Graph Embeddings Via Subgraph Sampling and Knowledge Distillation

Vasileios Baltatzis, Luca Costabello

https://openreview.net/forum?id=NSXXSyc2DF

Keywords: knowledge graph embeddings, explainable AI

Compressor summary: KGEx is a method for explaining link predictions in knowledge graph embeddings by training surrogate models on different subsets of the target triple's neighborhood and selecting important triples based on their impact on the prediction accuracy.


Neural Algorithmic Reasoning for Combinatorial Optimisation

Dobrik Georgiev Georgiev, Danilo Numeroso, Davide Bacciu, Pietro Lio

https://openreview.net/forum?id=N8awTT5ep7

Keywords: Neural Algorithmic Reasoning, Neural Combinatorial Optimisation, Graph Neural Networks

Compressor summary: The paper proposes a new approach to solve NP-hard problems using neural networks pre-trained on relevant algorithms, which outperforms traditional methods and non-algorithmically informed deep learning models.


A Latent Diffusion Model for Protein Structure Generation

Cong Fu, Keqiang Yan, Limei Wang, Wing Yee Au, Michael Curtis McThrow, Tao Komikado, Koji Maruhashi, Kanji Uchino, Xiaoning Qian, Shuiwang Ji

https://openreview.net/forum?id=MBZVrtbi06

Keywords: protein generation, equivariant, 3D protein autoencoder

Compressor summary: The proposed latent diffusion model simplifies protein modeling by capturing natural protein structure distributions in a condensed latent space, enabling efficient generation of new protein backbone structures for synthetic biology applications.


Accelerating Molecular Graph Neural Networks via Knowledge Distillation

Filip Ekström Kelvinius, Dimitar Georgiev, Artur Toshev, Johannes Gasteiger

https://openreview.net/forum?id=KWkzecJ4or

Keywords: GNN, graph neural networks, knowledge distillation, molecules, molecular simulations

Compressor summary: The paper explores using knowledge distillation to accelerate molecular graph neural networks without sacrificing predictive accuracy or inference speed.


United We Stand, Divided We Fall: Networks to Graph (N2G) Abstraction for Robust Graph Classification under Graph Label Corruption

Zhiwei Zhen, Yuzhou Chen, Murat Kantarcioglu, Yulia Gel, Kangkook Jee

https://openreview.net/forum?id=K5g021Ex14

Keywords: Representation Learning, Classification

Compressor summary: The paper proposes a new representation method called N2G that improves the robustness and performance of graph neural networks for graph classification tasks with noisy labels.


Parallel Algorithms Align with Neural Execution

Valerie Engelmayer, Dobrik Georgiev Georgiev, Petar Veličković

https://openreview.net/forum?id=IC6kpv87LB

Keywords: Parallel Algorithms, Neural Algorithmic Reasoning, Graph Neural Networks

Compressor summary: Neural algorithmic reasoners trained on parallel algorithms need fewer layers and train faster, with better results than their sequential counterparts.


Generative modeling of labeled graphs under data scarcity

Sahil Manchanda, Shubham Gupta, Sayan Ranu, Srikanta J. Bedathur

https://openreview.net/forum?id=Hy9K2WiVwW

Keywords: Labeled Graph Generative modeling, Data scarcity, Meta-Learning

Compressor summary: The paper proposes a meta-learning based framework for generating graphs under data scarcity conditions, which transfers knowledge from similar auxiliary datasets and adapts to unseen graphs through self-paced fine-tuning.


Kùzu: Graph Learning Applications Need a Modern Graph DBMS

Ziyi Chen, Xiyang Feng, Guodong Jin, Chang Liu, Semih Salihoglu

https://openreview.net/forum?id=Eg3MthXzeT

Keywords: graph database, graph database management system, systems for graph learning

Compressor summary: Graph learning applications involve a pipeline of data processing steps, extracting a graph from tabular sources, cleaning it, computing node/edge features, and moving the data into a graph learning library to generate embeddings, many of which database management systems (DBMSs) can perform efficiently, yet no current DBMS is tailored to graph learning pipelines. The paper presents Kùzu, an open-source, embeddable graph DBMS that fills this gap: it implements the property graph data model and the openCypher query language with graph-optimized storage structures and join algorithms, ingests several tabular raw-file formats, and exports to popular graph learning libraries. The authors describe Kùzu's design goals, architecture, and ongoing work, and demonstrate training large GNN models that do not fit into main memory; Kùzu is available under a permissive license.


Characterizing Graph Datasets for Node Classification: Homophily-Heterophily Dichotomy and Beyond

Oleg Platonov, Denis Kuznedelev, Artem Babenko, Liudmila Prokhorenkova

https://openreview.net/forum?id=D4GLZkTphJ

Keywords: homophily, heterophily, adjusted homophily, label informativeness, constant baseline, GNN

Compressor summary: The authors propose adjusted homophily as a superior measure of node similarity in graphs, and introduce label informativeness as a new characteristic to distinguish different types of heterophily.
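
Adjusted homophily corrects plain edge homophily for class-size and degree imbalance so that an uncorrelated graph scores near zero; below is a sketch in its commonly cited form (the karate-club labels serve purely as a toy example).

```python
# Adjusted homophily: h_adj = (h_edge - sum_k p_k^2) / (1 - sum_k p_k^2),
# where h_edge is the fraction of same-class edges and p_k is the
# degree-weighted share of class k.
import numpy as np
import networkx as nx

def adjusted_homophily(G, labels):
    edges = list(G.edges())
    h_edge = np.mean([labels[u] == labels[v] for u, v in edges])
    two_m = 2 * len(edges)
    classes = set(labels.values())
    p = {k: sum(d for v, d in G.degree() if labels[v] == k) / two_m
         for k in classes}
    baseline = sum(pk ** 2 for pk in p.values())
    return (h_edge - baseline) / (1 - baseline)

G = nx.karate_club_graph()
labels = {v: G.nodes[v]["club"] for v in G}
print(round(adjusted_homophily(G, labels), 3))
```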


MUDiff: Unified Diffusion for Complete Molecule Generation

Chenqing Hua, Sitao Luan, Minkai Xu, Zhitao Ying, Jie Fu, Stefano Ermon, Doina Precup

https://openreview.net/forum?id=C7Z3yhWUAU

Keywords: Molecule Generation, Graph Neural Network

Compressor summary: The paper presents a new model that combines discrete and continuous diffusion processes to generate a comprehensive representation of molecules, including atom features, 2D structures, and 3D coordinates, and uses a novel graph transformer to denoise the process and learn invariant representations.


On the Robustness of Post-hoc GNN Explainers to Label Noise

Zhiqiang Zhong, Yangqianzi Jiang, Davide Mottin

https://openreview.net/forum?id=BWZnVy021e

Keywords: Post-hoc Graph Neural Network Explainers, Robustness, Label Noise

Compressor summary: The paper studies the robustness of post-hoc graph neural network explainers, showing their limitations and susceptibility to label noise.


HEAL: Unlocking the Potential of Learning on Hypergraphs Enriched with Attributes and Layers

Naganand Yadati, Tarun Kumar, Deepak Maurya, Balaraman Ravindran, Partha Talukdar

https://openreview.net/forum?id=BUj4BqjGC3

Keywords: Hypergraphs, Multi-layer Graphs, Feature Smoothing

Compressor summary: The paper introduces HEAL, a novel hypergraph learning framework that leverages attribute-rich and multi-layered structures to effectively model complex relationships in real-world systems.


Rank Collapse Causes Over-Smoothing and Over-Correlation in Graph Neural Networks

Andreas Roth, Thomas Liebig

https://openreview.net/forum?id=9aIDdGm7a6

Keywords: Graph-based Learning, graph neural networks, over-smoothing, over-correlation, expressivity, rank collapse

Compressor summary: The study traces both over-smoothing and feature over-correlation in deep graph neural networks to rank collapse, and suggests using a sum of Kronecker products to avoid these issues.


Advection Diffusion Reaction Graph Neural Networks for Spatio-Temporal Data

Moshe Eliasof, Eldad Haber, Eran Treister

https://openreview.net/forum?id=8jCpJE3ugQ

Keywords: Advection, Diffusion, Reaction, Temporal, PDE, ODE

Compressor summary: The paper introduces a new GNN model for learning from graph-structured data with advection, diffusion, and reaction processes.


Maximally Expressive GNNs for Outerplanar Graphs

Franka Bause, Fabian Jogl, Pascal Welke, Maximilian Thiessen

https://openreview.net/forum?id=7vyGCFTajk

Keywords: outerplanar graphs, message passing neural networks, expressivity, Weisfeiler-Leman, graph transformation

Compressor summary: The authors propose a linear-time graph transformation that enhances the expressivity of graph neural networks and distinguishes pharmaceutical graphs based on their outerplanar structure.


Cycle Invariant Positional Encoding for Graph Representation Learning

Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, Chao Chen, Yusu Wang

https://openreview.net/forum?id=7BQZyQERuP

Keywords: permutation invariance, algebraic topology, hodge laplacian, computational topology

Compressor summary: The paper proposes CycleNet, a structure encoding module for graph neural networks that encodes cycle information using edge structure encoding in a permutation invariant manner and shows its effectiveness in various benchmarks.


Triplet Edge Attention for Algorithmic Reasoning

Yeonjoon Jung, Sungsoo Ahn

https://openreview.net/forum?id=6CCR9gCKGd

Keywords: graph neural network, algorithmic reasoning

Compressor summary: The paper proposes Triplet Edge Attention (TEA), a new graph neural network layer that improves learning from classical algorithms by paying attention to edges and achieving better results on CLRS benchmarks.


Semi-Supervised Learning for High-Fidelity Fluid Flow Reconstruction

Cong Fu, Jacob Helwig, Shuiwang Ji

https://openreview.net/forum?id=695IYJh1Ba

Keywords: physical simulation, fluid flow reconstruction

Compressor summary: The proposed cascaded fluid reconstruction framework combines low-resolution and high-resolution simulations to improve accuracy and efficiency in fluid dynamics analysis, using a proposal network and a ModeFormer transformer for refinement.


BeMap: Balanced Message Passing for Fair Graph Neural Network

Xiao Lin, Jian Kang, Weilin Cong, Hanghang Tong

https://openreview.net/forum?id=4RiLDrCbzW

Keywords: group fairness, graph neural network, message passing

Compressor summary: The paper studies how message passing can amplify bias in graph neural networks and proposes a method called BeMap that balances the number of 1-hop neighbors for fairness.
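
A toy rendition of the balancing idea (the exact BeMap sampling policy is not reproduced here): draw at most a fixed number of neighbors from each demographic group before aggregating.

```python
# Group-balanced 1-hop neighbor sampling, illustrative only.
import random

def balanced_neighbors(neighbors, group, per_group):
    by_group = {}
    for u in neighbors:
        by_group.setdefault(group[u], []).append(u)
    sampled = []
    for members in by_group.values():
        k = min(per_group, len(members))
        sampled += random.sample(members, k)  # equal budget per group
    return sampled

group = {1: "a", 2: "a", 3: "a", 4: "b"}
print(balanced_neighbors([1, 2, 3, 4], group, per_group=1))  # one per group
```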


Recursive Algorithmic Reasoning

Jonas Jürß, Dulhan Hansaja Jayalath, Petar Veličković

https://openreview.net/forum?id=43M1bPorxU

Keywords: graph neural networks, algorithmic reasoning

Compressor summary: The paper presents methods to enhance graph neural networks with a stack and sampling techniques, enabling them to execute recursive algorithms and better generalize to out-of-distribution data.


On Performance Discrepancies Across Local Homophily Levels in Graph Neural Networks

Donald Loveland, Jiong Zhu, Mark Heimann, Benjamin Fish, Michael T Schaub, Danai Koutra

https://openreview.net/forum?id=3sPJt65hzO

Keywords: graph neural network, heterophily, discrepancy

Compressor summary: The authors study how local homophily levels affect the performance of graph neural networks (GNNs) in node classification and show that GNNs designed for heterophilous graphs can improve performance across different homophily settings.


Rethinking Higher-order Representation Learning with Graph Neural Networks

Tuo Xu, Lei Zou

https://openreview.net/forum?id=2OyoYw4InI

Keywords: graph neural networks, higher-order representation, expressiveness

Compressor summary: The paper analyzes the expressive power of higher-order representation learning methods for graph machine learning and proposes a simple labeling trick method for link prediction tasks.


Propagate & Distill: Towards Effective Graph Learners Using Propagation-Embracing MLPs

Yong-Min Shin, Won-Yong Shin

https://openreview.net/forum?id=2A14hhZsnA

Keywords: Graph neural network, Knowledge distillation, Propagation, Multilayer perceptron

Compressor summary: The authors propose Propagate & Distill (P&D), a method that improves semi-supervised node classification on graphs by training a student MLP via knowledge distillation from a teacher GNN, injecting structural information in an explicit and interpretable manner.
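
A minimal sketch of a GNN-to-MLP distillation loss in the style the summary describes; P&D's feature-propagation details are omitted, and all names and the loss weighting are illustrative assumptions.

```python
# Student MLP logits are supervised by ground-truth labels on labeled
# nodes plus the frozen teacher GNN's soft labels on all nodes.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, mask, alpha=0.5, T=2.0):
    ce = F.cross_entropy(student_logits[mask], labels[mask])
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1 - alpha) * kl

n, c = 100, 7
student = torch.randn(n, c, requires_grad=True)  # stand-in MLP outputs
teacher = torch.randn(n, c)                      # frozen teacher GNN outputs
labels = torch.randint(0, c, (n,))
mask = torch.zeros(n, dtype=torch.bool); mask[:20] = True  # labeled nodes
loss = distill_loss(student, teacher, labels, mask)
loss.backward()
```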


Geometric Epitope and Paratope Prediction

Marco Pegoraro, Clémentine Carla Juliette Dominé, Emanuele Rodolà, Petar Veličković, Andreea Deac

https://openreview.net/forum?id=22NrcBctdI

Keywords: graph learning, learning on surface, drug discovery, paratope-epitope prediction

Compressor summary: The paper explores how using geometric information from proteins' surfaces can improve predictions of antibody-antigen binding sites.


Inferring Networks from Marginals Using Iterative Proportional Fitting

Serina Chang, Zhaonan Qu, Jure Leskovec, Johan Ugander

https://openreview.net/forum?id=1HSlaSnKhI

Keywords: network inference, dynamic graphs, iterative proportional fitting, Sinkhorn's algorithm

Compressor summary: The authors explain how they use Sinkhorn's algorithm to infer dynamic networks from 3-dimensional marginals and provide a statistical justification for its minimization principle, as well as demonstrate its effectiveness with real-world mobility data.
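
For reference, classic two-dimensional IPF (Sinkhorn scaling) alternately rescales rows and columns of a positive seed matrix to match target marginals; the paper's time-varying, 3-dimensional setting is not covered by this toy sketch.

```python
# Classic 2D iterative proportional fitting (Sinkhorn scaling).
import numpy as np

def ipf(seed, row_sums, col_sums, iters=200):
    X = seed.astype(float).copy()
    for _ in range(iters):
        X *= (row_sums / X.sum(axis=1))[:, None]   # match row marginals
        X *= (col_sums / X.sum(axis=0))[None, :]   # match column marginals
    return X

seed = np.ones((3, 3))
X = ipf(seed, row_sums=np.array([1., 2., 3.]), col_sums=np.array([2., 2., 2.]))
print(X.sum(axis=1), X.sum(axis=0))  # ~[1 2 3], ~[2 2 2]
```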


Interpretable Graph Networks Formulate Universal Algebra Conjectures

Francesco Giannini, Stefano Fioravanti, Oguzhan Keskin, Alisia Maria Lupidi, Lucie Charlotte Magister, Pietro Lio, Pietro Barbiero

https://openreview.net/forum?id=KhOkUnO04d

Keywords: explainable AI, universal algebra, concept-based models, graph neural networks, interpretablity

Compressor summary: The authors propose using AI and interpretable graph networks to analyze and test conjectures in Universal Algebra, a foundational field of mathematics.