Encoders

Encoders are neural networks that compute node and edge embeddings. They can be GNNs (tgn, graph_attention, etc.) when the graph structure is leveraged, but they can also be a simple linear layer like the one used in Velox (none) or a more complex custom MLP (custom_mlp). The embeddings produced by the encoder are passed to the decoder in the next step, and ultimately to the objective that computes the loss.
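
As an illustration, selecting and parameterizing an encoder could look like the following sketch. The nesting and the selector key (`used_method`) are assumptions about the config schema; only the per-encoder parameter names are taken from the list below.

```python
# A minimal, illustrative encoder config sketch. The top-level key names
# ("encoder", "used_method") are assumptions, not the framework's actual schema.
config = {
    "encoder": {
        "used_method": "gat",  # hypothetical selector key
        "gat": {
            "activation": "relu",
            "num_heads": 4,
            "concat": True,
            "flow": "source_to_target",
            "num_layers": 2,
        },
    },
}
```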

  • tgn
    • tgn_memory_dim: int
    • tgn_time_dim: int
    • use_node_feats_in_gnn: bool
    • use_memory: bool
    • use_time_order_encoding: bool
    • project_src_dst: bool
  • graph_attention
    • activation: str
    • num_heads: int
    • concat: bool
    • flow: str
    • num_layers: int
  • sage
    • activation: str
    • num_layers: int
  • gat (see the encoder sketch after this list)
    • activation: str
    • num_heads: int
    • concat: bool
    • flow: str
    • num_layers: int
  • gin
    • activation: str
    • num_layers: int
  • sum_aggregation
  • rcaid_gat
  • magic_gat
    • num_layers: int
    • num_heads: int
    • negative_slope: float
    • alpha_l: float
    • activation: str
  • glstm
  • custom_mlp (see the architecture_str parsing sketch after this list)
    • architecture_str: str
  • none
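
As a concrete illustration of how the gat parameters above could map onto a model, here is a minimal sketch built on PyTorch Geometric's GATConv. It is not the framework's actual implementation; the constructor arguments `in_dim` and `hid_dim`, and the set of supported activation names, are assumptions.

```python
import torch.nn as nn
from torch_geometric.nn import GATConv


class GATEncoder(nn.Module):
    """Minimal sketch of a GAT-style encoder; not the framework's actual code."""

    def __init__(self, in_dim, hid_dim, num_layers, num_heads, concat, flow, activation):
        super().__init__()
        # Map the `activation` string to a module (supported names assumed).
        self.act = {"relu": nn.ReLU(), "tanh": nn.Tanh()}[activation]
        self.convs = nn.ModuleList()
        dim = in_dim
        for _ in range(num_layers):
            self.convs.append(
                GATConv(dim, hid_dim, heads=num_heads, concat=concat, flow=flow)
            )
            # With concat=True, a layer outputs num_heads * hid_dim features.
            dim = hid_dim * num_heads if concat else hid_dim

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim] node features; edge_index: [2, num_edges].
        for conv in self.convs:
            x = self.act(conv(x, edge_index))
        return x  # node embeddings handed to the decoder
```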
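
The format of architecture_str for custom_mlp is not specified here. The sketch below assumes a hypothetical dash-separated list of hidden-layer sizes (e.g. "128-64") purely for illustration; the actual format may differ.

```python
import torch.nn as nn


def build_mlp(in_dim: int, out_dim: int, architecture_str: str) -> nn.Sequential:
    # Hypothetical format: dash-separated hidden sizes, e.g. "128-64".
    hidden = [int(d) for d in architecture_str.split("-")] if architecture_str else []
    dims = [in_dim] + hidden + [out_dim]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:  # no activation after the final layer
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)


# Example: a 2-hidden-layer MLP, 32 -> 128 -> 64 -> 16.
mlp = build_mlp(32, 16, "128-64")
```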