SetTransformerEncoder

class dgl.nn.pytorch.glob.SetTransformerEncoder(d_model, n_heads, d_head, d_ff, n_layers=1, block_type='sab', m=None, dropouth=0.0, dropouta=0.0)[source]

Bases: Module

The encoder module from Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks.

Parameters:
  • d_model (int) – The hidden size of the model.

  • n_heads (int) – The number of attention heads.

  • d_head (int) – The hidden size of each attention head.

  • d_ff (int) – The kernel size in the FFN (position-wise feed-forward network) layer.

  • n_layers (int) – The number of layers.

  • block_type (str) – The building block type: ‘sab’ (Set Attention Block) or ‘isab’ (Induced Set Attention Block).

  • m (int or None) – The number of induced vectors in the ISAB block. Set to None if the block type is ‘sab’ (see the construction sketch after this list).

  • dropouth (float) – The dropout rate of each sublayer.

  • dropouta (float) – The dropout rate of attention heads.
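
When block_type is ‘isab’, m must be provided. Below is a minimal construction sketch based on the signature above; the layer sizes are illustrative only:

>>> from dgl.nn import SetTransformerEncoder
>>>
>>> # ISAB blocks attend through m learned inducing points instead of
>>> # computing full pairwise attention over the node set.
>>> isab_enc = SetTransformerEncoder(5, 4, 4, 20, n_layers=2, block_type='isab', m=3)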

Examples

>>> import dgl
>>> import torch as th
>>> from dgl.nn import SetTransformerEncoder
>>>
>>> g1 = dgl.rand_graph(3, 4)  # g1 is a random graph with 3 nodes and 4 edges
>>> g1_node_feats = th.rand(3, 5)  # feature size is 5
>>> g1_node_feats
tensor([[0.8948, 0.0699, 0.9137, 0.7567, 0.3637],
        [0.8137, 0.8938, 0.8377, 0.4249, 0.6118],
        [0.5197, 0.9030, 0.6825, 0.5725, 0.4755]])
>>>
>>> g2 = dgl.rand_graph(4, 6)  # g2 is a random graph with 4 nodes and 6 edges
>>> g2_node_feats = th.rand(4, 5)  # feature size is 5
>>> g2_node_feats
tensor([[0.2053, 0.2426, 0.4111, 0.9028, 0.5658],
        [0.5278, 0.6365, 0.9990, 0.2351, 0.8945],
        [0.3134, 0.0580, 0.4349, 0.7949, 0.3891],
        [0.0142, 0.2709, 0.3330, 0.8521, 0.6925]])
>>>
>>> set_trans_enc = SetTransformerEncoder(5, 4, 4, 20)  # create a SetTransformer encoder.

Case 1: Input a single graph

>>> set_trans_enc(g1, g1_node_feats)
tensor([[ 0.1262, -1.9081,  0.7287,  0.1678,  0.8854],
        [-0.0634, -1.1996,  0.6955, -0.9230,  1.4904],
        [-0.9972, -0.7924,  0.6907, -0.5221,  1.6211]],
       grad_fn=<NativeLayerNormBackward>)

Case 2: Input a batch of graphs

Build a batch of DGL graphs and concatenate the node features of all graphs into one tensor.

>>> batch_g = dgl.batch([g1, g2])
>>> batch_f = th.cat([g1_node_feats, g2_node_feats])
>>>
>>> set_trans_enc(batch_g, batch_f)
tensor([[ 0.1262, -1.9081,  0.7287,  0.1678,  0.8854],
        [-0.0634, -1.1996,  0.6955, -0.9230,  1.4904],
        [-0.9972, -0.7924,  0.6907, -0.5221,  1.6211],
        [-0.7973, -1.3203,  0.0634,  0.5237,  1.5306],
        [-0.4497, -1.0920,  0.8470, -0.8030,  1.4977],
        [-0.4940, -1.6045,  0.2363,  0.4885,  1.3737],
        [-0.9840, -1.0913, -0.0099,  0.4653,  1.6199]],
       grad_fn=<NativeLayerNormBackward>)

Notes

SetTransformerEncoder is not a readout layer: the tensor it returns is a node-wise representation rather than a graph-wise representation; SetTransformerDecoder returns a graph readout tensor.
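
To obtain a graph-level representation, the encoder output can be fed to a SetTransformerDecoder. A minimal sketch reusing batch_g and batch_f from the examples above, assuming the SetTransformerDecoder(d_model, num_heads, d_head, d_ff, n_layers, k) constructor; the hyperparameters here are illustrative:

>>> from dgl.nn import SetTransformerDecoder
>>>
>>> set_trans_dec = SetTransformerDecoder(5, 4, 4, 20, 1, 3)  # k=3 seed vectors
>>> node_repr = set_trans_enc(batch_g, batch_f)     # one row per node
>>> graph_repr = set_trans_dec(batch_g, node_repr)  # one row per graph
>>> graph_repr.shape[0] == batch_g.batch_size       # 2 graphs in the batch
True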

forward(graph, feat)[source]

Compute the Encoder part of Set Transformer.

Parameters:
  • graph (DGLGraph) – The input graph.

  • feat (torch.Tensor) – The input feature with shape \((N, D)\), where \(N\) is the number of nodes in the graph.

Returns:

The output feature with shape \((N, D)\).

Return type:

torch.Tensor
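
Because the encoder attends over each graph's node set rather than its edges, its output is expected to be permutation-equivariant: permuting a graph's input rows permutes its output rows the same way. A minimal sketch of that check, reusing g1 and its features from the examples above (dropout defaults to 0, so repeated forward passes are deterministic):

>>> perm = th.randperm(g1.num_nodes())
>>> out = set_trans_enc(g1, g1_node_feats)
>>> out_perm = set_trans_enc(g1, g1_node_feats[perm])
>>> th.allclose(out[perm], out_perm, atol=1e-5)  # rows permute with the input
True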