Source: https://blog.csdn.net/sinat_41942180/article/details/143131375

The CoGNN class implements a graph neural network (GNN) that uses the Gumbel-Softmax technique to dynamically adjust edge weights (edge selection) and builds graph embeddings from the input node features and edge information. Its main components are environment encoding, node and edge encoders, the creation and application of edge weights, skip connections, and a pooling step that produces an embedding for the whole graph. The Gumbel-Softmax temperature can either be learned or kept fixed.
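To see concretely what the Gumbel-Softmax step does before reading the class, here is a tiny standalone sketch (it is not part of the CoGNN code): torch.nn.functional.gumbel_softmax turns per-node action logits into relaxed or hard binary decisions, and the temperature tau controls how close the relaxed samples are to one-hot.

import torch
import torch.nn.functional as F

logits = torch.randn(5, 2)                             # one binary "action" decision per node
soft = F.gumbel_softmax(logits, tau=1.0, hard=False)   # relaxed samples, fully differentiable
hard = F.gumbel_softmax(logits, tau=0.5, hard=True)    # one-hot samples with a straight-through gradient
print(soft)  # each row sums to 1
print(hard)  # each row is exactly one-hot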

import torch
from torch import Tensor
from torch_geometric.typing import Adj, OptTensor
from torch.nn import Module, Dropout, LayerNorm, Identity
import torch.nn.functional as F
from typing import Tuple
import numpy as np
from helpers.classes import GumbelArgs, EnvArgs, ActionNetArgs, Pool, DataSetEncoders
from models.temp import TempSoftPlus
from models.action import ActionNet


class CoGNN(Module):
    def __init__(self, gumbel_args: GumbelArgs, env_args: EnvArgs, action_args: ActionNetArgs, pool: Pool):
        super(CoGNN, self).__init__()
        self.env_args = env_args
        self.learn_temp = gumbel_args.learn_temp
        if gumbel_args.learn_temp:
            self.temp_model = TempSoftPlus(gumbel_args=gumbel_args, env_dim=env_args.env_dim)
        self.temp = gumbel_args.temp

        self.num_layers = env_args.num_layers
        self.env_net = env_args.load_net()
        self.use_encoders = env_args.dataset_encoders.use_encoders()

        layer_norm_cls = LayerNorm if env_args.layer_norm else Identity
        self.hidden_layer_norm = layer_norm_cls(env_args.env_dim)
        self.skip = env_args.skip
        self.dropout = Dropout(p=env_args.dropout)
        self.drop_ratio = env_args.dropout
        self.act = env_args.act_type.get()

        self.in_act_net = ActionNet(action_args=action_args)
        self.out_act_net = ActionNet(action_args=action_args)

        # Encoder types
        self.dataset_encoder = env_args.dataset_encoders
        self.env_bond_encoder = self.dataset_encoder.edge_encoder(emb_dim=env_args.env_dim,
                                                                  model_type=env_args.model_type)
        self.act_bond_encoder = self.dataset_encoder.edge_encoder(emb_dim=action_args.hidden_dim,
                                                                  model_type=action_args.model_type)

        # Pooling function to generate whole-graph embeddings
        self.pooling = pool.get()

    def forward(self, x: Tensor, edge_index: Adj, pestat, edge_attr: OptTensor = None, batch: OptTensor = None,
                edge_ratio_node_mask: OptTensor = None) -> Tuple[Tensor, Tensor]:
        result = 0

        calc_stats = edge_ratio_node_mask is not None
        if calc_stats:
            edge_ratio_edge_mask = edge_ratio_node_mask[edge_index[0]] & edge_ratio_node_mask[edge_index[1]]
            edge_ratio_list = []

        # bond encode
        if edge_attr is None or self.env_bond_encoder is None:
            env_edge_embedding = None
        else:
            env_edge_embedding = self.env_bond_encoder(edge_attr)
        if edge_attr is None or self.act_bond_encoder is None:
            act_edge_embedding = None
        else:
            act_edge_embedding = self.act_bond_encoder(edge_attr)

        # node encode
        x = self.env_net[0](x, pestat)  # (N, F) encoder
        if not self.use_encoders:
            x = self.dropout(x)
            x = self.act(x)

        for gnn_idx in range(self.num_layers):
            x = self.hidden_layer_norm(x)

            # action
            in_logits = self.in_act_net(x=x, edge_index=edge_index, env_edge_attr=env_edge_embedding,
                                        act_edge_attr=act_edge_embedding)  # (N, 2)
            out_logits = self.out_act_net(x=x, edge_index=edge_index, env_edge_attr=env_edge_embedding,
                                          act_edge_attr=act_edge_embedding)  # (N, 2)
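The listing above breaks off inside the layer loop, right after the two action networks produce their logits. As a rough, self-contained sketch of the idea described at the top, not the original implementation, the in/out logits can be sampled with F.gumbel_softmax (using the learned or fixed temperature) and combined into a per-edge weight that decides which edges the environment network actually uses. The choice of column 0 as the "keep" action below is an assumption made only for illustration.

import torch
import torch.nn.functional as F
from torch import Tensor

def edge_weights_from_logits(in_logits: Tensor, out_logits: Tensor, edge_index: Tensor, temp: float = 1.0) -> Tensor:
    # Sample hard binary "listen" (incoming) and "broadcast" (outgoing) actions per node,
    # with straight-through gradients so the action networks remain trainable.
    in_probs = F.gumbel_softmax(in_logits, tau=temp, hard=True)    # (N, 2)
    out_probs = F.gumbel_softmax(out_logits, tau=temp, hard=True)  # (N, 2)
    keep_in = in_probs[:, 0]    # assumption: column 0 is the "keep" action
    keep_out = out_probs[:, 0]
    src, dst = edge_index
    # An edge src -> dst stays active only if src broadcasts and dst listens.
    return keep_out[src] * keep_in[dst]                            # (E,)

# Toy usage: 4 nodes, 3 directed edges.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_weight = edge_weights_from_logits(torch.randn(4, 2), torch.randn(4, 2), edge_index)
print(edge_weight)  # one entry per edge, each 0.0 or 1.0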
