From dd001a484975bd2548441733cc0ce5805c12ffe3 Mon Sep 17 00:00:00 2001 From: Oleksandr Ferludin Date: Wed, 6 Dec 2023 10:33:06 -0800 Subject: [PATCH] Proposal for updated API Docs PiperOrigin-RevId: 588473240 --- .../python/models/contrastive_losses.md | 69 + .../contrastive_losses/AllSvdMetrics.md | 202 +++ .../contrastive_losses/BarlowTwinsTask.md | 168 +++ .../contrastive_losses/ContrastiveLossTask.md | 186 +++ .../contrastive_losses/CorruptionSpec.md | 87 ++ .../models/contrastive_losses/Corruptor.md | 58 + .../DeepGraphInfomaxLogits.md | 17 + .../DeepGraphInfomaxTask.md | 165 +++ .../contrastive_losses/DropoutFeatures.md | 56 + .../ShuffleFeaturesGlobally.md | 60 + .../TripletEmbeddingSquaredDistances.md | 17 + .../contrastive_losses/TripletLossTask.md | 200 +++ .../models/contrastive_losses/VicRegTask.md | 169 +++ .../models/contrastive_losses/all_symbols.md | 24 + .../models/contrastive_losses/coherence.md | 69 + .../contrastive_losses/numerical_rank.md | 70 + .../pseudo_condition_number.md | 69 + .../models/contrastive_losses/rankme.md | 77 ++ .../contrastive_losses/self_clustering.md | 63 + .../docs/api_docs/python/models/gat_v2.md | 13 +- .../python/models/gat_v2/GATv2Conv.md | 160 +-- .../python/models/gat_v2/GATv2EdgePool.md | 34 +- .../models/gat_v2/GATv2HomGraphUpdate.md | 36 +- .../models/gat_v2/GATv2MPNNGraphUpdate.md | 65 +- .../gat_v2/graph_update_from_config_dict.md | 45 +- .../gat_v2/graph_update_get_config_dict.md | 13 +- .../docs/api_docs/python/models/gcn.md | 13 +- .../api_docs/python/models/gcn/GCNConv.md | 60 +- .../python/models/gcn/GCNHomGraphUpdate.md | 36 +- .../docs/api_docs/python/models/graph_sage.md | 13 +- .../graph_sage/GCNGraphSAGENodeSetUpdate.md | 47 +- .../graph_sage/GraphSAGEAggregatorConv.md | 52 +- .../models/graph_sage/GraphSAGEGraphUpdate.md | 54 +- .../models/graph_sage/GraphSAGENextState.md | 36 +- .../models/graph_sage/GraphSAGEPoolingConv.md | 60 +- .../docs/api_docs/python/models/mt_albis.md | 13 +- 
.../models/mt_albis/MtAlbisGraphUpdate.md | 108 +- .../mt_albis/graph_update_from_config_dict.md | 42 +- .../mt_albis/graph_update_get_config_dict.md | 13 +- .../python/models/multi_head_attention.md | 13 +- .../MultiHeadAttentionConv.md | 266 ++-- .../MultiHeadAttentionEdgePool.md | 36 +- .../MultiHeadAttentionHomGraphUpdate.md | 34 +- .../MultiHeadAttentionMPNNGraphUpdate.md | 65 +- .../graph_update_from_config_dict.md | 49 +- .../graph_update_get_config_dict.md | 13 +- .../api_docs/python/models/vanilla_mpnn.md | 13 +- .../vanilla_mpnn/VanillaMPNNGraphUpdate.md | 57 +- .../graph_update_from_config_dict.md | 47 +- .../graph_update_get_config_dict.md | 13 +- tensorflow_gnn/docs/api_docs/python/runner.md | 173 +++ .../api_docs/python/runner/ContextLabelFn.md | 17 + .../api_docs/python/runner/DatasetProvider.md | 27 + .../python/runner/DotProductLinkPrediction.md | 200 +++ .../python/runner/FitOrSkipPadding.md | 49 + .../runner/GraphBinaryClassification.md | 222 +++ .../python/runner/GraphMeanAbsoluteError.md | 195 +++ .../GraphMeanAbsolutePercentageError.md | 195 +++ .../python/runner/GraphMeanSquaredError.md | 195 +++ .../runner/GraphMeanSquaredLogScaledError.md | 155 +++ .../GraphMeanSquaredLogarithmicError.md | 195 +++ .../runner/GraphMulticlassClassification.md | 239 ++++ .../python/runner/GraphTensorPadding.md | 37 + .../python/runner/GraphTensorProcessorFn.md | 26 + .../runner/HadamardProductLinkPrediction.md | 203 +++ .../runner/IntegratedGradientsExporter.md | 127 ++ .../python/runner/KerasModelExporter.md | 121 ++ .../api_docs/python/runner/KerasTrainer.md | 263 ++++ .../runner/KerasTrainerCheckpointOptions.md | 108 ++ .../python/runner/KerasTrainerOptions.md | 90 ++ .../docs/api_docs/python/runner/Loss.md | 17 + .../docs/api_docs/python/runner/Losses.md | 16 + .../docs/api_docs/python/runner/Metrics.md | 17 + .../api_docs/python/runner/ModelExporter.md | 53 + .../python/runner/NodeBinaryClassification.md | 237 ++++ 
.../runner/NodeMulticlassClassification.md | 254 ++++ .../python/runner/ParameterServerStrategy.md | 1082 +++++++++++++++ .../python/runner/PassthruDatasetProvider.md | 40 + .../runner/PassthruSampleDatasetsProvider.md | 46 + .../api_docs/python/runner/Predictions.md | 17 + .../runner/RootNodeBinaryClassification.md | 216 +++ .../api_docs/python/runner/RootNodeLabelFn.md | 20 + .../runner/RootNodeMeanAbsoluteError.md | 194 +++ .../RootNodeMeanAbsoluteLogarithmicError.md | 152 ++ .../RootNodeMeanAbsolutePercentageError.md | 194 +++ .../python/runner/RootNodeMeanSquaredError.md | 194 +++ .../RootNodeMeanSquaredLogScaledError.md | 155 +++ .../RootNodeMeanSquaredLogarithmicError.md | 194 +++ .../RootNodeMulticlassClassification.md | 233 ++++ .../docs/api_docs/python/runner/RunResult.md | 71 + .../runner/SampleTFRecordDatasetsProvider.md | 229 +++ .../python/runner/SimpleDatasetProvider.md | 97 ++ .../runner/SimpleSampleDatasetsProvider.md | 238 ++++ .../python/runner/SubmoduleExporter.md | 125 ++ .../python/runner/TFDataServiceConfig.md | 66 + .../python/runner/TFRecordDatasetProvider.md | 93 ++ .../api_docs/python/runner/TPUStrategy.md | 1228 +++++++++++++++++ .../docs/api_docs/python/runner/Task.md | 179 +++ .../api_docs/python/runner/TightPadding.md | 47 + .../docs/api_docs/python/runner/Trainer.md | 100 ++ .../api_docs/python/runner/all_symbols.md | 62 + .../api_docs/python/runner/export_model.md | 77 ++ .../python/runner/incrementing_model_dir.md | 52 + .../python/runner/integrated_gradients.md | 95 ++ .../python/runner/one_node_per_component.md | 17 + .../docs/api_docs/python/runner/run.md | 275 ++++ tensorflow_gnn/docs/api_docs/python/tfgnn.md | 234 ++-- .../docs/api_docs/python/tfgnn/Adjacency.md | 174 +-- .../api_docs/python/tfgnn/AdjacencySpec.md | 218 +-- .../docs/api_docs/python/tfgnn/Context.md | 192 +-- .../docs/api_docs/python/tfgnn/ContextSpec.md | 189 +-- .../docs/api_docs/python/tfgnn/EdgeSet.md | 175 ++- .../docs/api_docs/python/tfgnn/EdgeSetSpec.md 
| 197 +-- .../docs/api_docs/python/tfgnn/Feature.md | 75 - .../python/tfgnn/FeatureDefaultValues.md | 32 +- .../docs/api_docs/python/tfgnn/Field.md | 10 +- .../api_docs/python/tfgnn/FieldOrFields.md | 10 +- .../docs/api_docs/python/tfgnn/FieldSpec.md | 10 +- .../docs/api_docs/python/tfgnn/Fields.md | 10 +- .../docs/api_docs/python/tfgnn/FieldsSpec.md | 10 +- .../docs/api_docs/python/tfgnn/GraphSchema.md | 90 -- .../docs/api_docs/python/tfgnn/GraphTensor.md | 289 ++-- .../api_docs/python/tfgnn/GraphTensorSpec.md | 192 +-- .../api_docs/python/tfgnn/HyperAdjacency.md | 168 +-- .../python/tfgnn/HyperAdjacencySpec.md | 200 +-- .../python/tfgnn/IncidentNodeOrContextTag.md | 10 +- .../docs/api_docs/python/tfgnn/NodeSet.md | 170 ++- .../docs/api_docs/python/tfgnn/NodeSetSpec.md | 193 +-- .../api_docs/python/tfgnn/SizeConstraints.md | 36 +- .../api_docs/python/tfgnn/ValidationError.md | 16 +- .../tfgnn/add_readout_from_first_node.md | 23 +- .../api_docs/python/tfgnn/add_self_loops.md | 49 +- .../docs/api_docs/python/tfgnn/all_symbols.md | 21 +- .../python/tfgnn/assert_constraints.md | 26 +- .../assert_satisfies_size_constraints.md | 38 +- .../docs/api_docs/python/tfgnn/broadcast.md | 34 +- .../tfgnn/broadcast_context_to_edges.md | 34 +- .../tfgnn/broadcast_context_to_nodes.md | 34 +- .../python/tfgnn/broadcast_node_to_edges.md | 44 +- .../tfgnn/check_compatible_with_schema_pb.md | 25 +- .../tfgnn/check_homogeneous_graph_tensor.md | 14 +- .../python/tfgnn/check_required_features.md | 48 +- .../python/tfgnn/check_scalar_graph_tensor.md | 15 +- .../api_docs/python/tfgnn/combine_values.md | 41 +- .../python/tfgnn/convert_to_line_graph.md | 31 +- .../tfgnn/create_graph_spec_from_schema_pb.md | 39 +- .../tfgnn/create_schema_pb_from_graph_spec.md | 36 +- .../tfgnn/dataset_filter_with_summary.md | 36 +- .../python/tfgnn/dataset_from_generator.md | 22 +- .../tfgnn/disable_graph_tensor_validation.md | 19 + ...able_graph_tensor_validation_at_runtime.md | 15 + 
.../tfgnn/enable_graph_tensor_validation.md | 15 + ...able_graph_tensor_validation_at_runtime.md | 15 + .../api_docs/python/tfgnn/experimental.md | 13 +- .../tfgnn/find_tight_size_constraints.md | 42 +- .../python/tfgnn/gather_first_node.md | 44 +- .../python/tfgnn/get_aux_type_prefix.md | 19 +- .../get_homogeneous_node_and_edge_set_name.md | 25 +- .../docs/api_docs/python/tfgnn/get_io_spec.md | 35 +- .../get_registered_reduce_operation_names.md | 13 +- .../python/tfgnn/graph_tensor_to_values.md | 24 +- .../docs/api_docs/python/tfgnn/homogeneous.md | 36 +- .../api_docs/python/tfgnn/is_dense_tensor.md | 13 +- .../api_docs/python/tfgnn/is_graph_tensor.md | 15 +- .../api_docs/python/tfgnn/is_ragged_tensor.md | 13 +- .../api_docs/python/tfgnn/iter_features.md | 36 +- .../docs/api_docs/python/tfgnn/iter_sets.md | 36 +- .../docs/api_docs/python/tfgnn/keras.md | 20 +- .../python/tfgnn/keras/ConvGNNBuilder.md | 56 +- .../python/tfgnn/keras/clone_initializer.md | 23 +- .../api_docs/python/tfgnn/keras/layers.md | 59 +- .../keras/layers/AddReadoutFromFirstNode.md | 25 +- .../python/tfgnn/keras/layers/AddSelfLoops.md | 13 +- .../keras/layers/AnyToAnyConvolutionBase.md | 228 ++- .../python/tfgnn/keras/layers/Broadcast.md | 46 +- .../tfgnn/keras/layers/ContextUpdate.md | 59 +- .../tfgnn/keras/layers/EdgeSetUpdate.md | 40 +- .../python/tfgnn/keras/layers/GraphUpdate.md | 44 +- .../python/tfgnn/keras/layers/ItemDropout.md | 39 +- .../tfgnn/keras/layers/MakeEmptyFeature.md | 28 +- .../python/tfgnn/keras/layers/MapFeatures.md | 106 +- .../tfgnn/keras/layers/NextStateFromConcat.md | 22 +- .../tfgnn/keras/layers/NodeSetUpdate.md | 54 +- .../tfgnn/keras/layers/PadToTotalSizes.md | 30 +- .../python/tfgnn/keras/layers/ParseExample.md | 18 +- .../tfgnn/keras/layers/ParseSingleExample.md | 18 +- .../python/tfgnn/keras/layers/Pool.md | 59 +- .../python/tfgnn/keras/layers/Readout.md | 64 +- .../tfgnn/keras/layers/ReadoutFirstNode.md | 54 +- .../keras/layers/ReadoutNamedIntoFeature.md | 57 +- 
.../tfgnn/keras/layers/ResidualNextState.md | 26 +- .../python/tfgnn/keras/layers/SimpleConv.md | 148 +- .../keras/layers/SingleInputNextState.md | 17 +- .../tfgnn/keras/layers/StructuredReadout.md | 31 +- .../learn_fit_or_skip_size_constraints.md | 58 +- .../docs/api_docs/python/tfgnn/mask_edges.md | 27 +- .../docs/api_docs/python/tfgnn/node_degree.md | 30 +- .../python/tfgnn/pad_to_total_sizes.md | 66 +- .../api_docs/python/tfgnn/parse_example.md | 38 +- .../api_docs/python/tfgnn/parse_schema.md | 26 +- .../python/tfgnn/parse_single_example.md | 35 +- .../docs/api_docs/python/tfgnn/pool.md | 46 +- .../python/tfgnn/pool_edges_to_context.md | 40 +- .../python/tfgnn/pool_edges_to_node.md | 46 +- .../python/tfgnn/pool_neighbors_to_node.md | 110 ++ .../tfgnn/pool_neighbors_to_node_feature.md | 105 ++ .../python/tfgnn/pool_nodes_to_context.md | 40 +- .../docs/api_docs/python/tfgnn/proto.md | 68 + .../api_docs/python/tfgnn/proto/BigQuery.md | 91 ++ .../python/tfgnn/proto/BigQuery/TableSpec.md | 40 + .../api_docs/python/tfgnn/proto/Context.md | 36 + .../api_docs/python/tfgnn/proto/EdgeSet.md | 64 + .../api_docs/python/tfgnn/proto/Feature.md | 58 + .../python/tfgnn/proto/GraphSchema.md | 58 + .../api_docs/python/tfgnn/proto/Metadata.md | 54 + .../python/tfgnn/proto/Metadata/KeyValue.md | 33 + .../api_docs/python/tfgnn/proto/NodeSet.md | 50 + .../api_docs/python/tfgnn/proto/OriginInfo.md | 36 + .../python/tfgnn/random_graph_tensor.md | 57 +- .../docs/api_docs/python/tfgnn/read_schema.md | 28 +- .../api_docs/python/tfgnn/reorder_nodes.md | 47 +- .../docs/api_docs/python/tfgnn/reverse_tag.md | 15 +- .../docs/api_docs/python/tfgnn/sampler.md | 29 +- .../python/tfgnn/sampler/SamplingOp.md | 33 +- .../python/tfgnn/sampler/SamplingSpec.md | 28 +- .../tfgnn/sampler/SamplingSpecBuilder.md | 27 +- .../tfgnn/sampler/make_sampling_spec_tree.md | 35 +- .../tfgnn/satisfies_size_constraints.md | 28 +- .../python/tfgnn/shuffle_features_globally.md | 29 +- 
.../api_docs/python/tfgnn/shuffle_nodes.md | 30 +- .../docs/api_docs/python/tfgnn/softmax.md | 41 +- .../python/tfgnn/softmax_edges_per_node.md | 13 +- .../python/tfgnn/structured_readout.md | 33 +- .../tfgnn/structured_readout_into_feature.md | 51 +- .../validate_graph_tensor_for_readout.md | 29 +- .../validate_graph_tensor_spec_for_readout.md | 25 +- .../api_docs/python/tfgnn/validate_schema.md | 34 +- .../api_docs/python/tfgnn/write_example.md | 36 +- .../api_docs/python/tfgnn/write_schema.md | 28 +- .../models/multi_head_attention/layers.py | 28 +- 240 files changed, 15611 insertions(+), 4234 deletions(-) create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/AllSvdMetrics.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/BarlowTwinsTask.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ContrastiveLossTask.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/CorruptionSpec.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/Corruptor.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxLogits.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxTask.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DropoutFeatures.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ShuffleFeaturesGlobally.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletEmbeddingSquaredDistances.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletLossTask.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/VicRegTask.md create mode 100644 
tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/all_symbols.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/coherence.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/numerical_rank.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/pseudo_condition_number.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/rankme.md create mode 100644 tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/self_clustering.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/ContextLabelFn.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/DatasetProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/DotProductLinkPrediction.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/FitOrSkipPadding.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphBinaryClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMeanAbsoluteError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMeanAbsolutePercentageError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMeanSquaredError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMeanSquaredLogScaledError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMeanSquaredLogarithmicError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphMulticlassClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphTensorPadding.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/GraphTensorProcessorFn.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/HadamardProductLinkPrediction.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/IntegratedGradientsExporter.md create mode 100644 
tensorflow_gnn/docs/api_docs/python/runner/KerasModelExporter.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/KerasTrainer.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/KerasTrainerCheckpointOptions.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/KerasTrainerOptions.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Loss.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Losses.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Metrics.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/ModelExporter.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/NodeBinaryClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/NodeMulticlassClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/ParameterServerStrategy.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/PassthruDatasetProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/PassthruSampleDatasetsProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Predictions.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeBinaryClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeLabelFn.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanAbsoluteError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanAbsoluteLogarithmicError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanAbsolutePercentageError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanSquaredError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanSquaredLogScaledError.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RootNodeMeanSquaredLogarithmicError.md create mode 100644 
tensorflow_gnn/docs/api_docs/python/runner/RootNodeMulticlassClassification.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/RunResult.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/SampleTFRecordDatasetsProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/SimpleDatasetProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/SimpleSampleDatasetsProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/SubmoduleExporter.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/TFDataServiceConfig.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/TFRecordDatasetProvider.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/TPUStrategy.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Task.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/TightPadding.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/Trainer.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/all_symbols.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/export_model.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/incrementing_model_dir.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/integrated_gradients.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/one_node_per_component.md create mode 100644 tensorflow_gnn/docs/api_docs/python/runner/run.md delete mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/Feature.md delete mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/GraphSchema.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/disable_graph_tensor_validation.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/disable_graph_tensor_validation_at_runtime.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/enable_graph_tensor_validation.md create mode 100644 
tensorflow_gnn/docs/api_docs/python/tfgnn/enable_graph_tensor_validation_at_runtime.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/pool_neighbors_to_node.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/pool_neighbors_to_node_feature.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/BigQuery.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/BigQuery/TableSpec.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/Context.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/EdgeSet.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/Feature.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/GraphSchema.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/Metadata.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/Metadata/KeyValue.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/NodeSet.md create mode 100644 tensorflow_gnn/docs/api_docs/python/tfgnn/proto/OriginInfo.md diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses.md new file mode 100644 index 00000000..0eb3460d --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses.md @@ -0,0 +1,69 @@ +# Module: contrastive_losses + + + + + View source +on GitHub + +Contrastive losses. + +Users of TF-GNN can use these layers by importing them next to the core library: + +```python +import tensorflow_gnn as tfgnn +from tensorflow_gnn.models import contrastive_losses +``` + +## Classes + +[`class AllSvdMetrics`](./contrastive_losses/AllSvdMetrics.md): Computes +multiple metrics for representations using one SVD call. + +[`class BarlowTwinsTask`](./contrastive_losses/BarlowTwinsTask.md): A Barlow +Twins (BT) Task. 
+ +[`class ContrastiveLossTask`](./contrastive_losses/ContrastiveLossTask.md): Base +class for unsupervised contrastive representation learning tasks. + +[`class CorruptionSpec`](./contrastive_losses/CorruptionSpec.md): Class for +defining a corruption specification. + +[`class Corruptor`](./contrastive_losses/Corruptor.md): Base class for graph +corruptors. + +[`class DeepGraphInfomaxLogits`](./contrastive_losses/DeepGraphInfomaxLogits.md): +Computes clean and corrupted logits for Deep Graph Infomax (DGI). + +[`class DeepGraphInfomaxTask`](./contrastive_losses/DeepGraphInfomaxTask.md): A +Deep Graph Infomax (DGI) Task. + +[`class DropoutFeatures`](./contrastive_losses/DropoutFeatures.md): A corruptor +that applies dropout to features. + +[`class ShuffleFeaturesGlobally`](./contrastive_losses/ShuffleFeaturesGlobally.md): +A corruptor that shuffles features. + +[`class TripletEmbeddingSquaredDistances`](./contrastive_losses/TripletEmbeddingSquaredDistances.md): +Computes embedding distances between positive and negative pairs. + +[`class TripletLossTask`](./contrastive_losses/TripletLossTask.md): The triplet +loss task. + +[`class VicRegTask`](./contrastive_losses/VicRegTask.md): A VICReg Task. + +## Functions + +[`coherence(...)`](./contrastive_losses/coherence.md): Coherence metric +implementation. + +[`numerical_rank(...)`](./contrastive_losses/numerical_rank.md): Numerical rank +implementation. + +[`pseudo_condition_number(...)`](./contrastive_losses/pseudo_condition_number.md): +Pseudo-condition number metric implementation. + +[`rankme(...)`](./contrastive_losses/rankme.md): RankMe metric implementation. + +[`self_clustering(...)`](./contrastive_losses/self_clustering.md): +Self-clustering metric implementation. 
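As a rough illustration of the representation-quality metrics listed above, a RankMe-style score can be computed from the singular values of a representation matrix. The following is a hedged NumPy sketch of the published formula (arXiv:2210.02885), not the library's `rankme` implementation; the function and argument names are illustrative only.

```python
import numpy as np

def rankme_sketch(representations, epsilon=1e-12):
    """Smooth effective-rank estimate from singular values (illustrative)."""
    s = np.linalg.svd(representations, compute_uv=False)
    p = s / (s.sum() + epsilon) + epsilon  # normalized singular-value spectrum
    # Exponential of the spectrum's Shannon entropy.
    return float(np.exp(-np.sum(p * np.log(p))))

# A full-rank orthonormal matrix scores close to its dimension.
print(rankme_sketch(np.eye(4)))  # approximately 4.0
```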
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/AllSvdMetrics.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/AllSvdMetrics.md new file mode 100644 index 00000000..5d7964d3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/AllSvdMetrics.md @@ -0,0 +1,202 @@ +# contrastive_losses.AllSvdMetrics + + + + + View source +on GitHub + +Computes multiple metrics for representations using one SVD call. + + + + + +Refer to https://arxiv.org/abs/2305.16562 for more details. + + + + + + + + + + + + + + + + + +
+fns + +a mapping from a metric name to a Callable that accepts +representations as well as the result of their SVD decomposition. +Currently only singular values are passed. +
+y_pred_transform_fn + +a function to extract clean representations +from model predictions. By default, no transformation is applied. +
+name + +Name for the metric class, used for Keras bookkeeping. +
+ +## Methods + +

merge_state

+ + + +Merges the state from one or more metrics. + +This method can be used by distributed systems to merge the state computed by +different metric instances. Typically the state will be stored in the form of +the metric's weights. For example, a tf.keras.metrics.Mean metric contains a +list of two weight values: a total and a count. If there were two instances of a +tf.keras.metrics.Accuracy that each independently aggregated partial state for +an overall accuracy calculation, these two metrics' states could be combined as +follows: + +``` +>>> m1 = tf.keras.metrics.Accuracy() +>>> _ = m1.update_state([[1], [2]], [[0], [2]]) +``` + +``` +>>> m2 = tf.keras.metrics.Accuracy() +>>> _ = m2.update_state([[3], [4]], [[3], [4]]) +``` + +``` +>>> m2.merge_state([m1]) +>>> m2.result().numpy() +0.75 +``` + + + + + + + + + + + 
Args
+metrics + +an iterable of metrics. The metrics must have compatible +state. +
+ + + + + + + + + + + +
Raises
+ValueError + +If the provided iterable does not contain metrics matching +the metric's required specifications. +
+ +

reset_state

+ +View +source + + + +Resets all of the metric state variables. + +This function is called between epochs/steps, when a metric is evaluated during +training. + +

result

+ +View +source + + + +Computes and returns the scalar metric value tensor or a dict of scalars. + +Result computation is an idempotent operation that simply calculates the metric +value using the state variables. + + + + + + + + + + +
Returns
+A scalar tensor, or a dictionary of scalar tensors. +
+ +

update_state

+ +View +source + + + +Accumulates statistics for the metric. + +Note: This function is executed as a graph function in graph mode. This means: +a) Operations on the same resource are executed in textual order. This should +make it easier to do things like add the updated value of a variable to another, +for example. b) You don't need to worry about collecting the update ops to +execute. All update ops added to the graph by this function will be executed. As +a result, code should generally work the same way with graph or eager execution. + + + + + + + + + + + + +
Args
*args + +
+**kwargs + +A mini-batch of inputs to the Metric. +
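The idea behind `AllSvdMetrics` — paying for a single SVD and deriving several spectral metrics from the shared singular values — can be sketched outside the library. The metric formulas below (a RankMe-style effective rank, a stable-rank proxy, and a condition number) are common definitions from the literature and may differ in detail from the library's implementations; all names are illustrative.

```python
import numpy as np

def svd_metrics(representations, epsilon=1e-12):
    """Derive several spectral metrics from one SVD call (illustrative)."""
    s = np.linalg.svd(representations, compute_uv=False)  # descending order
    p = s / (s.sum() + epsilon) + epsilon
    effective_rank = np.exp(-np.sum(p * np.log(p)))     # RankMe-style score
    stable_rank = np.sum(s**2) / (s[0] ** 2 + epsilon)  # numerical-rank proxy
    condition = s[0] / (s[-1] + epsilon)                # condition number
    return {
        "effective_rank": float(effective_rank),
        "stable_rank": float(stable_rank),
        "condition_number": float(condition),
    }

metrics = svd_metrics(np.eye(3))  # all metrics near 3.0 / 1.0 for identity
```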
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/BarlowTwinsTask.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/BarlowTwinsTask.md new file mode 100644 index 00000000..8ef94ef3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/BarlowTwinsTask.md @@ -0,0 +1,168 @@ +# contrastive_losses.BarlowTwinsTask + + + + + View source +on GitHub + +A Barlow Twins (BT) Task. + +Inherits From: +[`ContrastiveLossTask`](../contrastive_losses/ContrastiveLossTask.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+node_set_name + +Name of the node set for readout. +
+feature_name + +Feature name for readout. +
+representations_layer_name + +Layer name for uncorrupted representations. +
+corruptor + +Corruptor instance for creating negative samples. If not +specified, we use ShuffleFeaturesGlobally by default. +
+projector_units + +Sequence of layer sizes for projector network. +Projectors prevent dimensional collapse, but can hinder training for +easy corruptions. For more details, see +https://arxiv.org/abs/2304.12210. +
+seed + +Random seed for the default corruptor (ShuffleFeaturesGlobally). +
+ +## Methods + +

losses

+ +View +source + + + +Returns arbitrary task-specific losses. + 

make_contrastive_layer

+ +View +source + + + +Returns the layer contrasting clean outputs with the corrupted ones. + 

metrics

+ +View +source + + + +Returns arbitrary task-specific metrics. + 

predict

+ +View +source + + + +Applies a readout head for use with various contrastive losses. + 
Args
+*args + +A tuple of (clean, corrupted) tfgnn.GraphTensors. +
+ + + + + + + + + + +
Returns
+The logits for some contrastive loss as produced by the implementing +subclass. +
+ +

preprocess

+ +View +source + + + +Applies a `Corruptor` and returns empty pseudo-labels. diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ContrastiveLossTask.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ContrastiveLossTask.md new file mode 100644 index 00000000..f25e9ec3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ContrastiveLossTask.md @@ -0,0 +1,186 @@ +# contrastive_losses.ContrastiveLossTask + + + + + View source +on GitHub + +Base class for unsupervised contrastive representation learning tasks. + + + + + +The process is separated into a preprocessing part and a contrastive part, with a +focus on the reusability of individual components. `preprocess` produces the input +GraphTensors to be used with `predict`, as well as labels for the task. The +default `predict` method implementation expects a pair of positive and negative +GraphTensors. The literature proposes multiple ways to learn +representations from the resulting activations; we support them through custom +losses. + +Any subclass must implement the `make_contrastive_layer` method, which produces the +final prediction outputs. + +If the loss involves labels for each example, subclasses should leverage the +`losses` and `metrics` methods to specify the task's losses. When the loss only +involves model outputs, `make_contrastive_layer` should output both positive and +perturbed examples, and `losses` should use pseudo-labels. + +Any model-specific preprocessing should be implemented in `preprocess`. + + + + + + + + + + + + + + + + + + + + + + + + + + 
+node_set_name + +Name of the node set for readout. +
+feature_name + +Feature name for readout. +
+representations_layer_name + +Layer name for uncorrupted representations. +
+corruptor + +Corruptor instance for creating negative samples. If not +specified, we use ShuffleFeaturesGlobally by default. +
+projector_units + +Sequence of layer sizes for projector network. +Projectors prevent dimensional collapse, but can hinder training for +easy corruptions. For more details, see +https://arxiv.org/abs/2304.12210. +
+seed + +Random seed for the default corruptor (ShuffleFeaturesGlobally). +
+ +## Methods + +

losses

+ + + +Returns arbitrary task-specific losses. + 

make_contrastive_layer

+ +View +source + + + +Returns the layer contrasting clean outputs with the corrupted ones. + 

metrics

+ +View +source + + + +Returns arbitrary task-specific metrics. + 

predict

+ +View +source + + + +Applies a readout head for use with various contrastive losses. + 
Args
+*args + +A tuple of (clean, corrupted) tfgnn.GraphTensors. +
+ + + + + + + + + + +
Returns
+The logits for some contrastive loss as produced by the implementing +subclass. +
+ +

preprocess

+ +View +source + + + +Applies a `Corruptor` and returns empty pseudo-labels. diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/CorruptionSpec.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/CorruptionSpec.md new file mode 100644 index 00000000..ead30912 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/CorruptionSpec.md @@ -0,0 +1,87 @@ +# contrastive_losses.CorruptionSpec + + + + + View source +on GitHub + +Class for defining a corruption specification. + + + + + +This has three fields specifying the corruption behavior of node sets, edge +sets, and context. + +The value at the key "*" is a wildcard that applies to all features or to all +node/edge sets. + +#### Some example usages: + +Want: corrupt everything with parameter 1.0. Solution: either set the default to +1.0 or set all corruption specs to `{"*": 1.}`. + +Want: corrupt all context features with parameter 1.0 except for "feat", which +should not be corrupted. Solution: set `context_corruption` to `{"feat": 0., +"*": 1.}`. + 
+node_set_corruption + +Dataclass field +
+edge_set_corruption + +Dataclass field +
+context_corruption + +Dataclass field +
+ +## Methods + +
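For illustration, a corruption spec of the documented form is a plain mapping from feature name to corruption parameter, with `"*"` as the wildcard. The helper `rate_for` below is a hypothetical lookup, not part of the API:

```python
# Corrupt everything with parameter 1.0 (first documented example).
corrupt_everything = {"*": 1.0}

# Corrupt all context features with parameter 1.0 except "feat"
# (second documented example).
context_corruption = {"feat": 0.0, "*": 1.0}

def rate_for(feature_name, spec):
    # Hypothetical lookup: a feature's own entry wins over the wildcard.
    return spec.get(feature_name, spec.get("*", 0.0))
```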

with_default

+ +View +source + + + +

__eq__

+ + + +Return self==value. diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/Corruptor.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/Corruptor.md new file mode 100644 index 00000000..89a71bb3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/Corruptor.md @@ -0,0 +1,58 @@ +# contrastive_losses.Corruptor + + + + + View source +on GitHub + +Base class for graph corruptor. + + + + + + + + + + + + + + + + + + + + + + + +
+corruption_spec + +A spec for corruption application. +
+corruption_fn + +Corruption function. +
+default + +Global application default of the corruptor. This is only used +when corruption_spec is None. +
+**kwargs + +Additional keyword arguments. +
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxLogits.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxLogits.md new file mode 100644 index 00000000..60a512a3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxLogits.md @@ -0,0 +1,17 @@ +# contrastive_losses.DeepGraphInfomaxLogits + + + + + View source +on GitHub + +Computes clean and corrupted logits for Deep Graph Infomax (DGI). + + + + diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxTask.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxTask.md new file mode 100644 index 00000000..1b35ea21 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DeepGraphInfomaxTask.md @@ -0,0 +1,165 @@ +# contrastive_losses.DeepGraphInfomaxTask + + + + + View source +on GitHub + +A Deep Graph Infomax (DGI) Task. + +Inherits From: +[`ContrastiveLossTask`](../contrastive_losses/ContrastiveLossTask.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+node_set_name + +Name of the node set for readout. +
+feature_name + +Feature name for readout. +
+representations_layer_name + +Layer name for uncorrupted representations. +
+corruptor + +Corruptor instance for creating negative samples. If not +specified, we use ShuffleFeaturesGlobally by default. +
+projector_units + +Sequence of layer sizes for projector network. +Projectors prevent dimensional collapse, but can hinder training for +easy corruptions. For more details, see +https://arxiv.org/abs/2304.12210. +
+seed + +Random seed for the default corruptor (ShuffleFeaturesGlobally). +
+ +## Methods + +

losses

+ +View +source + + + +Returns arbitrary task specific losses. + +

make_contrastive_layer

+ +View +source + + + +Returns the layer contrasting clean outputs with the corrupted ones. + +

metrics

+ +View +source + + + +Returns arbitrary task specific metrics. + +

predict

+ +View +source + + + +Apply a readout head for use with various contrastive losses. + + + + + + + + + + + +
Args
+*args + +A tuple of (clean, corrupted) tfgnn.GraphTensors. +
+ + + + + + + + + + +
Returns
+The logits for some contrastive loss as produced by the implementing +subclass. +
+ +
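For intuition, DGI-style logits are commonly produced by a bilinear discriminator that scores each node representation against a summary vector. The plain-Python sketch below (with an assumed identity weight matrix) illustrates the idea only; it is not this layer's actual implementation:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bilinear_logit(h, weights, summary):
    # DGI-style score h^T W s: trained to be high for clean (real) pairs
    # and low for corrupted ones.
    ws = [dot(row, summary) for row in weights]
    return dot(h, ws)

identity = [[1.0, 0.0], [0.0, 1.0]]
summary = [1.0, 1.0]
clean_logit = bilinear_logit([1.0, 0.0], identity, summary)       # 1.0
corrupted_logit = bilinear_logit([-1.0, 0.0], identity, summary)  # -1.0
```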

preprocess

+ +View +source + + + +Creates labels--i.e., (positive, negative)--for Deep Graph Infomax. diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DropoutFeatures.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DropoutFeatures.md new file mode 100644 index 00000000..161ecffb --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/DropoutFeatures.md @@ -0,0 +1,56 @@ +# contrastive_losses.DropoutFeatures + + + + + View source +on GitHub + +Base class for graph corruptor. + +Inherits From: [`Corruptor`](../contrastive_losses/Corruptor.md) + + + + + + + + + + + + + + + + + + + + + + + +
+corruption_spec + +A spec for corruption application. +
+corruption_fn + +Corruption function. +
+default + +Global application default of the corruptor. This is only used +when corruption_spec is None. +
+**kwargs + +Additional keyword arguments. +
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ShuffleFeaturesGlobally.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ShuffleFeaturesGlobally.md new file mode 100644 index 00000000..88c73fa3 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/ShuffleFeaturesGlobally.md @@ -0,0 +1,60 @@ +# contrastive_losses.ShuffleFeaturesGlobally + + + + + View source +on GitHub + +A corruptor that shuffles features. + +Inherits From: [`Corruptor`](../contrastive_losses/Corruptor.md) + + + + + +NOTE: this function does not currently support TPUs. Consider using other +corruptor functions if executing on TPUs. See b/269249455 for reference. + + + + + + + + + + + + + + + + + + + + +
+corruption_spec + +A spec for corruption application. +
+corruption_fn + +Corruption function. +
+default + +Global application default of the corruptor. This is only used +when corruption_spec is None. +
+**kwargs + +Additional keyword arguments. +
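The corruption idea can be sketched in plain Python: each feature column is permuted independently across the batch, so corrupted examples keep realistic per-feature marginals while their joint structure is broken. This is only an illustration; the actual layer operates on `tfgnn.GraphTensor` node, edge, and context features:

```python
import random

def shuffle_features_globally(features, seed=8191):
    # `features` is a list of examples, each a list of feature values.
    # Permute each feature column independently across examples.
    rng = random.Random(seed)
    columns = [list(col) for col in zip(*features)]
    for col in columns:
        rng.shuffle(col)
    return [list(row) for row in zip(*columns)]

corrupted = shuffle_features_globally([[1, 10], [2, 20], [3, 30]])
```

Each column of the output is a permutation of the corresponding input column.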
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletEmbeddingSquaredDistances.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletEmbeddingSquaredDistances.md new file mode 100644 index 00000000..7af68145 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletEmbeddingSquaredDistances.md @@ -0,0 +1,17 @@ +# contrastive_losses.TripletEmbeddingSquaredDistances + + + + + View source +on GitHub + +Computes embeddings distance between positive and negative pairs. + + + + diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletLossTask.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletLossTask.md new file mode 100644 index 00000000..82e5af85 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/TripletLossTask.md @@ -0,0 +1,200 @@ +# contrastive_losses.TripletLossTask + + + + + View source +on GitHub + +The triplet loss task. + +Inherits From: +[`ContrastiveLossTask`](../contrastive_losses/ContrastiveLossTask.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+node_set_name + +Name of the node set for readout. +
+feature_name + +Feature name for readout. +
+representations_layer_name + +Layer name for uncorrupted representations. +
+corruptor + +Corruptor instance for creating negative samples. If not +specified, we use ShuffleFeaturesGlobally by default. +
+projector_units + +Sequence of layer sizes for projector network. +Projectors prevent dimensional collapse, but can hinder training for +easy corruptions. For more details, see +https://arxiv.org/abs/2304.12210. +
+seed + +Random seed for the default corruptor (ShuffleFeaturesGlobally). +
+ +## Methods + +

losses

+ +View +source + + + +Returns arbitrary task specific losses. + +

make_contrastive_layer

+ +View +source + + + +Returns the layer contrasting clean outputs with the corrupted ones. + +

metrics

+ +View +source + + + +Returns arbitrary task specific metrics. + +

predict

+ +View +source + + + +Apply a readout head for use with triplet contrastive loss. + + + + + + + + + + + +
Args
+*args + +A tuple of (anchor, positive_sample, negative_sample) +tfgnn.GraphTensors. +
+ + + + + + + + + + +
Returns
+The positive and negative distance embeddings for triplet loss as produced +by the implementing subclass. +
+ +

preprocess

+ +View +source + + + +Creates unused pseudo-labels. + +The input tensor should have the anchor and positive sample stacked along the +first dimension for each feature within each node set. The corruptor is applied +on the positive sample. + + + + + + + + + + + +
Args
+inputs + +The anchor and positive sample stacked along the first axis. +
+ + + + + + + + + + +
Returns
+Sequence of three graph tensors (anchor, positive_sample, +corrupted_sample) and unused pseudo-labels. +
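In the standard triplet formulation (assumed here; the margin value and reduction are illustrative, not this task's configuration), the squared distances feed a hinge loss that pulls the positive closer to the anchor than the corrupted negative:

```python
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge on the difference of squared distances: zero once the positive
    # is closer than the negative by at least `margin`.
    d_pos = squared_distance(anchor, positive)
    d_neg = squared_distance(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

# Positive much closer than negative: the hinge is inactive.
loss = triplet_loss([0.0, 0.0], [0.0, 1.0], [3.0, 0.0])  # 0.0
```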
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/VicRegTask.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/VicRegTask.md new file mode 100644 index 00000000..cf466b46 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/VicRegTask.md @@ -0,0 +1,169 @@ +# contrastive_losses.VicRegTask + + + + + View source +on GitHub + +A VICReg Task. + +Inherits From: +[`ContrastiveLossTask`](../contrastive_losses/ContrastiveLossTask.md) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+node_set_name + +Name of the node set for readout. +
+feature_name + +Feature name for readout. +
+representations_layer_name + +Layer name for uncorrupted representations. +
+corruptor + +Corruptor instance for creating negative samples. If not +specified, we use ShuffleFeaturesGlobally by default. +
+projector_units + +Sequence of layer sizes for projector network. +Projectors prevent dimensional collapse, but can hinder training for +easy corruptions. For more details, see +https://arxiv.org/abs/2304.12210. +
+seed + +Random seed for the default corruptor (ShuffleFeaturesGlobally). +
+ +## Methods + +

losses

+ +View +source + + + +Returns arbitrary task specific losses. + +

make_contrastive_layer

+ +View +source + + + +Returns the layer contrasting clean outputs with the corrupted ones. + +

metrics

+ +View +source + + + +Returns arbitrary task specific metrics. + +

predict

+ +View +source + + + +Apply a readout head for use with various contrastive losses. + + + + + + + + + + + +
Args
+*args + +A tuple of (clean, corrupted) tfgnn.GraphTensors. +
+ + + + + + + + + + +
Returns
+The logits for some contrastive loss as produced by the implementing +subclass. +
+ +
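For intuition, the three VICReg terms (Bardes et al., 2022) can be sketched with NumPy. The weighting coefficients and exact normalization below are illustrative assumptions, not this task's configuration:

```python
import numpy as np

def vicreg_terms(z_a, z_b, eps=1e-4):
    # Invariance: mean squared error between the clean and corrupted views.
    invariance = float(np.mean((z_a - z_b) ** 2))
    # Variance: hinge keeping each embedding dimension's std above 1.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    variance = float(np.mean(np.maximum(0.0, 1.0 - std_a)))
    # Covariance: squared off-diagonal entries of the covariance matrix,
    # penalizing redundant (correlated) dimensions.
    c = np.cov(z_a, rowvar=False)
    d = z_a.shape[1]
    covariance = float((np.sum(c ** 2) - np.sum(np.diag(c) ** 2)) / d)
    return invariance, variance, covariance

z = np.array([[0.0, 0.0], [1.0, 1.0]])
inv, var, cov = vicreg_terms(z, z)  # identical views: invariance term is 0
```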

preprocess

+ +View +source + + + +Applies a `Corruptor` and returns empty pseudo-labels. diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/all_symbols.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/all_symbols.md new file mode 100644 index 00000000..72b37858 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/all_symbols.md @@ -0,0 +1,24 @@ +# All symbols in TensorFlow GNN Models: ContrastiveLosses + + + +## Primary symbols + +* contrastive_losses +* contrastive_losses.AllSvdMetrics +* contrastive_losses.BarlowTwinsTask +* contrastive_losses.ContrastiveLossTask +* contrastive_losses.CorruptionSpec +* contrastive_losses.Corruptor +* contrastive_losses.DeepGraphInfomaxLogits +* contrastive_losses.DeepGraphInfomaxTask +* contrastive_losses.DropoutFeatures +* contrastive_losses.ShuffleFeaturesGlobally +* contrastive_losses.TripletEmbeddingSquaredDistances +* contrastive_losses.TripletLossTask +* contrastive_losses.VicRegTask +* contrastive_losses.coherence +* contrastive_losses.numerical_rank +* contrastive_losses.pseudo_condition_number +* contrastive_losses.rankme +* contrastive_losses.self_clustering diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/coherence.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/coherence.md new file mode 100644 index 00000000..66ffc367 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/coherence.md @@ -0,0 +1,69 @@ +# contrastive_losses.coherence + + + + + View source +on GitHub + +Coherence metric implementation. + + + + + +Coherence measures how easy it is to construct a linear classifier on top of +data without knowing downstream labels. Refer to +https://arxiv.org/abs/2305.16562 for more details. + + + + + + + + + + + + + + + + +
+representations + +Input representations, a rank-2 tensor. +
+sigma + +Unused. +
+u + +An optional tensor with left singular vectors of representations. If not +present, computes a SVD of representations. +
+ + + + + + + + + + +
+Metric value as scalar tf.Tensor. +
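A classical definition of matrix coherence for an `n x k` matrix `U` with orthonormal columns is `(n / k) * max_i ||U_i||^2`, ranging from 1 (mass spread evenly over rows) to `n / k` (mass concentrated on few rows). The sketch below uses that classical definition; this function's exact normalization may differ:

```python
def coherence_classical(u_rows, k):
    # u_rows: rows of U (n x k), whose columns are orthonormal.
    n = len(u_rows)
    max_row_norm_sq = max(sum(x * x for x in row) for row in u_rows)
    return (n / k) * max_row_norm_sq

# Singular vectors concentrated on few rows: maximally coherent (n/k).
spiky = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
# Mass spread evenly across rows: minimally coherent (1.0).
spread = [[0.5, 0.5], [0.5, -0.5], [-0.5, 0.5], [-0.5, -0.5]]
```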
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/numerical_rank.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/numerical_rank.md new file mode 100644 index 00000000..567971ef --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/numerical_rank.md @@ -0,0 +1,70 @@ +# contrastive_losses.numerical_rank + + + + + View source +on GitHub + +Numerical rank implementation. + + + + + +Computes a metric that estimates the numerical column rank of a matrix. Rank is +estimated as a matrix trace divided by the largest eigenvalue. When our matrix +is a covariance matrix, we can compute both the trace and the largest eigenvalue +efficiently via SVD as the largest singular value squared. + + + + + + + + + + + + + + + + + +
+representations + +Input representations. We expect rank 2 input. +
+sigma + +An optional tensor with singular values of representations. If not +present, computes SVD (singular values only) of representations. +
+u + +Unused. +
+ + + + + + + + + + +
+Metric value as scalar tf.Tensor. +
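Following the description above (trace of the covariance divided by its largest eigenvalue, both obtainable from the singular values of the data matrix), a plain-Python sketch over precomputed singular values:

```python
def numerical_rank_from_sigma(sigma):
    # trace(X^T X) is the sum of squared singular values of X; the largest
    # eigenvalue of X^T X is the largest singular value squared.
    largest_sq = max(sigma) ** 2
    return sum(s * s for s in sigma) / largest_sq
```

Equal singular values yield the full rank; a dominant singular value pulls the estimate toward 1.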
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/pseudo_condition_number.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/pseudo_condition_number.md new file mode 100644 index 00000000..1ece1671 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/pseudo_condition_number.md @@ -0,0 +1,69 @@ +# contrastive_losses.pseudo_condition_number + + + + + View source +on GitHub + +Pseudo-condition number metric implementation. + + + + + +Computes a metric that measures the decay rate of the singular values. NOTE: Can +be unstable in practice, when using small batch sizes, leading to numerical +instabilities. + + + + + + + + + + + + + + + + + +
+representations + +Input representations. We expect rank 2 input. +
+sigma + +An optional tensor with singular values of representations. If not +present, computes SVD (singular values only) of representations. +
+u + +Unused. +
+ + + + + + + + + + +
+Metric value as scalar tf.Tensor. +
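For reference, the classical condition number is the ratio of the largest to the smallest singular value; a near-zero smallest singular value (more likely with small batches) blows this ratio up, which is one way to read the stability note above. The exact "pseudo" variant computed by this function is not spelled out on this page, so the sketch below shows only the classical quantity:

```python
def condition_number(sigma):
    # Classical condition number of a matrix, from its singular values.
    return max(sigma) / min(sigma)
```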
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/rankme.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/rankme.md new file mode 100644 index 00000000..90223259 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/rankme.md @@ -0,0 +1,77 @@ +# contrastive_losses.rankme + + + + + View source +on GitHub + +RankMe metric implementation. + + + + + +Computes a metric that measures the decay rate of the singular values. For the +paper, see https://arxiv.org/abs/2210.02885. + + + + + + + + + + + + + + + + + + + + +
+representations + +Input representations as rank-2 tensor. +
+sigma + +An optional tensor with singular values of representations. If not +present, computes SVD (singular values only) of representations. +
+u + +Unused. +
+epsilon + +Epsilon for numerical stability. +
+ + + + + + + + + + +
+Metric value as scalar tf.Tensor. +
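Per the cited paper (Garrido et al., arXiv:2210.02885), RankMe is the exponentiated Shannon entropy of the normalized singular-value distribution; a sketch over precomputed singular values:

```python
import math

def rankme_from_sigma(sigma, epsilon=1e-12):
    total = sum(sigma)
    # Normalize singular values into a distribution; epsilon guards log(0).
    p = [s / total + epsilon for s in sigma]
    return math.exp(-sum(x * math.log(x) for x in p))
```

Equal singular values give a value near the true rank; rapidly decaying ones give a value near 1.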
diff --git a/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/self_clustering.md b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/self_clustering.md new file mode 100644 index 00000000..474189d5 --- /dev/null +++ b/tensorflow_gnn/docs/api_docs/python/models/contrastive_losses/self_clustering.md @@ -0,0 +1,63 @@ +# contrastive_losses.self_clustering + + + + + View source +on GitHub + +Self-clustering metric implementation. + + + + + +Computes a metric that measures how well distributed representations are, if +projected on the unit sphere. If `subtract_mean` is True, we additionally remove +the mean from representations. The metric has a range of (-0.5, 1]. It achieves +its maximum of 1 if representations collapse to a single point, and it is +approximately 0 if representations are distributed randomly on the sphere. In +theory, it can achieve negative values if the points are maximally equiangular, +although this is very rare in practice. Refer to +https://arxiv.org/abs/2305.16562 for more details. + + + + + + + + + + + + + + +
+representations + +Input representations. +
+subtract_mean + +Whether to subtract the mean from representations. +
+ + + + + + + + + + +
+Metric value as scalar tf.Tensor. +
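Based on the description above and our reading of the cited paper (arXiv:2305.16562), the metric compares the sum of squared pairwise cosine similarities against its expected value for uniformly random points on the sphere; treat the exact normalization below as an assumption:

```python
def self_clustering_sketch(representations):
    # Project rows onto the unit sphere.
    normed = []
    for row in representations:
        norm = sum(x * x for x in row) ** 0.5
        normed.append([x / norm for x in row])
    n, d = len(normed), len(normed[0])
    # Sum of squared pairwise cosine similarities, i.e. ||X X^T||_F^2.
    total = sum(
        sum(x * y for x, y in zip(a, b)) ** 2 for a in normed for b in normed
    )
    expected = n + n * (n - 1) / d  # expectation for random points
    return (total - expected) / (n * n - expected)

# Collapsed representations (all the same direction) give the maximum of 1.
collapsed = self_clustering_sketch([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
```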
diff --git a/tensorflow_gnn/docs/api_docs/python/models/gat_v2.md b/tensorflow_gnn/docs/api_docs/python/models/gat_v2.md index 6a8cf186..2292e9c9 100644 --- a/tensorflow_gnn/docs/api_docs/python/models/gat_v2.md +++ b/tensorflow_gnn/docs/api_docs/python/models/gat_v2.md @@ -1,17 +1,10 @@ # Module: gat_v2 -[TOC] - - - - + + View source +on GitHub Graph Attention Networks v2. diff --git a/tensorflow_gnn/docs/api_docs/python/models/gat_v2/GATv2Conv.md b/tensorflow_gnn/docs/api_docs/python/models/gat_v2/GATv2Conv.md index 677ce5e8..361de99b 100644 --- a/tensorflow_gnn/docs/api_docs/python/models/gat_v2/GATv2Conv.md +++ b/tensorflow_gnn/docs/api_docs/python/models/gat_v2/GATv2Conv.md @@ -1,17 +1,10 @@ # gat_v2.GATv2Conv -[TOC] - - - - + + View source +on GitHub The multi-head attention from Graph Attention Networks v2 (GATv2). @@ -86,77 +79,81 @@ example, if the input features have shape `[num_nodes, 2, 4, 1]`, then it will perform an identical computation on each of the `num_nodes * 2 * 4` input values. +This layer can be restored from config by `tf.keras.models.load_model()` when +saved as part of a Keras model using `save_format="tf"`. + +
-`num_heads` +num_heads The number of attention heads.
-`per_head_channels` +per_head_channels The number of channels for each attention head. This means: - if `heads_merge_type == "concat"`, then final output size will be: - `per_head_channels * num_heads`. - if `heads_merge_type == "mean"`, then final output size will be: - `per_head_channels`. + if heads_merge_type == "concat", then final output size will be: + per_head_channels * num_heads. + if heads_merge_type == "mean", then final output size will be: + per_head_channels.
-`receiver_tag` +receiver_tag -one of `tfgnn.SOURCE`, `tfgnn.TARGET` or `tfgnn.CONTEXT`. +one of tfgnn.SOURCE, tfgnn.TARGET or tfgnn.CONTEXT. The results of attention are aggregated for this graph piece. -If set to `tfgnn.SOURCE` or `tfgnn.TARGET`, the layer can be called for +If set to tfgnn.SOURCE or tfgnn.TARGET, the layer can be called for an edge set and will aggregate results at the specified endpoint of the edges. -If set to `tfgnn.CONTEXT`, the layer can be called for an edge set or +If set to tfgnn.CONTEXT, the layer can be called for an edge set or node set. If left unset for init, the tag must be passed at call time.
-`receiver_feature` +receiver_feature -Can be set to override `tfgnn.HIDDEN_STATE` for use as +Can be set to override tfgnn.HIDDEN_STATE for use as the receiver's input feature to attention. (The attention key is derived from this input.)
-`sender_node_feature` +sender_node_feature -Can be set to override `tfgnn.HIDDEN_STATE` for use as +Can be set to override tfgnn.HIDDEN_STATE for use as the input feature from sender nodes to attention. -IMPORTANT: Must be set to `None` for use with `receiver_tag=tfgnn.CONTEXT` +IMPORTANT: Must be set to None for use with receiver_tag=tfgnn.CONTEXT on an edge set, or for pooling from edges without sender node states.
-`sender_edge_feature` +sender_edge_feature Can be set to a feature name of the edge set to select -it as an input feature. By default, this set to `None`, which disables +it as an input feature. By default, this set to None, which disables this input. -IMPORTANT: Must be set for use with `receiver_tag=tfgnn.CONTEXT` +IMPORTANT: Must be set for use with receiver_tag=tfgnn.CONTEXT on an edge set.
-`use_bias` +use_bias If true, a bias term is added to the transformations of query and @@ -164,7 +161,7 @@ value inputs.
-`edge_dropout` +edge_dropout Can be set to a dropout rate for edge dropout. (When pooling @@ -173,29 +170,29 @@ is dropped out.)
-`attention_activation` +attention_activation The nonlinearity used on the transformed inputs before multiplying with the trained weights of the attention layer. This can be specified as a Keras layer, a tf.keras.activations.* -function, or a string understood by `tf.keras.layers.Activation()`. +function, or a string understood by tf.keras.layers.Activation(). Defaults to "leaky_relu", which in turn defaults to a negative slope -of `alpha=0.2`. +of alpha=0.2.
-`heads_merge_type` +heads_merge_type The merge operation for combining output from -all `num_heads` attention heads. By default, output of heads will be +all num_heads attention heads. By default, output of heads will be concatenated. However, GAT paper (Velickovic et al, Eq 6) recommends *only for output layer* to do mean across attention heads, which is achievable -by setting to `"mean"`. +by setting to "mean".
-`activation` +activation The nonlinearity applied to the final result of attention, @@ -203,17 +200,17 @@ specified in the same ways as attention_activation.
-`kernel_initializer` +kernel_initializer -Can be set to a `kernel_initializer` as understood -by `tf.keras.layers.Dense` etc. -An `Initializer` object gets cloned before use to ensure a fresh seed, -if not set explicitly. For more, see `tfgnn.keras.clone_initializer()`. +Can be set to a kernel_initializer as understood +by tf.keras.layers.Dense etc. +An Initializer object gets cloned before use to ensure a fresh seed, +if not set explicitly. For more, see tfgnn.keras.clone_initializer().
-`kernel_regularizer` +kernel_regularizer If given, will be used to regularize all layer kernels. @@ -222,56 +219,58 @@ If given, will be used to regularize all layer kernels.
+ - + + +
`receiver_tag` one of -`tfgnn.SOURCE`, `tfgnn.TARGET` or `tfgnn.CONTEXT`. The results are aggregated -for this graph piece. If set to `tfgnn.SOURCE` or `tfgnn.TARGET`, the layer can -be called for an edge set and will aggregate results at the specified endpoint -of the edges. If set to `tfgnn.CONTEXT`, the layer can be called for an edge set -or a node set and will aggregate results for context (per graph component). If -left unset for init, the tag must be passed at call time.
-`receiver_feature` The name of the -feature that is read from the receiver graph piece and passed as -convolve(receiver_input=...).
-`sender_node_feature` The name of the -feature that is read from the sender nodes, if any, and passed as -convolve(sender_node_input=...). NOTICE this must be `None` for use with -`receiver_tag=tfgnn.CONTEXT` on an edge set, or for pooling from edges without -sender node states.
-`sender_edge_feature` The name of the -feature that is read from the sender edges, if any, and passed as -convolve(sender_edge_input=...). NOTICE this must not be `None` for use with -`receiver_tag=tfgnn.CONTEXT` on an edge set.
-`extra_receiver_ops` A str-keyed -dictionary of Python callables that are wrapped to bind some arguments and then -passed on to `convolve()`. Sample usage: `extra_receiver_ops={"softmax": -tfgnn.softmax}`. The values passed in this dict must be callable as follows, -with two positional arguments: +
receiver_tag one of +tfgnn.SOURCE, tfgnn.TARGET or +tfgnn.CONTEXT. The results are aggregated for this graph piece. If +set to tfgnn.SOURCE or tfgnn.TARGET, the layer can be +called for an edge set and will aggregate results at the specified endpoint of +the edges. If set to tfgnn.CONTEXT, the layer can be called for an +edge set or a node set and will aggregate results for context (per graph +component). If left unset for init, the tag must be passed at call time.
receiver_feature The name of the feature that is read from the receiver graph piece and +passed as convolve(receiver_input=...).
+sender_node_feature The +name of the feature that is read from the sender nodes, if any, and passed as +convolve(sender_node_input=...). NOTICE this must be None for use +with receiver_tag=tfgnn.CONTEXT on an edge set, or for pooling from +edges without sender node states.
+sender_edge_feature The +name of the feature that is read from the sender edges, if any, and passed as +convolve(sender_edge_input=...). NOTICE this must not be None for +use with receiver_tag=tfgnn.CONTEXT on an edge set.
extra_receiver_ops A +str-keyed dictionary of Python callables that are wrapped to bind some arguments +and then passed on to convolve(). Sample usage: +extra_receiver_ops={"softmax": tfgnn.softmax}. The values passed in +this dict must be callable as follows, with two positional arguments: ```python f(graph, receiver_tag, node_set_name=..., feature_value=..., ...) f(graph, receiver_tag, edge_set_name=..., feature_value=..., ...) ``` -The wrapped callables seen by `convolve()` can be called like +The wrapped callables seen by convolve() can be called like ```python wrapped_f(feature_value, ...) ``` -The first three arguments of `f` are set to the input GraphTensor of -the layer and the tag/name pair required by `tfgnn.broadcast()` and -`tfgnn.pool()` to move values between the receiver and the messages that +The first three arguments of f are set to the input GraphTensor of +the layer and the tag/name pair required by tfgnn.broadcast() and +tfgnn.pool() to move values between the receiver and the messages that are computed inside the convolution. The sole positional argument of -`wrapped_f()` is passed to `f()` as `feature_value=`, and any keyword +wrapped_f() is passed to f() as feature_value=, and any keyword arguments are forwarded.
-`**kwargs` +**kwargs Forwarded to the base class tf.keras.layers.Layer. @@ -280,30 +279,31 @@ Forwarded to the base class tf.keras.layers.Layer.
+
-`takes_receiver_input` +takes_receiver_input -If `False`, all calls to convolve() will get `receiver_input=None`. +If False, all calls to convolve() will get receiver_input=None.
-`takes_sender_edge_input` +takes_sender_edge_input -If `False`, all calls to convolve() will get `sender_edge_input=None`. +If False, all calls to convolve() will get sender_edge_input=None.
-`takes_sender_node_input` +takes_sender_node_input -If `False`, all calls to convolve() will get `sender_node_input=None`. +If False, all calls to convolve() will get sender_node_input=None.
@@ -312,7 +312,7 @@ If `False`, all calls to convolve() will get `sender_node_input=None`.

convolve

-View +View source