# Configuration Parameters

Detailed explanation of the parameters used in the YAML configuration file. The configuration is structured into different sections for ease of use.
## Data

- `networks`: (List of Strings) Specifies the network topologies to use during training.
  - Example: `["case300_ieee", "case30_ieee"]`
- `scenarios`: (List of Integers) Defines the number of scenarios to use for each network specified.
  - Example: `[8500, 4000]`
- `normalization`: (String) Normalization method for data.
  - Options:
    - `minmax`: Scales data between the minimum and maximum values.
    - `standard`: Standardizes data to have zero mean and unit variance.
    - `baseMVAnorm`: Divides data by a baseMVA value, which is the maximum active/reactive power across the network.
    - `identity`: Leaves data unchanged.
  - Example: `"baseMVAnorm"`
- `baseMVA`: (Integer) The base MVA value specified in the original MATPOWER casefile, needed for `baseMVAnorm` normalization.
  - Default: `100`
- `mask_type`: (String) Masking strategy.
  - Options:
    - `rnd`: Mask each feature with a given probability, which must be specified via `mask_ratio`.
    - `pf`: Power flow problem setup.
    - `opf`: Optimal power flow problem setup.
    - `none`: No masking.
  - Example: `"rnd"`
- `mask_value`: (Float) Value used to mask data during training.
  - Default: `0.0`
- `mask_ratio`: (Float) Probability of each feature being masked; needs to be specified only when `mask_type` is `rnd`.
  - Default: `0.5`
- `mask_dim`: (Integer) Number of features to mask.
  - Default: `6` (Pd, Qd, Pg, Qg, Vm, Va)
- `learn_mask`: (Boolean) Specifies whether the mask value is learnable.
  - Default: `False`
- `val_ratio`: (Float) Fraction of data used for validation.
  - Default: `0.1`
- `test_ratio`: (Float) Fraction of data used for testing.
  - Default: `0.1`
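Putting the parameters above together, a `data` section could look like the following sketch. The key names and values are taken from this page; the top-level section name `data` is an assumption based on how the sections are titled here:

```yaml
data:
  networks: ["case300_ieee", "case30_ieee"]
  scenarios: [8500, 4000]        # one entry per network above
  normalization: "baseMVAnorm"
  baseMVA: 100
  mask_type: "rnd"
  mask_value: 0.0
  mask_ratio: 0.5                # only used because mask_type is "rnd"
  mask_dim: 6
  learn_mask: False
  val_ratio: 0.1
  test_ratio: 0.1
```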
## Model

- `type`: (String) Specifies the type of model architecture.
  - Example: `"GPSconv"`
- `input_dim`: (Integer) Input dimensionality of the model.
  - Default: `9` (Pd, Qd, Pg, Qg, Vm, Va, PQ, PV, REF)
- `output_dim`: (Integer) Output dimensionality of the model.
  - Default: `6` (Pd, Qd, Pg, Qg, Vm, Va)
- `edge_dim`: (Integer) Dimensionality of edge features.
  - Default: `2` (G, B)
- `pe_dim`: (Integer) Dimensionality of the positional encoding.
  - Example: `20` (length of the random walk)
- `num_layers`: (Integer) Number of layers in the model.
  - Example: `6`
- `hidden_size`: (Integer) Size of hidden layers.
  - Example: `256`
- `attention_head`: (Integer) Number of attention heads in the model.
  - Example: `8`
- `dropout`: (Float) Model dropout probability.
  - Default: `0.0`
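As a sketch, a `model` section using the example and default values listed above might look like this (the top-level section name `model` is assumed):

```yaml
model:
  type: "GPSconv"
  input_dim: 9       # Pd, Qd, Pg, Qg, Vm, Va, PQ, PV, REF
  output_dim: 6      # Pd, Qd, Pg, Qg, Vm, Va
  edge_dim: 2        # G, B
  pe_dim: 20         # length of the random walk
  num_layers: 6
  hidden_size: 256
  attention_head: 8
  dropout: 0.0
```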
## Training

- `batch_size`: (Integer) Number of samples per training batch.
  - Example: `16`
- `epochs`: (Integer) Number of training epochs.
  - Example: `100`
- `losses`: (List of Strings) Specifies the loss functions to use during training.
  - Options:
    - `MSE`: Mean Squared Error.
    - `MaskedMSE`: Masked Mean Squared Error.
    - `SCE`: Scaled Cosine Error.
    - `PBE`: Power Balance Equation loss.
  - Example: `["MaskedMSE", "PBE"]`
- `loss_weights`: (List of Floats) Specifies the relative weights for each loss function.
  - Example: `[0.01, 0.99]`
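A `training` section built from the examples above could be sketched as follows (the section name `training` is assumed; note that `loss_weights` pairs up positionally with `losses`):

```yaml
training:
  batch_size: 16
  epochs: 100
  losses: ["MaskedMSE", "PBE"]
  loss_weights: [0.01, 0.99]   # one weight per loss, in the same order
```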
## Optimizer

- `learning_rate`: (Float) Learning rate for the optimizer.
  - Example: `0.0001`
- `beta1`: (Float) Beta1 parameter for the Adam optimizer.
  - Default: `0.9`
- `beta2`: (Float) Beta2 parameter for the Adam optimizer.
  - Default: `0.999`
- `lr_decay`: (Float) Learning rate decay factor.
- `lr_patience`: (Integer) Number of epochs to wait before applying learning rate decay.
  - Example: `3`
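An `optimizer` section matching these parameters might look like the sketch below. The section name `optimizer` and the `lr_decay` value are assumptions, since this page gives no default for `lr_decay`:

```yaml
optimizer:
  learning_rate: 0.0001
  beta1: 0.9
  beta2: 0.999
  lr_decay: 0.5     # assumed value; no default is documented on this page
  lr_patience: 3
```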
## Callbacks

- `patience`: (Integer) Number of epochs to wait before early stopping. A value of `-1` disables early stopping.
  - Default: `-1`
- `tol`: (Float) Tolerance for validation loss comparison during early stopping.
  - Default: `0`
## Verbose

- `verbose`: (Boolean) Provides detailed analysis after training.
  - Default: `False`
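Using the defaults above, the remaining sections could be sketched as follows (the section name `callbacks` and placing `verbose` as a top-level key are assumptions):

```yaml
callbacks:
  patience: -1   # -1 disables early stopping
  tol: 0
verbose: False
```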