General Utilities#
General Utilities Module#
This module provides general utility functions for influencer games. It includes functions for matrix operations, learning rate calculations, resource parameter setups, agent position setups, and statistical computations. These utilities are used across various components of the influencer games framework.
Dependencies:#
NumPy
PyTorch
Matplotlib
Usage:#
The matrix_builder function is used to build or append rows to a matrix, while the learning_rate function computes learning rates based on iteration and type. The agent_position_setup function initializes agent positions in different domains, and the discrete_mean function computes the mean of a discrete distribution.
Example:#
from InflGame.utils.general import matrix_builder, learning_rate, discrete_mean
import torch
import numpy as np
# Build a matrix incrementally
row1 = torch.tensor([1.0, 2.0, 3.0])
matrix = matrix_builder(row_id=0, row=row1)
# Calculate learning rate with cosine annealing
lr = learning_rate(
iter=10,
learning_rate_type='cosine_annealing',
learning_rate=[0.0001, 0.01, 100]
)
# Compute discrete mean
bin_points = torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9])
resources = torch.tensor([1.0, 2.0, 3.0, 2.0, 1.0])
mean = discrete_mean(bin_points, resources)
Functions
- InflGame.utils.general.agent_optimal_position_setup(num_agents, agents_pos, infl_type, mean, domain_type, ids)#
Sets up optimal agent/player positions based on influence type and domain.
This function computes optimal positions for agents given the influence function type and domain constraints. Some agents can retain their current positions while others are optimized.
Example:
import numpy as np
from InflGame.utils.general import agent_optimal_position_setup

current_pos = np.array([0.2, 0.5, 0.8])
optimal_pos = agent_optimal_position_setup(
    num_agents=3,
    agents_pos=current_pos,
    infl_type='gaussian',
    mean=0.5,
    domain_type='1d',
    ids=[0]  # Keep first agent fixed
)
- Parameters:
num_agents (int) – Number of agents.
agents_pos (np.ndarray) – Current positions of agents.
infl_type (str) – Influence type (‘gaussian’, ‘dirichlet’, etc.).
mean (float) – Mean position for non-specified agents.
domain_type (str) – Domain type (‘1d’, ‘2d’, or ‘simplex’).
ids (List[int]) – List of agent IDs to retain their positions.
- Returns:
Optimal agent/player positions.
- Return type:
np.ndarray
- InflGame.utils.general.agent_parameter_setup(num_agents, infl_type, setup_type, reach=None, reach_start=0.01, reach_end=0.99, reach_num_points=100)#
Sets up agent parameters based on the specified setup type.
- Parameters:
num_agents (int) – Number of agents.
infl_type (str) – Influence type (‘gaussian’, ‘dirichlet’, etc.).
setup_type (str) – Setup type (‘initial_symmetric_setup’ or ‘parameter_space’).
reach (float, optional) – Reach value for symmetric setup. Defaults to None.
reach_start (float) – Start value for reach in parameter space.
reach_end (float) – End value for reach in parameter space.
reach_num_points (int) – Number of points for reach in parameter space.
- Returns:
Agent parameters.
- Return type:
np.ndarray or torch.Tensor
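Example (a usage sketch based on the documented signature and defaults; the shape of the returned parameter array depends on the influence type and is not assumed here):
from InflGame.utils.general import agent_parameter_setup
# Symmetric setup: every agent receives the same reach value
params = agent_parameter_setup(
    num_agents=3,
    infl_type='gaussian',
    setup_type='initial_symmetric_setup',
    reach=0.2
)
# Parameter-space setup: sweep reach over a grid of values
param_space = agent_parameter_setup(
    num_agents=3,
    infl_type='gaussian',
    setup_type='parameter_space',
    reach_start=0.01,
    reach_end=0.99,
    reach_num_points=100
)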
- InflGame.utils.general.agent_position_setup(num_agents, setup_type, domain_type, domain_bounds, dimensions=None, bound_lower=0.1, bound_upper=0.9)#
Sets up agent/player positions based on the specified domain and setup type.
This function initializes agent positions within the specified domain bounds. It supports various domain types including 1D line segments, 2D rectangles, and simplex domains with barycentric coordinates.
Domain Types:
'1d': Positions agents along a line segment
'2d': Positions agents in a rectangular domain
'simplex': Positions agents in a simplex with barycentric coordinates
Setup Types:
'initial_symmetric_setup': Distributes agents symmetrically
'paper_default': Uses default positions from published work
Example:
import numpy as np
from InflGame.utils.general import agent_position_setup

# Set up 3 agents in a 1D domain
positions = agent_position_setup(
    num_agents=3,
    setup_type='initial_symmetric_setup',
    domain_type='1d',
    domain_bounds=np.array([0, 1])
)
- Parameters:
num_agents (int) – Number of agents.
setup_type (str) – Setup type (‘initial_symmetric_setup’ or ‘paper_default’).
domain_type (str) – Domain type (‘1d’, ‘2d’, or ‘simplex’).
domain_bounds (np.ndarray) – Bounds of the domain.
dimensions (int, optional) – Number of dimensions for simplex. Defaults to None.
bound_lower (float) – Lower bound for positions. Defaults to 0.1.
bound_upper (float) – Upper bound for positions. Defaults to 0.9.
- Returns:
Agent/player positions as tensor.
- Return type:
Union[np.ndarray, torch.Tensor]
- InflGame.utils.general.discrete_covariance(bin_points_1, bin_points_2, resource_distribution, mean_1, mean_2)#
Computes the covariance of a discrete 2D distribution.
\[\text{Cov}(b_1, b_2) = \frac{\sum_{b \in \mathbb{B}} b_1 \cdot b_2 \cdot B(b)}{\sum_{b \in \mathbb{B}} B(b)} - \mu_1 \cdot \mu_2\]
where:
\(b_1\) and \(b_2\) are the bin points from two distributions.
\(\mathbb{B}\) is the set of bin points.
\(B(b)\) is the resource value at the bin point \(b\).
\(\mu_1\) and \(\mu_2\) are the means of the two distributions.
Example:
import torch
from InflGame.utils.general import discrete_covariance

bins_x = torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9])
bins_y = torch.tensor([0.2, 0.4, 0.5, 0.6, 0.8])
resources = torch.tensor([1.0, 2.0, 3.0, 2.0, 1.0])
cov = discrete_covariance(bins_x, bins_y, resources, 0.5, 0.5)
- Parameters:
bin_points_1 (Union[np.ndarray, torch.Tensor]) – First set of bin points.
bin_points_2 (Union[np.ndarray, torch.Tensor]) – Second set of bin points.
resource_distribution (Union[np.ndarray, torch.Tensor]) – Resource distribution.
mean_1 (float) – Mean of the first distribution.
mean_2 (float) – Mean of the second distribution.
- Returns:
Covariance of the distribution.
- Return type:
torch.Tensor
- InflGame.utils.general.discrete_mean(bin_points, resource_distribution)#
Computes the mean of a discrete distribution using torch operations.
\[\mu = \frac{\sum_{b\in \mathbb{B}} b \cdot B(b)}{\sum_{b\in\mathbb{B}} B(b)}\]
where:
\(b\) is the bin point.
\(\mathbb{B}\) is the set of bin points.
\(B(b)\) is the resource value at the bin point \(b\).
- Parameters:
bin_points (Union[np.ndarray, torch.Tensor]) – Bin points.
resource_distribution (Union[np.ndarray, torch.Tensor]) – Resource distribution.
- Returns:
Mean of the distribution.
- Return type:
torch.Tensor
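Example (mirrors the module-level usage above; the arithmetic follows the formula directly):
import torch
from InflGame.utils.general import discrete_mean

bin_points = torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9])
resources = torch.tensor([1.0, 2.0, 3.0, 2.0, 1.0])
mean = discrete_mean(bin_points, resources)
# By the formula: (0.1*1 + 0.3*2 + 0.5*3 + 0.7*2 + 0.9*1) / (1 + 2 + 3 + 2 + 1) = 4.5 / 9 = 0.5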
- InflGame.utils.general.discrete_variance(bin_points, resource_distribution, mean)#
Computes the variance of a discrete distribution.
\[\sigma^2 = \frac{\sum_{b \in \mathbb{B}} b^2 \cdot B(b)}{\sum_{b \in \mathbb{B}} B(b)} - \mu^2\]
where:
\(b\) is the bin point.
\(\mathbb{B}\) is the set of bin points.
\(B(b)\) is the resource value at the bin point \(b\).
\(\mu\) is the mean of the distribution.
Example:
import torch
from InflGame.utils.general import discrete_mean, discrete_variance

bins = torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9])
resources = torch.tensor([1.0, 2.0, 3.0, 2.0, 1.0])
mean = discrete_mean(bins, resources)
variance = discrete_variance(bins, resources, mean)
- Parameters:
bin_points (Union[np.ndarray, torch.Tensor]) – Bin points.
resource_distribution (Union[np.ndarray, torch.Tensor]) – Resource distribution.
mean (float) – Mean of the distribution.
- Returns:
Variance of the distribution.
- Return type:
torch.Tensor
- InflGame.utils.general.figure_directory(fig_parameters, alt_name)#
Creates a directory structure for saving figures.
This function generates a hierarchical directory structure based on the provided figure parameters, ensuring the necessary folders exist for organizing saved visualizations.
Example:
from InflGame.utils.general import figure_directory

fig_params = ['section_A', 'bifurcation', 3]
dir_path = figure_directory(fig_params, alt_name=False)
- Parameters:
fig_parameters (List) – Parameters for the figure (section, type, number of players).
alt_name (bool) – Whether to use an alternative naming scheme.
- Returns:
Path to the final directory.
- Return type:
str
- InflGame.utils.general.figure_final_name(fig_parameters, name_ads, save_types)#
Generates final file paths for figures.
This function combines directory paths and filenames to create complete file paths for saving figures.
Example:
from InflGame.utils.general import figure_final_name

fig_params = ['section_A', 'equilibrium_bifurcation', 3]
paths = figure_final_name(
    fig_params,
    name_ads=['run_1'],
    save_types=['.png', '.svg']
)
- Parameters:
fig_parameters (List) – Parameters for the figure.
name_ads (List[str]) – Additional names to append.
save_types (List[str]) – File extensions for saving.
- Returns:
List of full file paths for the figures.
- Return type:
List[str]
- InflGame.utils.general.figure_name(fig_parameters, name_ads, save_types)#
Generates figure names based on parameters and save types.
This function creates descriptive filenames for saved figures based on the figure type and optional additional naming components.
Example:
from InflGame.utils.general import figure_name

fig_params = ['section_A', 'equilibrium_bifurcation', 3]
names = figure_name(
    fig_params,
    name_ads=['alpha_0.5'],
    save_types=['.png', '.svg']
)
- Parameters:
fig_parameters (List) – Parameters for the figure.
name_ads (List[str]) – Additional names to append.
save_types (List[str]) – File extensions for saving.
- Returns:
List of figure names with extensions.
- Return type:
List[str]
- InflGame.utils.general.flatten_list(xss)#
Flattens a list of lists into a single list.
This function takes a nested list structure and returns a single-level list containing all elements from the sublists in order.
Example:
from InflGame.utils.general import flatten_list

nested = [[1, 2], [3, 4], [5]]
result = flatten_list(nested)
# Returns: [1, 2, 3, 4, 5]
- Parameters:
xss (list) – A list containing sublists.
- Returns:
A single flattened list containing all elements from the sublists.
- Return type:
list
- InflGame.utils.general.generate_color_palette(num_colors, color_scheme='default')#
Generate a list of colors for a given number of items.
This function creates a color palette suitable for distinguishing multiple agents or data series in visualizations.
Example:
import matplotlib.pyplot as plt
from InflGame.utils.general import generate_color_palette

# Generate 5 colors from the bright scheme
palette = generate_color_palette(5, 'bright')

# Use in plotting
for i, color in enumerate(palette):
    plt.plot(data[i], color=color, label=f'Agent {i}')
- Parameters:
num_colors (int) – Number of colors to generate.
color_scheme (str) – Color scheme to use.
- Returns:
List of color codes.
- Return type:
List[str]
- Raises:
ValueError – If num_colors is not positive.
- InflGame.utils.general.get_color_by_index(index, color_scheme='default')#
Return a color based on an integer index.
This function provides consistent color mapping for visualization purposes. Colors cycle through the selected scheme if the index exceeds available colors.
Available Color Schemes:
'default': Standard color palette
'matplotlib': Matplotlib tab10 colors
'bright': High-contrast bright colors
'pastel': Soft pastel colors
'colormap': Viridis colormap
'Greys': Grayscale colors
Example:
from InflGame.utils.general import get_color_by_index

# Get the first color in the default scheme
color = get_color_by_index(0, 'default')

# Get colors for multiple agents
colors = [get_color_by_index(i, 'bright') for i in range(3)]
- Parameters:
index (int) – Integer index to determine color.
color_scheme (str) – Color scheme to use.
- Returns:
Hex color code or matplotlib color name.
- Return type:
str
- Raises:
ValueError – If color_scheme is not supported.
- InflGame.utils.general.learning_rate(iter, learning_rate_type, learning_rate, gradient=None)#
Computes the learning rate for the current iteration based on the specified learning rate type.
Learning Rate Types#
Cosine Annealing ('cosine_annealing'): Smoothly decreases the learning rate using a cosine function.
Fixed ('fixed'): Keeps the learning rate constant throughout.
Trust Region ('trust_region'): Adapts the learning rate based on the trust region radius with exponential decay.
The learning rate is computed based on the specified type:
Cosine Annealing:
\[\eta_t = \eta_{\text{min}} + \frac{1}{2} (\eta_{\text{max}} - \eta_{\text{min}}) \left(1 + \cos\left(\frac{\pi \cdot t}{T}\right)\right)\]
where \(\eta_t\) is the learning rate at iteration \(t\), \(\eta_{\text{min}}\) is the minimum learning rate, \(\eta_{\text{max}}\) is the maximum learning rate, and \(T\) is the total number of iterations.
Fixed: The learning rate remains constant.
\[\eta_t = \eta_{\text{fixed}}\]
Trust Region: The learning rate adapts based on the trust region radius.
\[\eta_t = \eta_{\text{initial}} \cdot \max\left(\eta_{\text{min\_factor}}, \exp\left(-\frac{t}{\tau}\right)\right)\]
where \(\eta_{\text{initial}}\) is the initial learning rate, \(\eta_{\text{min\_factor}}\) is the minimum learning rate factor, \(\tau\) is the decay time constant, and \(t\) is the current iteration.
- Parameters:
iter (int) – The current iteration.
learning_rate_type (str) – The type of learning rate (‘cosine_annealing’, ‘fixed’, or ‘trust_region’).
learning_rate (list, np.ndarray, or float) – Learning rate parameters. For 'cosine_annealing': [min_lr, max_lr, total_iterations] (as in the module-level example). For 'trust_region': [initial_lr, min_factor, decay_constant].
- Returns:
The computed learning rate.
- Return type:
float
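Example (a usage sketch based on the signature and parameter conventions above; passing a bare float for the 'fixed' schedule is an assumption, matching the float option in the parameter type):
from InflGame.utils.general import learning_rate

# Cosine annealing: parameters are [min_lr, max_lr, total_iterations]
lr_cos = learning_rate(iter=10, learning_rate_type='cosine_annealing', learning_rate=[0.0001, 0.01, 100])

# Fixed: the learning rate stays constant (assumed to accept a single float)
lr_fixed = learning_rate(iter=10, learning_rate_type='fixed', learning_rate=0.01)

# Trust region: parameters are [initial_lr, min_factor, decay_constant]
lr_tr = learning_rate(iter=10, learning_rate_type='trust_region', learning_rate=[0.01, 0.1, 50])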
- InflGame.utils.general.matrix_builder(row_id, row, matrix=None)#
Builds or appends rows to a matrix.
This function is used to construct a matrix by adding rows iteratively. It supports three cases:
1. If the matrix is empty (matrix=None), the function initializes the matrix with the given row.
2. If the matrix has one row, the function stacks the new row vertically to create a two-row matrix.
3. If the matrix already has multiple rows, the function appends the new row to the existing matrix.
Behavior:
- The function ensures that the dimensions of the new row match the existing matrix.
- The new row is reshaped and concatenated to the matrix in a way that preserves the matrix's structure.
Examples:
import torch
from InflGame.utils.general import matrix_builder

# Example 1: Initialize a matrix with the first row
row_1 = torch.tensor([1, 2, 3])
matrix = matrix_builder(row_id=0, row=row_1)
print(matrix)
# Output: tensor([1, 2, 3])

# Example 2: Add a second row to the matrix
row_2 = torch.tensor([4, 5, 6])
matrix = matrix_builder(row_id=1, row=row_2, matrix=matrix)
print(matrix)
# Output:
# tensor([[1, 2, 3],
#         [4, 5, 6]])

# Example 3: Append a third row to the matrix
row_3 = torch.tensor([7, 8, 9])
matrix = matrix_builder(row_id=2, row=row_3, matrix=matrix)
print(matrix)
# Output:
# tensor([[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]])
Edge Cases:
- If the row dimensions do not match the existing matrix, the function will raise an error.
- If the matrix is None, the function initializes it with the given row.
- Parameters:
row_id (int) – The index of the row being added.
row (torch.Tensor) – The row to be added.
matrix (torch.tensor, optional) – The existing matrix. Defaults to None.
- Returns:
The updated matrix with the new row added.
- Return type:
torch.Tensor
- InflGame.utils.general.organize_array(arr)#
Organizes an array by alternating elements from the start and end.
This function reorders the input array by alternating between elements from the beginning and end of the array, moving towards the center.
Example:
from InflGame.utils.general import organize_array

arr = [1, 2, 3, 4, 5]
result = organize_array(arr)
# Returns: [1, 5, 2, 4, 3]
- Parameters:
arr (list) – Input array.
- Returns:
Organized array with alternating elements.
- Return type:
list
- InflGame.utils.general.resource_parameter_setup(resource_distribution_type='multi_modal_gaussian_distribution_1D', varying_parameter_type='mean', fixed_parameters_lst=[[0.1, 0.1], [1, 1]], alpha_st=0, alpha_end=1, alpha_num_points=100)#
Sets up resource distribution parameters based on the specified type.
- Parameters:
resource_distribution_type (str) – Type of resource distribution.
varying_parameter_type (str) – Parameter to vary (‘mean’ or others).
fixed_parameters_lst (list) – Fixed parameters for the distribution.
alpha_st (float) – Start value for alpha.
alpha_end (float) – End value for alpha.
alpha_num_points (int) – Number of alpha points.
- Returns:
A tuple containing the parameter list and alpha values.
- Return type:
tuple
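Example (a minimal sketch that simply passes the documented defaults with a smaller alpha grid; the structure of the returned parameter list depends on the distribution type and is not assumed here):
from InflGame.utils.general import resource_parameter_setup

# Vary the mean of the 1D multi-modal Gaussian distribution over 50 alpha values in [0, 1]
parameter_lst, alphas = resource_parameter_setup(
    resource_distribution_type='multi_modal_gaussian_distribution_1D',
    varying_parameter_type='mean',
    fixed_parameters_lst=[[0.1, 0.1], [1, 1]],
    alpha_st=0,
    alpha_end=1,
    alpha_num_points=50
)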
- InflGame.utils.general.smoothing_zeros(tensor, fill_value=None, inplace=False)#
Optimized function to smooth zeros at the beginning and end of a 1D tensor.
Fills leading zeros with the first non-zero value and trailing zeros with the last non-zero value. This is useful for cleaning up time series data or trajectory data with missing values at the boundaries.
Edge Cases Handled:
Empty tensor: returns empty tensor
All-zero tensor: fills with fill_value or returns unchanged
Single non-zero element: fills entire tensor with that value
No leading/trailing zeros: returns original tensor
Single element tensor: returns unchanged
Examples:
import torch
from InflGame.utils.general import smoothing_zeros

# Basic smoothing
result = smoothing_zeros(torch.tensor([0, 3, 2, 0]))
# Returns: tensor([3, 3, 2, 2])

# All-zero tensor with fill value
result = smoothing_zeros(torch.tensor([0, 0, 0, 0]), fill_value=1.0)
# Returns: tensor([1., 1., 1., 1.])
- Parameters:
tensor (torch.Tensor) – Input 1D tensor to smooth.
fill_value (Optional[float]) – Value to use if tensor is all zeros. If None, returns original tensor unchanged.
inplace (bool) – If True, modifies the tensor in place. Default False.
- Returns:
Smoothed tensor.
- Return type:
torch.Tensor
- Raises:
TypeError – If tensor is not a torch.Tensor.
ValueError – If tensor is not 1D.
- InflGame.utils.general.smoothing_zeros_batch(tensor_batch, fill_value=None, inplace=False)#
Batch version of smoothing_zeros for processing multiple 1D tensors efficiently.
This function applies zero smoothing to multiple tensors simultaneously, which is more efficient than processing them individually. It’s particularly useful for processing batches of agent trajectories or time series data.
Example:
import torch
from InflGame.utils.general import smoothing_zeros_batch

# Batch of 3 trajectories
batch = torch.tensor([
    [0, 1, 2, 0],
    [0, 0, 3, 0],
    [1, 2, 3, 4]
])
result = smoothing_zeros_batch(batch)
- Parameters:
tensor_batch (torch.Tensor) – 2D tensor where each row is a 1D tensor to smooth.
fill_value (Optional[float]) – Value to use for all-zero tensors.
inplace (bool) – If True, modifies tensors in place.
- Returns:
Batch of smoothed tensors.
- Return type:
torch.Tensor
- Raises:
TypeError – If tensor_batch is not a torch.Tensor.
- InflGame.utils.general.split_favor_bottom(num_agents, division)#
Splits a given number of agents into groups, favoring the bottom group in terms of size.
This function recursively divides the agents into smaller groups, ensuring that the bottom group (or the first group in the resulting list) has more agents when the total number of agents cannot be evenly divided. The division process continues until the specified number of divisions is reached.
Behavior:
- If division is 0, the function returns a single group containing all agents.
- If the number of agents is 1, the function returns a single group with one agent.
- If the number of agents is even, the agents are split evenly between the bottom and top groups.
- If the number of agents is odd, the bottom group gets one more agent than the top group.
Examples:
- For num_agents=7 and division=2, the function will split the agents into groups like [4, 3].
- For num_agents=8 and division=3, the function will recursively split into smaller groups like [2, 2, 2, 2].
Recursive Logic:
The function uses recursion to divide the agents into smaller groups. At each step, the bottom group is determined first, and the remaining agents are split further into smaller groups.
Edge Cases:
- If division=0, the function returns a single group containing all agents.
- If num_agents=1, the function returns [1].
- If num_agents=2 and division=1, the function returns [1, 1].
- Parameters:
num_agents (int) – Total number of agents.
division (int) – Number of divisions.
- Returns:
List of group sizes.
- Return type:
list
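Example (a usage sketch; the expected outputs follow the worked examples and edge cases listed above):
from InflGame.utils.general import split_favor_bottom

# 7 agents, 2 divisions: the bottom group gets the extra agent
groups = split_favor_bottom(num_agents=7, division=2)
# Expected (per the examples above): [4, 3]

# division=0 returns a single group containing all agents
groups = split_favor_bottom(num_agents=5, division=0)
# Expected: [5]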
- InflGame.utils.general.trust_region_learning_rate(iter, initial_lr, min_factor, decay_constant)#
Compute trust region learning rate with exponential decay.
This function implements a trust region-style learning rate that starts at an initial value and decays exponentially over time, with a minimum bound to prevent the learning rate from becoming too small.
The learning rate is computed as:
\[\eta_t = \eta_{\text{initial}} \cdot \max\left(\eta_{\text{min\_factor}}, \exp\left(-\frac{t}{\tau}\right)\right)\]
- Parameters:
iter (int) – The current iteration.
initial_lr (float) – Initial learning rate.
min_factor (float) – Minimum learning rate factor (prevents learning rate from going too small).
decay_constant (float) – Decay time constant (controls how fast the learning rate decays).
- Returns:
The computed trust region learning rate.
- Return type:
float
- Raises:
ValueError – If parameters are invalid (negative values, etc.).
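Example (a minimal sketch assuming the signature above; the expected value follows directly from the decay formula):
import math
from InflGame.utils.general import trust_region_learning_rate

lr = trust_region_learning_rate(iter=10, initial_lr=0.01, min_factor=0.1, decay_constant=50)
# Per the formula: 0.01 * max(0.1, exp(-10/50)) = 0.01 * exp(-0.2) ≈ 0.0082
print(lr, 0.01 * math.exp(-10 / 50))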