Azure OpenAI Terraform Module and Samples
Requirements:

Name | Version |
---|---|
terraform | >= 1.3.0 |
azurerm | ~> 3.80 |
modtm | >= 0.1.8, < 1.0 |
random | >= 3.0 |
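The version constraints above can be pinned in a `required_providers` block. A minimal sketch; the provider source addresses are assumptions not stated in this document and should be checked against the module's own `terraform` block:

```hcl
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm" # assumed source address
      version = "~> 3.80"
    }
    modtm = {
      source  = "azure/modtm" # assumed source address
      version = ">= 0.1.8, < 1.0"
    }
    random = {
      source  = "hashicorp/random" # assumed source address
      version = ">= 3.0"
    }
  }
}
```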
Providers:

Name | Version |
---|---|
azurerm | ~> 3.80 |
modtm | >= 0.1.8, < 1.0 |
random | >= 3.0 |
No modules.
Resources:

Name | Type |
---|---|
azurerm_cognitive_account.this | resource |
azurerm_cognitive_deployment.this | resource |
azurerm_monitor_diagnostic_setting.setting | resource |
azurerm_private_dns_zone.dns_zone | resource |
azurerm_private_dns_zone_virtual_network_link.dns_zone_link | resource |
azurerm_private_endpoint.this | resource |
modtm_telemetry.this | resource |
random_integer.this | resource |
azurerm_private_dns_zone.dns_zone | data source |
azurerm_resource_group.pe_vnet_rg | data source |
azurerm_resource_group.this | data source |
azurerm_subnet.pe_subnet | data source |
azurerm_virtual_network.vnet | data source |
Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
account_name | Specifies the name of the Cognitive Service Account. Changing this forces a new resource to be created. Leaving this variable at its default uses a generated name with a random suffix. | string | "" | no |
application_name | Name of the application. A corresponding tag is created on the resources if var.default_tags_enabled is true. | string | "" | no |
custom_subdomain_name | The subdomain name used for token-based authentication. Changing this forces a new resource to be created. Leaving this variable at its default uses a generated name with a random suffix. | string | "" | no |
customer_managed_key | An object describing the customer-managed key: key_vault_key_id = (Required) The ID of the Key Vault Key which should be used to encrypt the data in this OpenAI Account. identity_client_id = (Optional) The Client ID of the User Assigned Identity that has access to the key. This property only needs to be specified when there are multiple identities attached to the OpenAI Account. | object({...}) | null | no |
default_tags_enabled | Determines whether default tags are applied to resources. If set to true, tags will be applied; if set to false, they will not. | bool | false | no |
deployment | A map of objects describing the model deployments to create: name = (Required) The name of the Cognitive Services Account Deployment. Changing this forces a new resource to be created. cognitive_account_id = (Required) The ID of the Cognitive Services Account. Changing this forces a new resource to be created. model.model_format = (Required) The format of the Cognitive Services Account Deployment model. Possible value is OpenAI. Changing this forces a new resource to be created. model.model_name = (Required) The name of the Cognitive Services Account Deployment model. Changing this forces a new resource to be created. model.model_version = (Required) The version of the Cognitive Services Account Deployment model. scale.scale_type = (Required) Deployment scale type. Possible value is Standard. Changing this forces a new resource to be created. rai_policy_name = (Optional) The name of the RAI policy. Changing this forces a new resource to be created. capacity = (Optional) Tokens-per-Minute (TPM), measured in thousands. Defaults to 1, which means a limit of 1000 tokens per minute. version_upgrade_option = (Optional) Deployment model version upgrade option. Possible values are OnceNewDefaultVersionAvailable, OnceCurrentVersionExpired, and NoAutoUpgrade. Defaults to OnceNewDefaultVersionAvailable. Changing this forces a new resource to be created. | map(object({...})) | {} | no |
diagnostic_setting | A map of objects that represent the configuration for a diagnostic setting: name = (Required) Specifies the name of the diagnostic setting. Changing this forces a new resource to be created. log_analytics_workspace_id = (Optional) Specifies the resource ID of an Azure Log Analytics workspace where diagnostics data should be sent. log_analytics_destination_type = (Optional) Possible values are AzureDiagnostics and Dedicated. When set to Dedicated, logs sent to a Log Analytics workspace go into resource-specific tables instead of the legacy AzureDiagnostics table. eventhub_name = (Optional) Specifies the name of the Event Hub where diagnostics data should be sent. eventhub_authorization_rule_id = (Optional) Specifies the resource ID of an Event Hub Namespace Authorization Rule used to send diagnostics data. storage_account_id = (Optional) Specifies the resource ID of an Azure storage account where diagnostics data should be sent. partner_solution_id = (Optional) The resource ID of the market partner solution where diagnostics data should be sent. audit_log_retention_policy, request_response_log_retention_policy, trace_log_retention_policy, and metric_retention_policy = (Optional) Retention policy blocks for the audit log, request/response log, trace log, and metric respectively, each with: enabled = (Optional) Specifies whether the retention policy is enabled. days = (Optional) Specifies the number of days to retain the data; if enabled is set to true, this value must be a positive number. | map(object({...})) | {} | no |
dynamic_throttling_enabled | Determines whether dynamic throttling is enabled. If set to true, dynamic throttling will be enabled; if set to false, it will not. | bool | null | no |
environment | Environment of the application. A corresponding tag is created on the resources if var.default_tags_enabled is true. | string | "" | no |
fqdns | List of FQDNs allowed for the Cognitive Account. | list(string) | null | no |
identity | An object describing the managed identity: type = (Required) The type of the Identity. Possible values are `SystemAssigned`, `UserAssigned`, and `SystemAssigned, UserAssigned`. identity_ids = (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this OpenAI Account. | object({...}) | null | no |
local_auth_enabled | Whether local authentication methods are enabled for the Cognitive Account. Defaults to true. | bool | true | no |
location | Azure OpenAI deployment region. Setting this variable to null uses the resource group's location. | string | n/a | yes |
network_acls | A set of objects describing network rules: default_action = (Required) The default action to use when no rules match from ip_rules / virtual_network_rules. Possible values are Allow and Deny. ip_rules = (Optional) One or more IP addresses, or CIDR blocks, which should be able to access the Cognitive Account. virtual_network_rules = (Optional) A set of objects with: subnet_id = (Required) The ID of a Subnet which should be able to access the OpenAI Account. ignore_missing_vnet_service_endpoint = (Optional) Whether to ignore a missing VNet service endpoint. Defaults to false. | set(object({...})) | null | no |
outbound_network_access_restricted | Whether outbound network access is restricted for the Cognitive Account. Defaults to false. | bool | false | no |
pe_subresource | A list of subresource names which the Private Endpoint is able to connect to. subresource_names corresponds to group_id. Possible values are detailed in the product documentation in the Subresources column. Changing this forces a new resource to be created. | list(string) | [...] | no |
private_dns_zone | An object that represents an existing Private DNS Zone you'd like to use. Leaving this variable at its default creates a new Private DNS Zone. name = (Required) The name of the Private DNS Zone. resource_group_name = (Optional) The name of the Resource Group where the Private DNS Zone exists. If not provided, the first Private DNS Zone in your subscription that matches name will be returned. | object({...}) | null | no |
private_endpoint | A map of objects that represent the configuration for a private endpoint: name = (Required) Specifies the name of the Private Endpoint. Changing this forces a new resource to be created. vnet_rg_name = (Required) Specifies the name of the Resource Group where the Private Endpoint's Virtual Network Subnet exists. Changing this forces a new resource to be created. vnet_name = (Required) Specifies the name of the Virtual Network where the Private Endpoint's Subnet exists. Changing this forces a new resource to be created. subnet_name = (Required) Specifies the name of the Subnet from which private IP addresses will be allocated for this Private Endpoint. Changing this forces a new resource to be created. dns_zone_virtual_network_link_name = (Optional) The name of the Private DNS Zone Virtual Network Link. Changing this forces a new resource to be created. Defaults to dns_zone_link. private_dns_entry_enabled = (Optional) Whether to create a private_dns_zone_group block for the Private Endpoint. Defaults to false. private_service_connection_name = (Optional) Specifies the name of the Private Service Connection. Changing this forces a new resource to be created. Defaults to privateserviceconnection. is_manual_connection = (Optional) Does the Private Endpoint require manual approval from the remote resource owner? Changing this forces a new resource to be created. Defaults to false. | map(object({...})) | {} | no |
public_network_access_enabled | Whether public network access is allowed for the Cognitive Account. Defaults to false. | bool | false | no |
resource_group_name | Name of the Azure resource group to use. The resource group must exist. | string | n/a | yes |
sku_name | Specifies the SKU Name for this Cognitive Service Account. Possible values are F0, F1, S0, S, S1, S2, S3, S4, S5, S6, P0, P1, P2, E0, and DC0. Defaults to S0. | string | "S0" | no |
tags | (Optional) A mapping of tags to assign to the resource. | map(string) | {} | no |
tracing_tags_enabled | Whether to enable tracing tags generated by BridgeCrew Yor. | bool | false | no |
tracing_tags_prefix | Default prefix for generated tracing tags. | string | "avm_" | no |
Outputs:

Name | Description |
---|---|
openai_endpoint | The endpoint used to connect to the Cognitive Service Account. |
openai_id | The ID of the Cognitive Service Account. |
openai_primary_key | The primary access key for the Cognitive Service Account. |
openai_secondary_key | The secondary access key for the Cognitive Service Account. |
openai_subdomain | The subdomain used to connect to the Cognitive Service Account. |
private_ip_addresses | A map dictionary of the private IP addresses for each private endpoint. |
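The outputs above can be surfaced from your root module. A minimal sketch, assuming the module is instantiated as `module.openai`:

```hcl
output "openai_endpoint" {
  value = module.openai.openai_endpoint
}

output "openai_primary_key" {
  value     = module.openai.openai_primary_key
  sensitive = true # access keys should be marked sensitive
}
```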
Before submitting a pull request, please make sure the following is done:
We provide a Docker image to run the pre-commit checks and tests for you: `mcr.microsoft.com/azterraform:latest`.
To run the pre-commit task, run the following command:

```shell
docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```
On Windows PowerShell:

```shell
docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```
The pre-commit task will:

- Run `terraform fmt -recursive` on your Terraform code.
- Run `terrafmt fmt -f` on markdown files and Go code files to ensure that the Terraform code embedded in these files is well formatted.
- Run `go mod tidy` and `go mod vendor` in the test folder to ensure that all the dependencies have been synced.
- Run `gofmt` on all Go code files.
- Run `gofumpt` on all Go code files.
- Run `terraform-docs` on the `README.md` file, then run `markdown-table-formatter` to format the markdown tables in `README.md`.
Then run the pr-check task to check whether the code meets the pipeline's requirements (we strongly recommend running the following command before you commit):

```shell
docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```
On Windows PowerShell:

```shell
docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```
To run the e2e-test, run the following command:

```shell
docker run --rm -v $(pwd):/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```
On Windows PowerShell:

```shell
docker run --rm -v ${pwd}:/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```
We're using BridgeCrew Yor and yorbox to help manage tags consistently across infrastructure as code (IaC) frameworks. In this module you might see tags like:
```hcl
resource "azurerm_resource_group" "rg" {
  location = "eastus"
  name     = random_pet.name
  tags = merge(var.tags, (/*<box>*/ (var.tracing_tags_enabled ? { for k, v in /*</box>*/ {
    avm_git_commit           = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
    avm_git_file             = "main.tf"
    avm_git_last_modified_at = "2023-05-05 08:57:54"
    avm_git_org              = "lonegunmanb"
    avm_git_repo             = "terraform-yor-tag-test-module"
    avm_yor_trace            = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
  } /*<box>*/ : replace(k, "avm_", var.tracing_tags_prefix) => v } : {}) /*</box>*/))
}
```
To enable tracing tags, set the variable to true:
```hcl
module "example" {
  source = "{module_source}"
  ...
  tracing_tags_enabled = true
}
```
`tracing_tags_enabled` defaults to `false`.
To customize the prefix for your tracing tags, set the `tracing_tags_prefix` variable in your Terraform configuration:
```hcl
module "example" {
  source = "{module_source}"
  ...
  tracing_tags_prefix = "custom_prefix_"
}
```
The actual applied tags would be:
```hcl
{
  custom_prefix_git_commit           = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
  custom_prefix_git_file             = "main.tf"
  custom_prefix_git_last_modified_at = "2023-05-05 08:57:54"
  custom_prefix_git_org              = "lonegunmanb"
  custom_prefix_git_repo             = "terraform-yor-tag-test-module"
  custom_prefix_yor_trace            = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
}
```
This module uses terraform-provider-modtm to collect telemetry data. This provider is designed to assist with tracking the usage of Terraform modules. It creates a custom `modtm_telemetry` resource that gathers and sends telemetry data to a specified endpoint. The aim is to provide visibility into the lifecycle of your Terraform modules - whether they are being created, updated, or deleted. This data can be invaluable for understanding the usage patterns of your modules, identifying popular modules, and recognizing those that are no longer in use.
The ModTM provider is designed with respect for data privacy and control. The only data collected and transmitted are the tags you define in the module's `modtm_telemetry` resource, a UUID that identifies the module instance, and the operation the module's caller is executing (Create/Update/Delete/Read). No other data from your Terraform modules or your environment is collected or transmitted.
One of the primary design principles of the ModTM provider is its non-blocking nature. The provider is designed so that any network disconnection or error during the telemetry data sending process will not cause a Terraform error or interrupt your Terraform operations. This makes the ModTM provider safe to use even in network-restricted or air-gapped environments.
If the telemetry data cannot be sent due to network issues, the failure will be logged, but it will not affect the Terraform operation in progress (it may delay your operations by no more than 5 seconds). This ensures that your Terraform operations run smoothly and without interruption, regardless of network conditions.
You can turn off telemetry collection by declaring the following `provider` block in your root module:
```hcl
provider "modtm" {
  enabled = false
}
```