A CLI tool that generates `tf` and `tfstate` files based on existing infrastructure (reverse Terraform).
- Disclaimer: This is not an official Google product
- Status: beta - we still need to improve documentation, squash some bugs, etc.
- Created by: Waze SRE
- Capabilities
- Installation
- Supported Providers
  - Major Cloud
  - Cloud
  - Infrastructure Software
  - Network
  - VCS
  - Monitoring & System Management
  - Community
- Contributing
- Developing
- Infrastructure
- Generate `tf` + `tfstate` files from existing infrastructure for all supported objects by resource.
- Remote state can be uploaded to a GCS bucket.
- Connect between resources with `terraform_remote_state` (local and bucket).
- Save `tf` files using a custom folder tree pattern.
- Import by resource name and type.
- Support Terraform 0.12 (for Terraform 0.11 use v0.7.9).
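As a sketch of how these capabilities combine, a hypothetical invocation (the bucket, project, and services below are placeholders, not defaults) could import resources, upload remote state to GCS, and write files under a custom tree:

```
# Hypothetical combined run; bucket, project, and services are placeholders
terraformer import google \
  --resources=networks,firewalls \
  --projects=my-project \
  --connect=true \
  --state=bucket --bucket=gs://terraform-state \
  --path-pattern="{output}/{provider}/custom/{service}/"
```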
Terraformer uses Terraform providers and is designed to easily support newly added resources. To upgrade resources with new fields, all you need to do is upgrade the relevant Terraform providers.
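For example, one hedged way to pick up newly added provider fields is to pin a newer provider version in your `init.tf` and re-run `terraform init` (the version constraint below is a placeholder, not a recommendation from this repo):

```
# Hypothetical provider upgrade: pin a newer google provider and reinstall plugins
cat > init.tf <<'EOF'
provider "google" {
  version = "~> 2.0"
}
EOF
terraform init
```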
```
Import current state to Terraform configuration from Google Cloud

Usage:
   import google [flags]
   import google [command]

Available Commands:
  list        List supported resources for google provider

Flags:
  -b, --bucket string         gs://terraform-state
  -c, --connect               (default true)
  -f, --filter strings        google_compute_firewall=id1:id2:id4
  -h, --help                  help for google
  -o, --path-output string    (default "generated")
  -p, --path-pattern string   {output}/{provider}/custom/{service}/ (default "{output}/{provider}/{service}/")
      --projects strings
  -z, --regions strings       europe-west1, (default [global])
  -r, --resources strings     firewalls,networks
  -s, --state string          local or bucket (default "local")

Use " import google [command] --help" for more information about a command.
```
Read-only permissions are sufficient: Terraformer only lists and reads existing resources, it never modifies them.
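As a hedged illustration on GCP (the service account name, project, and the sufficiency of `roles/viewer` are assumptions for this sketch, not requirements stated by this repo), granting a read-only role might look like:

```
# Hypothetical: grant a read-only role to the identity Terraformer runs as
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:terraformer@my-project.iam.gserviceaccount.com \
  --role=roles/viewer
```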
Filters are a way to choose which resources Terraformer imports. It's possible to filter resources by their identifiers or attributes. Multiple filtering values are separated by `:`. If an identifier itself contains this symbol, the value should be wrapped in `'`, e.g. `--filter=resource=id1:'project:dataset_id'`. Identifier-based filters are executed before Terraformer tries to refresh remote state.
Filtering is based on Terraform resource ID patterns. To find valid ID patterns for your resource, check the import part of the Terraform documentation.
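For instance, a hypothetical command (all IDs are placeholders) that mixes a plain ID with an ID containing `:` would be:

```
# Hypothetical: the second ID contains ':' and must be quoted so it is not split into two values
terraformer import google --resources=bigQuery --projects=my-project \
  --filter=google_bigquery_dataset=id1:'project:dataset_id'
```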
Example usage:
```
terraformer import aws --resources=vpc,subnet --filter=aws_vpc=myvpcid --regions=eu-west-1
```
This imports only the VPC with ID myvpcid. This form of filter helps when you need to select resources by their identifiers.
The plan command generates a planfile that contains all the resources set to be imported. By modifying the planfile before running the import command, you can rename or filter the resources you'd like to import.
The remaining subcommands and parameters are identical to those of the import command.
```
$ terraformer plan google --resources=networks,firewalls --projects=my-project --zone=europe-west1-d
(snip)

Saving planfile to generated/google/my-project/terraformer/plan.json
```
After reviewing/customizing the planfile, begin the import by running `import plan`.
```
$ terraformer import plan generated/google/my-project/terraformer/plan.json
```
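As a minimal sketch of that review step (assuming jq is installed; the planfile layout is whatever your Terraformer version writes), you might inspect and hand-edit the file before importing:

```
# Hypothetical review: inspect, hand-edit, then import the planfile
jq . generated/google/my-project/terraformer/plan.json | less
$EDITOR generated/google/my-project/terraformer/plan.json   # rename or drop resources here
terraformer import plan generated/google/my-project/terraformer/plan.json
```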
From source (a combined sketch follows these steps):
- Run `git clone <terraformer repo>`
- Run `GO111MODULE=on go mod vendor`
- Run `go build -v` for all providers, OR build with one provider: `go run build/main.go {google,aws,azure,kubernetes, etc.}`
- Run `terraform init` against an `init.tf` file to install the plugins required for your platform. For example, if you need plugins for the google provider, `init.tf` should contain:

```
provider "google" {}
```

Or alternatively:

- Copy your Terraform provider's plugin(s) to the folder `~/.terraform.d/plugins/{darwin,linux}_amd64/`, as appropriate.
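Putting the build-from-source steps together, a hypothetical single-provider build (using the project's public repository URL; adjust the provider name to your needs) might look like:

```
# Hypothetical end-to-end build for just the google provider
git clone https://github.com/GoogleCloudPlatform/terraformer.git
cd terraformer
GO111MODULE=on go mod vendor
go run build/main.go google
echo 'provider "google" {}' > init.tf
terraform init
```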
From Releases:

- Linux

```
export PROVIDER={all,google,aws,kubernetes}
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-linux-amd64
chmod +x terraformer-${PROVIDER}-linux-amd64
sudo mv terraformer-${PROVIDER}-linux-amd64 /usr/local/bin/terraformer
```

- MacOS

```
export PROVIDER={all,google,aws,kubernetes}
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64
chmod +x terraformer-${PROVIDER}-darwin-amd64
sudo mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer
```
If you want to use a package manager:

- Homebrew users can use `brew install terraformer`.
Links to download Terraform Providers:
- Major Cloud
- Cloud
- Infrastructure Software
  - Kubernetes provider >=1.9.0 - here
- Network
  - Cloudflare provider >1.16 - here
- VCS
  - GitHub provider >=2.2.1 - here
- Monitoring & System Management
- Community
  - Logz.io provider >=1.1.1 - here
Information on provider plugins: https://www.terraform.io/docs/configuration/providers.html
Example:
```
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --connect=true --regions=europe-west1,europe-west4 --projects=aaa,fff
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --filter=google_compute_firewall=rule1:rule2:rule3 --regions=europe-west1 --projects=aaa,fff
```
List of supported GCP services:
- `addresses`: `google_compute_address`
- `autoscalers`: `google_compute_autoscaler`
- `backendBuckets`: `google_compute_backend_bucket`
- `backendServices`: `google_compute_backend_service`
- `bigQuery`: `google_bigquery_dataset`, `google_bigquery_table`
- `cloudFunctions`: `google_cloudfunctions_function`
- `cloudsql`: `google_sql_database_instance`, `google_sql_database`
- `dataProc`: `google_dataproc_cluster`
- `disks`: `google_compute_disk`
- `dns`: `google_dns_managed_zone`, `google_dns_record_set`
- `firewalls`: `google_compute_firewall`
- `forwardingRules`: `google_compute_forwarding_rule`
- `gcs`: `google_storage_bucket`, `google_storage_bucket_acl`, `google_storage_default_object_acl`, `google_storage_bucket_iam_binding`, `google_storage_bucket_iam_member`, `google_storage_bucket_iam_policy`, `google_storage_notification`
- `gke`: `google_container_cluster`, `google_container_node_pool`
- `globalAddresses`: `google_compute_global_address`
- `globalForwardingRules`: `google_compute_global_forwarding_rule`
- `healthChecks`: `google_compute_health_check`
- `httpHealthChecks`: `google_compute_http_health_check`
- `httpsHealthChecks`: `google_compute_https_health_check`
- `images`: `google_compute_image`
- `instanceGroupManagers`: `google_compute_instance_group_manager`
- `instanceGroups`: `google_compute_instance_group`
- `instanceTemplates`: `google_compute_instance_template`
- `instances`: `google_compute_instance`
- `interconnectAttachments`: `google_compute_interconnect_attachment`
- `kms`: `google_kms_key_ring`, `google_kms_crypto_key`
- `logging`: `google_logging_metric`
- `memoryStore`: `google_redis_instance`
- `monitoring`: `google_monitoring_alert_policy`, `google_monitoring_group`, `google_monitoring_notification_channel`, `google_monitoring_uptime_check_config`
- `networks`: `google_compute_network`
- `nodeGroups`: `google_compute_node_group`
- `nodeTemplates`: `google_compute_node_template`
- `project`: `google_project`
- `pubsub`: `google_pubsub_subscription`, `google_pubsub_topic`
- `regionAutoscalers`: `google_compute_region_autoscaler`
- `regionBackendServices`: `google_compute_region_backend_service`
- `regionDisks`: `google_compute_region_disk`
- `regionInstanceGroupManagers`: `google_compute_region_instance_group_manager`
- `routers`: `google_compute_router`
- `routes`: `google_compute_route`
- `schedulerJobs`: `google_cloud_scheduler_job`
- `securityPolicies`: `google_compute_security_policy`
- `sslPolicies`: `google_compute_ssl_policy`
- `subnetworks`: `google_compute_subnetwork`
- `targetHttpProxies`: `google_compute_target_http_proxy`
- `targetHttpsProxies`: `google_compute_target_https_proxy`
- `targetInstances`: `google_compute_target_instance`
- `targetPools`: `google_compute_target_pool`
- `targetSslProxies`: `google_compute_target_ssl_proxy`
- `targetTcpProxies`: `google_compute_target_tcp_proxy`
- `targetVpnGateways`: `google_compute_vpn_gateway`
- `urlMaps`: `google_compute_url_map`
- `vpnTunnels`: `google_compute_vpn_tunnel`
Your `tf` and `tfstate` files are written by default to `generated/gcp/zone/service`.
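If you want a different layout, the `--path-pattern` flag shown in the help output above applies; a hypothetical variant (the `my-env` segment is a placeholder) could be:

```
# Hypothetical custom output tree using the documented {output}/{provider}/{service} placeholders
terraformer import google --resources=networks --projects=my-project \
  --path-pattern="{output}/{provider}/my-env/{service}/"
```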
Example:
```
terraformer import aws --resources=vpc,subnet --connect=true --regions=eu-west-1 --profile=prod
terraformer import aws --resources=vpc,subnet --filter=aws_vpc=vpc_id1:vpc_id2:vpc_id3 --regions=eu-west-1
```
To load profiles from the shared AWS configuration file (typically ~/.aws/config), set AWS_SDK_LOAD_CONFIG to true:

```
AWS_SDK_LOAD_CONFIG=true terraformer import aws --resources=vpc,subnet --regions=eu-west-1 --profile=prod
```
You can also provide no regions when importing resources:
```
terraformer import aws --resources=cloudfront --profile=prod
```
In that case, Terraformer does not know which region the resources are associated with and does not assume any region. That scenario is useful for global resources (e.g. CloudFront distributions or Route 53 records) and when the region is passed implicitly through environment variables or the metadata service.
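For the implicit-region case, a hypothetical run relying on the AWS SDK's standard environment variable (an assumption about your environment, not a Terraformer flag) might be:

```
# Hypothetical: the region comes from the SDK environment instead of --regions
AWS_REGION=eu-west-1 terraformer import aws --resources=vpc,subnet --profile=prod
```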
- `acm`: `aws_acm_certificate`
- `alb` (supports ALB and NLB): `aws_lb`, `aws_lb_listener`, `aws_lb_listener_rule`, `aws_lb_listener_certificate`, `aws_lb_target_group`, `aws_lb_target_group_attachment`
- `auto_scaling`: `aws_autoscaling_group`, `aws_launch_configuration`, `aws_launch_template`
- `budgets`: `aws_budgets_budget`
- `cloudfront`: `aws_cloudfront_distribution`
- `cloudformation`: `aws_cloudformation_stack`, `aws_cloudformation_stack_set`, `aws_cloudformation_stack_set_instance`
- `cloudtrail`: `aws_cloudtrail`
- `dynamodb`: `aws_dynamodb_table`
- `ec2_instance`: `aws_instance`
- `eip`: `aws_eip`
- `elasticache`: `aws_elasticache_cluster`, `aws_elasticache_parameter_group`, `aws_elasticache_subnet_group`, `aws_elasticache_replication_group`
- `ebs`: `aws_ebs_volume`, `aws_volume_attachment`
- `ecs`: `aws_ecs_cluster`, `aws_ecs_service`, `aws_ecs_task_definition`
- `eks`: `aws_eks_cluster`
- `elb`: `aws_elb`
- `es`: `aws_elasticsearch_domain`
- `firehose`: `aws_kinesis_firehose_delivery_stream`
- `glue`: `aws_glue_crawler`
- `iam`: `aws_iam_role`, `aws_iam_role_policy`, `aws_iam_user`, `aws_iam_user_group_membership`, `aws_iam_user_policy`, `aws_iam_policy_attachment`, `aws_iam_policy`, `aws_iam_group`, `aws_iam_group_membership`, `aws_iam_group_policy`
- `igw`: `aws_internet_gateway`
- `kinesis`: `aws_kinesis_stream`
- `msk`: `aws_msk_cluster`
- `nat`: `aws_nat_gateway`
- `nacl`: `aws_network_acl`
- `organization`: `aws_organizations_account`, `aws_organizations_organization`, `aws_organizations_organizational_unit`, `aws_organizations_policy`, `aws_organizations_policy_attachment`
- `rds`: `aws_db_instance`, `aws_db_parameter_group`, `aws_db_subnet_group`, `aws_db_option_group`, `aws_db_event_subscription`
- `route53`: `aws_route53_zone`, `aws_route53_record`
- `route_table`: `aws_route_table`
- `s3`: `aws_s3_bucket`, `aws_s3_bucket_policy`
- `sg`: `aws_security_group`
- `sns`: `aws_sns_topic`, `aws_sns_topic_subscription`
- `sqs`: `aws_sqs_queue`
- `subnet`: `aws_subnet`
- `vpc`: `aws_vpc`
- `vpc_peering`: `aws_vpc_peering_connection`
- `vpn_connection`: `aws_vpn_connection`
- `vpn_gateway`: `aws_vpn_gateway`
Global AWS services are imported without a specified region, even if several regions are passed. This ensures that only one representation of each AWS resource is imported.
List of global AWS services:
budgets, cloudfront, iam, organization, route53
Attribute filters allow filtering across different resource types by their attributes.
```
terraformer import aws --resources=ec2_instance,ebs --filter=Name=tags.costCenter;Value=20000:'20001:1' --regions=eu-west-1
```
This imports only the AWS EC2 instances, along with EBS volumes, that are annotated with the tag `costCenter` having values `20000` or `20001:1`. Attribute filters apply to all resource types by default, although you can restrict a given filter to one resource type by providing a `Type=<type>` parameter. For example:
```
terraformer import aws --resources=ec2_instance,ebs --filter=Type=ec2_instance;Name=tags.costCenter;Value=20000:'20001:1' --regions=eu-west-1
```
This works the same as the example above, except that the filter applies only to `ec2_instance` resources.
Example:
```
export ARM_CLIENT_ID=[CLIENT_ID]
export ARM_CLIENT_SECRET=[CLIENT_SECRET]
export ARM_SUBSCRIPTION_ID=[SUBSCRIPTION_ID]
export ARM_TENANT_ID=[TENANT_ID]

export AZURE_CLIENT_ID=[CLIENT_ID]
export AZURE_CLIENT_SECRET=[CLIENT_SECRET]
export AZURE_TENANT_ID=[TENANT_ID]

./terraformer import azure -r resource_group
```
List of supported Azure resources:
- `disk`: `azurerm_managed_disk`
- `network_interface`: `azurerm_network_interface`
- `network_security_group`: `azurerm_network_security_group`
- `resource_group`: `azurerm_resource_group`
- `storage_account`: `azurerm_storage_account`
- `virtual_machine`: `azurerm_virtual_machine`
- `virtual_network`: `azurerm_virtual_network`
You can either edit your alicloud config directly (usually ~/.aliyun/config.json) or run `aliyun configure` and enter the credentials when prompted. Terraformer will pick up the profile name specified in the --profile parameter; it defaults to the first config in the config array.
```
terraformer import alicloud --resources=ecs --regions=ap-southeast-3 --profile=default
```

For all supported resources, you can do:

```
# https://unix.stackexchange.com/a/114948/203870
export ALL_SUPPORTED_ALICLOUD_RESOURCES=$(terraformer import alicloud list | sed -e 'H;1h;$!d;x;y/\n/,/')
terraformer import alicloud --resources=$ALL_SUPPORTED_ALICLOUD_RESOURCES --regions=ap-southeast-3
```

List of supported AliCloud resources:
- `dns`: `alicloud_dns`, `alicloud_dns_record`
- `ecs`: `alicloud_instance`
- `keypair`: `alicloud_key_pair`
- `nat`: `alicloud_nat_gateway`
- `pvtz`: `alicloud_pvtz_zone`, `alicloud_pvtz_zone_attachment`, `alicloud_pvtz_zone_record`
- `ram`: `alicloud_ram_role`, `alicloud_ram_role_policy_attachment`
- `rds`: `alicloud_db_instance`
- `sg`: `alicloud_security_group`, `alicloud_security_group_rule`
- `slb`: `alicloud_slb`, `alicloud_slb_server_group`
- `vpc`: `alicloud_vpc`
- `vswitch`: `alicloud_vswitch`
Example:
```
export DIGITALOCEAN_TOKEN=[DIGITALOCEAN_TOKEN]
./terraformer import digitalocean -r project,droplet
```
List of supported DigitalOcean resources:
- `cdn`: `digitalocean_cdn`
- `database_cluster`: `digitalocean_database_cluster`
- `domain`: `digitalocean_domain`
- `droplet`: `digitalocean_droplet`
- `droplet_snapshot`: `digitalocean_droplet_snapshot`
- `firewall`: `digitalocean_firewall`
- `floating_ip`: `digitalocean_floating_ip`
- `kubernetes_cluster`: `digitalocean_kubernetes_cluster`
- `loadbalancer`: `digitalocean_loadbalancer`
- `project`: `digitalocean_project`
- `ssh_key`: `digitalocean_ssh_key`
- `tag`: `digitalocean_tag`
- `volume`: `digitalocean_volume`
- `volume_snapshot`: `digitalocean_volume_snapshot`
Example:
```
export HEROKU_EMAIL=[HEROKU_EMAIL]
export HEROKU_API_KEY=[HEROKU_API_KEY]
./terraformer import heroku -r app,addon
```
List of supported Heroku resources:
- `account_feature`: `heroku_account_feature`
- `addon`: `heroku_addon`
- `addon_attachment`: `heroku_addon_attachment`
- `app`: `heroku_app`
- `app_config_association`: `heroku_app_config_association`
- `app_feature`: `heroku_app_feature`
- `app_webhook`: `heroku_app_webhook`
- `build`: `heroku_build`
- `domain`: `heroku_domain`
- `drain`: `heroku_drain`
- `formation`: `heroku_formation`
- `pipeline`: `heroku_pipeline`
- `pipeline_coupling`: `heroku_pipeline_coupling`
- `team_collaborator`: `heroku_team_collaborator`
- `team_member`: `heroku_team_member`
Example:
```
export LINODE_TOKEN=[LINODE_TOKEN]
./terraformer import linode -r instance
```
List of supported Linode resources:
- `domain`: `linode_domain`
- `image`: `linode_image`
- `instance`: `linode_instance`
- `nodebalancer`: `linode_nodebalancer`
- `sshkey`: `linode_sshkey`
- `token`: `linode_token`
- `volume`: `linode_volume`
Example:
```
terraformer import openstack --resources=compute,networking --regions=RegionOne
```
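The OpenStack provider reads the usual openrc-style environment; as an assumption about a typical setup (every value below is a placeholder), authentication might look like:

```
# Hypothetical: standard OpenStack auth environment sourced before the import
export OS_AUTH_URL=https://keystone.example.com:5000/v3
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_PROJECT_NAME=demo
terraformer import openstack --resources=compute --regions=RegionOne
```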
List of supported OpenStack services:
- `blockstorage`: `openstack_blockstorage_volume_v1`, `openstack_blockstorage_volume_v2`, `openstack_blockstorage_volume_v3`
- `compute`: `openstack_compute_instance_v2`
- `networking`: `openstack_networking_secgroup_v2`, `openstack_networking_secgroup_rule_v2`
Example:
```
terraformer import kubernetes --resources=deployments,services,storageclasses
terraformer import kubernetes --resources=deployments,services,storageclasses --filter=kubernetes_deployment=name1:name2:name3
```
All Kubernetes resources that are currently supported by the Kubernetes provider are also supported by this module. Here is the list of resources currently supported by Kubernetes provider v1.4:
- `clusterrolebinding`: `kubernetes_cluster_role_binding`
- `configmaps`: `kubernetes_config_map`
- `deployments`: `kubernetes_deployment`
- `horizontalpodautoscalers`: `kubernetes_horizontal_pod_autoscaler`
- `limitranges`: `kubernetes_limit_range`
- `namespaces`: `kubernetes_namespace`
- `persistentvolumes`: `kubernetes_persistent_volume`
- `persistentvolumeclaims`: `kubernetes_persistent_volume_claim`
- `pods`: `kubernetes_pod`
- `replicationcontrollers`: `kubernetes_replication_controller`
- `resourcequotas`: `kubernetes_resource_quota`
- `secrets`: `kubernetes_secret`
- `services`: `kubernetes_service`
- `serviceaccounts`: `kubernetes_service_account`
- `statefulsets`: `kubernetes_stateful_set`
- `storageclasses`: `kubernetes_storage_class`
- The Terraform Kubernetes provider rejects resources with ":" characters in their names (as they don't meet the DNS-1123 standard), even though Kubernetes allows them for certain types, e.g. ClusterRoleBinding.
- Because Terraform flatmap uses "." to detect the keys for unflattening maps, some keys with "." in their names are incorrectly treated as maps.
- Since the library assumes empty strings to be empty values (not "0"), there are some issues with optional integer keys that are restricted to be positive.
Example:
```
CLOUDFLARE_TOKEN=[CLOUDFLARE_API_TOKEN]
CLOUDFLARE_EMAIL=[CLOUDFLARE_EMAIL]
./terraformer import cloudflare --resources=firewall,dns
```
List of supported Cloudflare services:
- `access`: `cloudflare_access_application`
- `dns`: `cloudflare_zone`, `cloudflare_record`
- `firewall`: `cloudflare_access_rule`, `cloudflare_filter`, `cloudflare_firewall_rule`, `cloudflare_zone_lockdown`
Example:
```
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --token=YOUR_TOKEN # or GITHUB_TOKEN in env
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --filter=github_repository=id1:id2:id4 --token=YOUR_TOKEN
```
Supports only organizational resources. List of supported resources:
- `members`: `github_membership`
- `organization_webhooks`: `github_organization_webhook`
- `repositories`: `github_repository`, `github_repository_webhook`, `github_branch_protection`, `github_repository_collaborator`, `github_repository_deploy_key`
- `teams`: `github_team`, `github_team_membership`, `github_team_repository`
Notes:

- Terraformer can't get webhook secrets from the GitHub API. If you use a secret token in any of your webhooks, running `terraform plan` will result in a change being detected: `configuration.#: "1" => "0"` (in tfstate only).
Example:
```
./terraformer import datadog --resources=monitor --api-key=YOUR_DATADOG_API_KEY --app-key=YOUR_DATADOG_APP_KEY # or DATADOG_API_KEY and DATADOG_APP_KEY in env
./terraformer import datadog --resources=monitor --filter=datadog_monitor=id1:id2:id4 --api-key=YOUR_DATADOG_API_KEY --app-key=YOUR_DATADOG_APP_KEY
```
List of supported Datadog services:
- `dashboard`: `datadog_dashboard`
- `downtime`: `datadog_downtime`
- `monitor`: `datadog_monitor`
- `screenboard`: `datadog_screenboard`
- `synthetics`: `datadog_synthetics_test`
- `timeboard`: `datadog_timeboard`
- `user`: `datadog_user`
Example:
```
NEWRELIC_API_KEY=[API-KEY]
./terraformer import newrelic -r alert,dashboard,infra,synthetics
```
List of supported New Relic resources:
- `alert`: `newrelic_alert_channel`, `newrelic_alert_condition`, `newrelic_alert_policy`
- `dashboard`: `newrelic_dashboard`
- `infra`: `newrelic_infra_alert_condition`
- `synthetics`: `newrelic_synthetics_monitor`, `newrelic_synthetics_alert_condition`
Example:
```
LOGZIO_API_TOKEN=foobar LOGZIO_BASE_URL=https://api-eu.logz.io ./terraformer import logzio -r=alerts,alert_notification_endpoints # import Logz.io alerts and alert notification endpoints
```
List of supported Logz.io resources:
- `alerts`: `logzio_alert`
- `alert_notification_endpoints`: `logzio_endpoint`
If you have improvements or fixes, we would love to have your contributions. Please read CONTRIBUTING.md for more information on the process we would like contributors to follow.
Terraformer was built so you can easily add new providers of any kind.
Process for generating `tf` + `tfstate` files:

- Call the GCP/AWS/other API and get a list of resources.
- Iterate over the resources and take only their IDs (we don't need mapping fields!).
- Call the provider for read-only fields.
- Call the infrastructure and take `tf` + `tfstate`.
- Call the provider using the refresh method and get all data.
- Convert the refreshed data to a Go struct.
- Generate HCL files (`tf` files).
- Generate `tfstate` files.
All resource mapping is done by the providers and Terraform itself; upgrades are needed only for providers.
For GCP compute resources, use the generated code from `providers/gcp/gcp_compute_code_generator`.

To regenerate the code:

```
go run providers/gcp/gcp_compute_code_generator/*.go
```
- Simpler to add new providers and resources - already supports AWS, GCP, GitHub, Kubernetes, and OpenStack. Terraforming supports only AWS.
- Better support for HCL + tfstate, including updates for Terraform 0.12.
- If a provider adds new attributes to a resource, there is no need to change Terraformer code - just update the Terraform provider on your laptop.
- Automatically supports connections between resources in HCL files.
Terraforming gets all attributes from cloud APIs and creates HCL and tfstate files via templating. Each attribute in the API needs to be mapped to an attribute in Terraform, and files generated from templates can break with illegal syntax. When a provider adds new attributes, the Terraforming code needs to be updated. Terraformer instead uses the Terraform providers themselves for attribute mapping, the HCL library from HashiCorp, and Terraform code.
Compare S3 support in Terraforming with the official Terraform S3 resource: Terraforming lacks full coverage - as an example, you can see that roughly 70% of S3 options are not supported:
- terraforming - https://github.com/dtan4/terraforming/blob/master/lib/terraforming/template/tf/s3.erb
- official S3 support - https://www.terraform.io/docs/providers/aws/r/s3_bucket.html