A CLI tool that generates tf and tfstate files from existing infrastructure (reverse Terraform).
- Disclaimer: This is not an official Google product.
- Status: beta. Documentation still needs improvement, and some bugs remain.
- Created by: Waze SRE.
- Generate `tf` and `tfstate` files from existing infrastructure, for all supported objects of a resource.
- Remote state can be uploaded to a GCS bucket.
- Connect resources to each other with `terraform_remote_state` (local and bucket).
- Compatible with Terraform 0.12 syntax.
- Save `tf` files with a custom folder-tree pattern.
- Import by resource name and type.
Terraformer uses Terraform providers and is built to make adding new supported resources easy. To pick up resources with new fields, you only need to upgrade the Terraform providers.
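As an illustration of the `terraform_remote_state` connection mentioned above, a link between two generated modules could look like the following sketch (the data source name, backend type, and path here are hypothetical examples, not literal terraformer output):

```hcl
# Hypothetical sketch of a terraform_remote_state link between generated modules
data "terraform_remote_state" "networks" {
  backend = "local"

  config = {
    path = "../networks/terraform.tfstate"
  }
}

# A generated resource could then reference an output of that state, e.g.:
# network = data.terraform_remote_state.networks.outputs.name
```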
```
Import current State to terraform configuration from google cloud

Usage:
   import google [flags]
   import google [command]

Available Commands:
  list        List supported resources for google provider

Flags:
  -b, --bucket string         gs://terraform-state
  -c, --connect               (default true)
  -f, --filter strings        google_compute_firewall=id1:id2:id4
  -h, --help                  help for google
  -o, --path-output string    (default "generated")
  -p, --path-pattern string   {output}/{provider}/custom/{service}/ (default "{output}/{provider}/{service}/")
      --projects strings
  -r, --resources strings     firewalls,networks
  -s, --state string          local or bucket (default "local")
  -z, --zone string
```
Read-only permissions are sufficient for importing.
Filters are a way to choose which resources terraformer imports.
Ex:
terraformer import aws --resources=vpc,subnet --filter=aws_vpc=myvpcid --regions=eu-west-1
will import only the one specified VPC, yet not only the subnets of that VPC but all subnets from all VPCs, because the filter applies only to the aws_vpc resource type.
Filtering is based on the Terraform resource ID pattern, so the ID may differ from the value your cloud provider gives you. Check the import section of the Terraform documentation for your resource to find the valid ID pattern.
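To make the filter syntax concrete, a `--filter` value decomposes into a resource type and a colon-separated list of IDs. The sketch below is illustrative only, showing the shape of the value with standard shell expansion, not terraformer's internal parsing:

```shell
# Illustrative only: how a --filter value decomposes (not terraformer internals)
filter="aws_vpc=vpc_id1:vpc_id2:vpc_id3"
resource_type="${filter%%=*}"   # part before '=': the Terraform resource type
ids="${filter#*=}"              # part after '=': colon-separated resource IDs
echo "$ids" | tr ':' ' '        # vpc_id1 vpc_id2 vpc_id3
```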
From source:
- Run `git clone <terraformer repo>`
- Run `GO111MODULE=on go mod vendor`
- Run `go build -v`
- Run `terraform init` against an `init.tf` that installs the required plugin(s) for the correct platform, relative to the current directory. Example `init.tf` to install the Google Cloud plugin:
```hcl
provider "google" {}
```
Or, alternatively:
- Copy your Terraform provider plugin(s) to the folder `~/.terraform.d/plugins/{darwin,linux}_amd64/`, as appropriate.
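The `{darwin,linux}_amd64` part of that path depends on your OS; a small sketch of computing the matching directory (assuming an amd64 machine, as in the layout above):

```shell
# Sketch: compute the plugin directory Terraform scans on this machine
# (assumes an amd64 build, matching the {darwin,linux}_amd64 layout above)
os=$(uname | tr '[:upper:]' '[:lower:]')     # "darwin" or "linux"
plugin_dir="$HOME/.terraform.d/plugins/${os}_amd64"
echo "$plugin_dir"
```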
From Releases:
- Linux
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-linux-amd64
chmod +x terraformer-linux-amd64
sudo mv terraformer-linux-amd64 /usr/local/bin/terraformer
- MacOS
curl -LO https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-darwin-amd64
chmod +x terraformer-darwin-amd64
sudo mv terraformer-darwin-amd64 /usr/local/bin/terraformer
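The `$(curl ... | grep tag_name | cut -d '"' -f 4)` subshell in the download commands above extracts the latest release tag from the GitHub API's JSON response. On a sample line it behaves like this (the version string is an invented example, not a pinned release):

```shell
# How the tag-extraction pipeline behaves on a sample JSON line
# ("0.7.5" is an invented example version, not a real pinned release)
json_line='  "tag_name": "0.7.5",'
tag=$(echo "$json_line" | grep tag_name | cut -d '"' -f 4)
echo "$tag"
```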
Links to download Terraform providers:
- google cloud provider >2.0.0 - here
- aws provider >1.56.0 - here
- openstack provider >1.17.0 - here
- kubernetes provider >=1.4.0 - here
- github provider >=2.0.0 - here
- datadog provider >1.19.0 - here
Information on provider plugins: https://www.terraform.io/docs/configuration/providers.html
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --connect=true --zone=europe-west1-a --projects=aaa,fff
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --filter=google_compute_firewall=rule1:rule2:rule3 --zone=europe-west1-a --projects=aaa,fff
List of supported GCP services:
- addresses: google_compute_address
- autoscalers: google_compute_autoscaler
- backendBuckets: google_compute_backend_bucket
- backendServices: google_compute_backend_service
- bigQuery: google_bigquery_dataset, google_bigquery_table
- schedulerJobs: google_cloud_scheduler_job
- disks: google_compute_disk
- firewalls: google_compute_firewall
- forwardingRules: google_compute_forwarding_rule
- globalAddresses: google_compute_global_address
- globalForwardingRules: google_compute_global_forwarding_rule
- healthChecks: google_compute_health_check
- httpHealthChecks: google_compute_http_health_check
- httpsHealthChecks: google_compute_https_health_check
- images: google_compute_image
- instanceGroupManagers: google_compute_instance_group_manager
- instanceGroups: google_compute_instance_group
- instanceTemplates: google_compute_instance_template
- instances: google_compute_instance
- interconnectAttachments: google_compute_interconnect_attachment
- memoryStore: google_redis_instance
- networks: google_compute_network
- nodeGroups: google_compute_node_group
- nodeTemplates: google_compute_node_template
- regionAutoscalers: google_compute_region_autoscaler
- regionBackendServices: google_compute_region_backend_service
- regionDisks: google_compute_region_disk
- regionInstanceGroupManagers: google_compute_region_instance_group_manager
- routers: google_compute_router
- routes: google_compute_route
- securityPolicies: google_compute_security_policy
- sslPolicies: google_compute_ssl_policy
- subnetworks: google_compute_subnetwork
- targetHttpProxies: google_compute_target_http_proxy
- targetHttpsProxies: google_compute_target_https_proxy
- targetInstances: google_compute_target_instance
- targetPools: google_compute_target_pool
- targetSslProxies: google_compute_target_ssl_proxy
- targetTcpProxies: google_compute_target_tcp_proxy
- targetVpnGateways: google_compute_vpn_gateway
- urlMaps: google_compute_url_map
- vpnTunnels: google_compute_vpn_tunnel
- gke: google_container_cluster, google_container_node_pool
- pubsub: google_pubsub_subscription, google_pubsub_topic
- dataProc: google_dataproc_cluster
- cloudFunctions: google_cloudfunctions_function
- gcs: google_storage_bucket, google_storage_bucket_acl, google_storage_default_object_acl, google_storage_bucket_iam_binding, google_storage_bucket_iam_member, google_storage_bucket_iam_policy, google_storage_notification
- monitoring: google_monitoring_alert_policy, google_monitoring_group, google_monitoring_notification_channel, google_monitoring_uptime_check_config
- dns: google_dns_managed_zone, google_dns_record_set
- cloudsql: google_sql_database_instance, google_sql_database
- kms: google_kms_key_ring, google_kms_crypto_key
- project: google_project
Your tf and tfstate files are written by default to `generated/gcp/zone/service`.
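The output directory comes from the `--path-pattern` placeholders shown in the flags section. A rough sketch of how the default pattern expands (the substitution is done by terraformer internally; the service name below is just an example):

```shell
# Sketch of how the default --path-pattern placeholders expand
# (terraformer does this internally; "networks" is an example service)
pattern='{output}/{provider}/{service}/'
output=generated
provider=google
service=networks
path=$(printf '%s' "$pattern" \
  | sed -e "s/{output}/$output/" -e "s/{provider}/$provider/" -e "s/{service}/$service/")
echo "$path"
```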
Example:
terraformer import aws --resources=vpc,subnet --connect=true --regions=eu-west-1
terraformer import aws --resources=vpc,subnet --filter=aws_vpc=vpc_id1:vpc_id2:vpc_id3 --regions=eu-west-1
List of supported AWS services:
- elb: aws_elb
- alb: aws_lb, aws_lb_listener, aws_lb_listener_rule, aws_lb_listener_certificate, aws_lb_target_group, aws_lb_target_group_attachment
- auto_scaling: aws_autoscaling_group, aws_launch_configuration, aws_launch_template
- rds: aws_db_instance, aws_db_parameter_group, aws_db_subnet_group, aws_db_option_group, aws_db_event_subscription
- iam: aws_iam_role, aws_iam_role_policy, aws_iam_user, aws_iam_user_group_membership, aws_iam_user_policy, aws_iam_policy_attachment, aws_iam_policy, aws_iam_group, aws_iam_group_membership, aws_iam_group_policy
- igw: aws_internet_gateway
- nacl: aws_network_acl
- s3: aws_s3_bucket, aws_s3_bucket_policy
- sg: aws_security_group
- subnet: aws_subnet
- vpc: aws_vpc
- vpn_connection: aws_vpn_connection
- vpn_gateway: aws_vpn_gateway
- route53: aws_route53_zone, aws_route53_record
- acm: aws_acm_certificate
- elasticache: aws_elasticache_cluster, aws_elasticache_parameter_group, aws_elasticache_subnet_group, aws_elasticache_replication_group
Example:
terraformer import openstack --resources=compute,networking --regions=RegionOne
List of supported OpenStack services:
- compute: openstack_compute_instance_v2
- networking: openstack_networking_secgroup_v2, openstack_networking_secgroup_rule_v2
- blockstorage: openstack_blockstorage_volume_v1, openstack_blockstorage_volume_v2, openstack_blockstorage_volume_v3
Example:
terraformer import kubernetes --resources=deployments,services,storageclasses
terraformer import kubernetes --resources=deployments,services,storageclasses --filter=kubernetes_deployment=name1:name2:name3
All Kubernetes resources currently supported by the Kubernetes provider are supported by this module as well. Here is the list of resources supported by Kubernetes provider v1.4:
- clusterrolebinding: kubernetes_cluster_role_binding
- configmaps: kubernetes_config_map
- deployments: kubernetes_deployment
- horizontalpodautoscalers: kubernetes_horizontal_pod_autoscaler
- limitranges: kubernetes_limit_range
- namespaces: kubernetes_namespace
- persistentvolumes: kubernetes_persistent_volume
- persistentvolumeclaims: kubernetes_persistent_volume_claim
- pods: kubernetes_pod
- replicationcontrollers: kubernetes_replication_controller
- resourcequotas: kubernetes_resource_quota
- secrets: kubernetes_secret
- services: kubernetes_service
- serviceaccounts: kubernetes_service_account
- statefulsets: kubernetes_stateful_set
- storageclasses: kubernetes_storage_class
- The Terraform Kubernetes provider rejects resources with ":" in their names (it violates DNS-1123), even though the character is allowed for certain Kubernetes types, e.g. ClusterRoleBinding.
- Because Terraform's flatmap uses "." to detect keys when unflattening maps, some keys containing "." in their names are mistakenly treated as maps.
- Because the library assumes an empty string (not "0") as the empty value, there are issues with optional integer keys that are restricted to be positive.
Example:
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --token=YOUR_TOKEN // or GITHUB_TOKEN in env
./terraformer import github --organizations=YOUR_ORGANIZATION --resources=repositories --filter=github_repository=id1:id2:id4 --token=YOUR_TOKEN // or GITHUB_TOKEN in env
Only organization resources are supported. List of supported resources:
- repositories: github_repository, github_repository_webhook, github_branch_protection, github_repository_collaborator, github_repository_deploy_key
- teams: github_team, github_team_membership, github_team_repository
- members: github_membership
- organization_webhooks: github_organization_webhook
Notes:
- The GitHub API does not return webhook secrets. If a webhook has a secret, `terraform plan` reports a change (`configuration.#: "1" => "0"`) because the secret exists in the tfstate only.
Example:
./terraformer import datadog --resources=monitor --api-key=YOUR_DATADOG_API_KEY // or DATADOG_API_KEY in env --app-key=YOUR_DATADOG_APP_KEY // or DATADOG_APP_KEY in env
./terraformer import datadog --resources=monitor --filter=datadog_monitor=id1:id2:id4 --api-key=YOUR_DATADOG_API_KEY // or DATADOG_API_KEY in env --app-key=YOUR_DATADOG_APP_KEY // or DATADOG_APP_KEY in env
List of supported Datadog services:
- downtime: datadog_downtime
- monitor: datadog_monitor
- screenboard: datadog_screenboard
- synthetics: datadog_synthetics_test
- timeboard: datadog_timeboard
- user: datadog_user
If you have improvements or fixes, we would love to have your contributions. Please read CONTRIBUTING.md for more information on the process we would like contributors to follow.
Terraformer is built to make it easy to add new providers, and not only cloud providers.
Process for generating tf + tfstate files:
- Call the GCP/AWS/other API and get a list of resources.
- Iterate over the resources and take only their IDs (no field mapping needed!).
- Call the provider for read-only fields.
- Call the infrastructure and take tf + tfstate.
- Call the provider's refresh method and get all the data.
- Convert the refreshed data into a Go struct.
- Generate an HCL file: the `tf` files.
- Generate the `tfstate` files.
All resource mapping is done by the providers and Terraform; upgrades are needed only for the providers.
For GCP compute resources, use the generated code from `providers/gcp/gcp_compute_code_generator`.
To regenerate code:
go run providers/gcp/gcp_compute_code_generator/*.go