Diffstat (limited to 'vendor/github.com/hashicorp')
-rw-r--r--  vendor/github.com/hashicorp/terraform/.travis.yml | 3
-rw-r--r--  vendor/github.com/hashicorp/terraform/CHANGELOG.md | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/Dockerfile | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/Vagrantfile | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/consul/backend_test.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/consul/client.go | 9
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend.go | 197
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend_test.go | 90
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client.go | 38
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client_test.go | 11
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go | 10
-rw-r--r--  vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend_test.go | 23
-rw-r--r--  vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner.go | 22
-rw-r--r--  vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner_test.go | 33
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/hook_ui_test.go | 19
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/import.go | 18
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/import_test.go | 82
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/init.go | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/init_test.go | 43
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/plugins.go | 12
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_meta.go | 20
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_mv.go | 67
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_mv_test.go | 233
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_push_test.go | 55
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_rm.go | 14
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_rm_test.go | 137
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/state_test.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/test-fixtures/import-provider-remote-state/main.tf | 12
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/test-fixtures/init-legacy-rc/main.tf | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/test-fixtures/inmem-backend/main.tf | 3
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/unlock_test.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/command/workspace_command_test.go | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/commands.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/config/loader_hcl.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/dag/walk.go | 16
-rw-r--r--  vendor/github.com/hashicorp/terraform/helper/schema/field_reader_diff.go | 40
-rw-r--r--  vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go | 16
-rw-r--r--  vendor/github.com/hashicorp/terraform/helper/schema/resource_data_test.go | 252
-rw-r--r--  vendor/github.com/hashicorp/terraform/helper/schema/set.go | 25
-rw-r--r--  vendor/github.com/hashicorp/terraform/helper/schema/set_test.go | 87
-rw-r--r--  vendor/github.com/hashicorp/terraform/plugin/client.go | 9
-rw-r--r--  vendor/github.com/hashicorp/terraform/plugin/discovery/find.go | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/plugin/discovery/get.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/plugin/discovery/get_test.go | 5
-rw-r--r--  vendor/github.com/hashicorp/terraform/plugins.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/scripts/docker-release/Dockerfile-release | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/scripts/docker-release/README.md | 92
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/scripts/docker-release/build.sh | 34
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/scripts/docker-release/hooks/build | 18
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/scripts/docker-release/push.sh | 20
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/scripts/docker-release/release.sh | 93
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/scripts/docker-release/tag.sh | 26
-rw-r--r--  vendor/github.com/hashicorp/terraform/state/remote/state.go | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/context_input_test.go | 54
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/context_validate_test.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/eval.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/eval_interpolate.go | 31
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/eval_state.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/graph.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/graph_builder_input.go | 3
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/node_data_refresh_test.go | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/node_module_variable.go | 23
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh_test.go | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/child/main.tf | 5
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/main.tf | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/transform_module_variable.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/upgrade_state_v1_test.go | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/terraform/version.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/tools/terraform-bundle/package.go | 19
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/LICENSE | 20
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/README.md | 74
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_unix.go | 12
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_windows.go | 13
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem.go | 247
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem_signal.go | 100
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/metrics.go | 115
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/sink.go | 52
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/start.go | 95
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsd.go | 154
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsite.go | 142
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any.go | 139
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.pb.go | 168
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.proto | 139
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/doc.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration.go | 102
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.pb.go | 146
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.proto | 117
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/regen.sh | 66
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp.go | 134
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.pb.go | 162
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto | 133
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/acl.go | 672
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/cache.go | 177
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/policy.go | 191
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/acl.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/agent.go | 28
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/api.go | 243
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/catalog.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/coordinate.go | 5
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/health.go | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/kv.go | 23
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/lock.go | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator.go | 152
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_area.go | 168
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_autopilot.go | 219
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_keyring.go | 83
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_raft.go | 86
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/semaphore.go | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/session.go | 9
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/operator.go | 57
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go | 257
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go | 40
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/structs.go | 1041
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/txn.go | 85
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/README.md | 31
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/io.go | 61
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/retry/retry.go | 197
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server.go | 489
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_methods.go | 256
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_wrapper.go | 65
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/wait.go | 62
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/README.md | 39
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/checks.go | 5
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/node_id.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/LICENSE | 21
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/README.md | 123
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/global.go | 34
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/int.go | 385
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/log.go | 138
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stacktrace.go | 108
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stdlog.go | 62
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/LICENSE | 25
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/0doc.go | 143
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/README.md | 174
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/binc.go | 786
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/decode.go | 1048
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/encode.go | 1001
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper.go | 589
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper_internal.go | 127
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack.go | 816
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack_test.py | 110
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/rpc.go | 152
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/simple.go | 461
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/time.go | 193
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/README.md | 49
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/client.go | 267
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_client.go | 83
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_server.go | 115
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/log_entry.go | 73
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/plugin.go | 31
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/protocol.go | 45
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_client.go | 47
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_server.go | 20
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/server.go | 128
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/testing.go | 52
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/2q.go | 212
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/LICENSE | 362
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/README.md | 25
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/arc.go | 257
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/lru.go | 114
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go | 160
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/LICENSE | 354
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/Makefile | 17
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/README.md | 89
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/api.go | 1007
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commands.go | 151
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commitment.go | 101
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/config.go | 258
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/configuration.go | 343
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/discard_snapshot.go | 49
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/file_snapshot.go | 494
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/fsm.go | 136
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/future.go | 289
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_snapshot.go | 106
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_store.go | 125
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_transport.go | 322
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log.go | 72
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log_cache.go | 79
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/membership.md | 83
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/net_transport.go | 622
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/observer.go | 115
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/peersjson.go | 46
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/raft.go | 1456
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/replication.go | 561
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/snapshot.go | 239
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/stable.go | 15
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/state.go | 167
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/tcp_transport.go | 105
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/transport.go | 124
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/util.go | 133
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/README.md | 10
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/autocomplete.go | 43
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go | 249
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command.go | 20
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command_mock.go | 21
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/help.go | 4
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/ui_mock.go | 59
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/LICENSE | 21
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/README.md | 52
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing.go | 84
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing_go19.go | 80
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/LICENSE | 23
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/README.md | 52
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/appveyor.yml | 32
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/errors.go | 269
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/stack.go | 186
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/LICENSE.txt | 21
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/args.go | 75
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/cmd.go | 128
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/bash.go | 32
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/install.go | 92
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/utils.go | 118
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/zsh.go | 39
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/command.go | 106
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/complete.go | 86
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/log.go | 23
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/file.go | 19
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/match.go | 6
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/prefix.go | 9
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/metalinter.json | 21
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict.go | 41
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_files.go | 108
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_set.go | 19
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/readme.md | 116
-rwxr-xr-x  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/test.sh | 12
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/utils.go | 46
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/ciphers.go | 641
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/client_conn_pool.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/configure_transport.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/databuffer.go | 146
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/errors.go | 13
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/fixed_buffer.go | 60
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/frame.go | 81
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go16.go | 27
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go18.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go19.go | 16
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/encode.go | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/hpack.go | 104
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/tables.go | 255
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/http2.go | 8
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go16.go | 25
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go18.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go19.go | 16
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/pipe.go | 18
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/server.go | 442
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/transport.go | 256
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/writesched_priority.go | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/LICENSE | 202
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go | 143
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/AUTHORS | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/CONTRIBUTING.md | 60
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/LICENSE | 230
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/PATENTS | 22
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/README.md | 28
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/backoff.go | 18
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/balancer.go | 55
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/call.go | 149
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/clientconn.go | 601
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codec.go | 104
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codes/codes.go | 37
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/connectivity/connectivity.go | 72
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials.go | 47
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go17.go | 38
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go18.go | 38
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_pre_go17.go | 37
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/doc.go | 20
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go16.go | 98
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go17.go | 98
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb.go | 737
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go | 629
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.proto | 164
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/grpclog.go | 123
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/logger.go | 106
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/loggerv2.go | 195
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go | 176
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.proto | 34
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/health.go | 70
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/interceptor.go | 43
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/internal/internal.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/keepalive/keepalive.go | 65
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/metadata/metadata.go | 140
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/dns_resolver.go | 292
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go17.go | 34
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go18.go | 28
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/naming.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/peer/peer.go | 38
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/proxy.go | 130
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/rpc_util.go | 367
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/server.go | 543
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/handlers.go | 42
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/stats.go | 59
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/status/status.go | 168
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stream.go | 230
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/tap/tap.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/trace.go | 35
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/bdp_estimator.go | 143
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/control.go | 119
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go16.go | 49
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go17.go | 52
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/handler_server.go | 104
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_client.go | 688
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_server.go | 602
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http_util.go | 214
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/log.go | 50
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/pre_go16.go | 51
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/transport.go | 330
-rw-r--r--  vendor/github.com/hashicorp/terraform/vendor/vendor.json | 262
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/backends/types/azure.html.md | 1
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/commands/import.html.md | 10
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/commands/init.html.markdown | 10
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/commands/state/mv.html.md | 29
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/commands/state/rm.html.md | 11
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/docs/plugins/provider.html.md | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/guides/running-terraform-in-automation.html.md | 7
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/guides/terraform-provider-development-program.html.md | 127
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/guides/writing-custom-terraform-providers.html.md | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/intro/examples/consul.html.markdown | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/intro/examples/count.markdown | 2
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/intro/examples/index.html.markdown | 18
-rw-r--r--  vendor/github.com/hashicorp/terraform/website/layouts/guides.erb | 3
321 files changed, 17687 insertions, 21918 deletions
diff --git a/vendor/github.com/hashicorp/terraform/.travis.yml b/vendor/github.com/hashicorp/terraform/.travis.yml
index b6623199..d37048d9 100644
--- a/vendor/github.com/hashicorp/terraform/.travis.yml
+++ b/vendor/github.com/hashicorp/terraform/.travis.yml
@@ -2,7 +2,8 @@ dist: trusty
sudo: false
language: go
go:
-- 1.8.1
+- 1.8.3
+- 1.9rc1
# add TF_CONSUL_TEST=1 to run consul tests
# they were causing timeouts in travis
diff --git a/vendor/github.com/hashicorp/terraform/CHANGELOG.md b/vendor/github.com/hashicorp/terraform/CHANGELOG.md
index ebde7d2a..09414a05 100644
--- a/vendor/github.com/hashicorp/terraform/CHANGELOG.md
+++ b/vendor/github.com/hashicorp/terraform/CHANGELOG.md
@@ -1,3 +1,32 @@
+## 0.10.2 (August 16, 2017)
+
+BUG FIXES:
+
+* tools/terraform-bundle: Add missing Ui to ProviderInstaller (fix crash) ([#15826](https://github.com/hashicorp/terraform/issues/15826))
+* go-plugin: Fix crash when the server emits non-key-value JSON ([go-plugin#43](https://github.com/hashicorp/go-plugin/pull/43))
+
+## 0.10.1 (August 15, 2017)
+
+BUG FIXES:
+
+* Fix `terraform state rm` and `mv` commands to work correctly with remote state backends ([#15652](https://github.com/hashicorp/terraform/issues/15652))
+* Fix errors when interpolations fail during input ([#15780](https://github.com/hashicorp/terraform/issues/15780))
+* Backoff retries in remote-execution provisioner ([#15772](https://github.com/hashicorp/terraform/issues/15772))
+* Load plugins from `~/.terraform.d/plugins/OS_ARCH/` and `.terraformrc` ([#15769](https://github.com/hashicorp/terraform/issues/15769))
+* The `import` command was ignoring the remote state configuration ([#15768](https://github.com/hashicorp/terraform/issues/15768))
+* Don't allow leading slashes in s3 bucket names for remote state ([#15738](https://github.com/hashicorp/terraform/issues/15738))
+
+IMPROVEMENTS:
+
+* helper/schema: Add `GetOkExists` schema function ([#15723](https://github.com/hashicorp/terraform/issues/15723))
+* helper/schema: Make 'id' a reserved field name ([#15695](https://github.com/hashicorp/terraform/issues/15695))
+* command/init: Display version + source when initializing plugins ([#15804](https://github.com/hashicorp/terraform/issues/15804))
+
+INTERNAL CHANGES:
+
+* DiffFieldReader.ReadField caches results to optimize deeply nested schemas ([#15663](https://github.com/hashicorp/terraform/issues/15663))
+
+
## 0.10.0 (August 2, 2017)
**This is the complete 0.9.11 to 0.10.0 CHANGELOG**
diff --git a/vendor/github.com/hashicorp/terraform/Dockerfile b/vendor/github.com/hashicorp/terraform/Dockerfile
index d435c9f9..3863822c 100644
--- a/vendor/github.com/hashicorp/terraform/Dockerfile
+++ b/vendor/github.com/hashicorp/terraform/Dockerfile
@@ -14,6 +14,7 @@ MAINTAINER "HashiCorp Terraform Team <terraform@hashicorp.com>"
RUN apk add --update git bash openssh
ENV TF_DEV=true
+ENV TF_RELEASE=1
WORKDIR $GOPATH/src/github.com/hashicorp/terraform
COPY . .
diff --git a/vendor/github.com/hashicorp/terraform/Vagrantfile b/vendor/github.com/hashicorp/terraform/Vagrantfile
index 36833186..69d72df1 100644
--- a/vendor/github.com/hashicorp/terraform/Vagrantfile
+++ b/vendor/github.com/hashicorp/terraform/Vagrantfile
@@ -5,7 +5,7 @@
VAGRANTFILE_API_VERSION = "2"
# Software version variables
-GOVERSION = "1.8.1"
+GOVERSION = "1.8.3"
UBUNTUVERSION = "16.04"
# CPU and RAM can be adjusted depending on your system
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/backend_test.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/backend_test.go
index b75d2525..7c4bf5ee 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/backend_test.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/backend_test.go
@@ -22,7 +22,7 @@ func newConsulTestServer(t *testing.T) *testutil.TestServer {
t.Skip()
}
- srv := testutil.NewTestServerConfig(t, func(c *testutil.TestServerConfig) {
+ srv, _ := testutil.NewTestServerConfigT(t, func(c *testutil.TestServerConfig) {
c.LogLevel = "warn"
if !testing.Verbose() {
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/client.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/client.go
index a0013cd2..fe14e2c0 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/client.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/consul/client.go
@@ -367,14 +367,7 @@ func (c *RemoteClient) createSession() (string, error) {
log.Println("[INFO] created consul lock session", id)
// keep the session renewed
- // we need an adapter to convert the session Done() channel to a
- // non-directional channel to satisfy the RenewPeriodic signature.
- done := make(chan struct{})
- go func() {
- <-ctx.Done()
- close(done)
- }()
- go session.RenewPeriodic(lockSessionTTL, id, nil, done)
+ go session.RenewPeriodic(lockSessionTTL, id, nil, ctx.Done())
return id, nil
}
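
The adapter goroutine removed above existed because `ctx.Done()` returns a receive-only channel (`<-chan struct{}`), which cannot be passed where a bidirectional `chan struct{}` parameter is expected; the vendored consul/api now takes the receive-only type, so `ctx.Done()` can be passed directly. A minimal sketch of the channel-direction issue (`renewOld` and `renewNew` are stand-ins, not the consul API):

```go
package main

import "context"

// renewOld mimics the old signature: a bidirectional channel parameter,
// so ctx.Done() cannot be passed without an adapter.
func renewOld(done chan struct{}) { <-done }

// renewNew mimics the updated signature: a receive-only parameter
// accepts ctx.Done() as-is.
func renewNew(done <-chan struct{}) { <-done }

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // already canceled, so the receives below return immediately

	// Old-style API: bridge ctx.Done() into a bidirectional channel.
	done := make(chan struct{})
	go func() {
		<-ctx.Done()
		close(done)
	}()
	renewOld(done)

	// Updated API: pass ctx.Done() directly, no adapter goroutine needed.
	renewNew(ctx.Done())
}
```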
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend.go
index effa1381..5eab8d0c 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend.go
@@ -2,40 +2,207 @@ package inmem
import (
"context"
+ "errors"
+ "fmt"
+ "sort"
+ "sync"
+ "time"
"github.com/hashicorp/terraform/backend"
- "github.com/hashicorp/terraform/backend/remote-state"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/state/remote"
+ "github.com/hashicorp/terraform/terraform"
)
+// we keep the states and locks in package-level variables, so that they can be
+// accessed from multiple instances of the backend. This better emulates
+// backend instances accessing a single remote data store.
+var (
+ states stateMap
+ locks lockMap
+)
+
+func init() {
+ Reset()
+}
+
+// Reset clears out all existing state and lock data.
+// This is used to initialize the package during init, as well as between
+// tests.
+func Reset() {
+ states = stateMap{
+ m: map[string]*remote.State{},
+ }
+
+ locks = lockMap{
+ m: map[string]*state.LockInfo{},
+ }
+}
+
// New creates a new backend for Inmem remote state.
func New() backend.Backend {
- return &remotestate.Backend{
- ConfigureFunc: configure,
-
- // Set the schema
- Backend: &schema.Backend{
- Schema: map[string]*schema.Schema{
- "lock_id": &schema.Schema{
- Type: schema.TypeString,
- Optional: true,
- Description: "initializes the state in a locked configuration",
- },
+ // Set the schema
+ s := &schema.Backend{
+ Schema: map[string]*schema.Schema{
+ "lock_id": &schema.Schema{
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "initializes the state in a locked configuration",
},
},
}
+ backend := &Backend{Backend: s}
+ backend.Backend.ConfigureFunc = backend.configure
+ return backend
+}
+
+type Backend struct {
+ *schema.Backend
}
-func configure(ctx context.Context) (remote.Client, error) {
+func (b *Backend) configure(ctx context.Context) error {
+ states.Lock()
+ defer states.Unlock()
+
+ defaultClient := &RemoteClient{
+ Name: backend.DefaultStateName,
+ }
+
+ states.m[backend.DefaultStateName] = &remote.State{
+ Client: defaultClient,
+ }
+
+ // set the default client lock info per the test config
data := schema.FromContextBackendConfig(ctx)
if v, ok := data.GetOk("lock_id"); ok && v.(string) != "" {
info := state.NewLockInfo()
info.ID = v.(string)
info.Operation = "test"
info.Info = "test config"
- return &RemoteClient{LockInfo: info}, nil
+
+ locks.lock(backend.DefaultStateName, info)
+ }
+
+ return nil
+}
+
+func (b *Backend) States() ([]string, error) {
+ states.Lock()
+ defer states.Unlock()
+
+ var workspaces []string
+
+ for s := range states.m {
+ workspaces = append(workspaces, s)
+ }
+
+ sort.Strings(workspaces)
+ return workspaces, nil
+}
+
+func (b *Backend) DeleteState(name string) error {
+ states.Lock()
+ defer states.Unlock()
+
+ if name == backend.DefaultStateName || name == "" {
+ return fmt.Errorf("can't delete default state")
+ }
+
+ delete(states.m, name)
+ return nil
+}
+
+func (b *Backend) State(name string) (state.State, error) {
+ states.Lock()
+ defer states.Unlock()
+
+ s := states.m[name]
+ if s == nil {
+ s = &remote.State{
+ Client: &RemoteClient{
+ Name: name,
+ },
+ }
+ states.m[name] = s
+
+ // to most closely replicate other implementations, we are going to
+ // take a lock and create a new state if it doesn't exist.
+ lockInfo := state.NewLockInfo()
+ lockInfo.Operation = "init"
+ lockID, err := s.Lock(lockInfo)
+ if err != nil {
+ return nil, fmt.Errorf("failed to lock inmem state: %s", err)
+ }
+ defer s.Unlock(lockID)
+
+ // If we have no state, we have to create an empty state
+ if v := s.State(); v == nil {
+ if err := s.WriteState(terraform.NewState()); err != nil {
+ return nil, err
+ }
+ if err := s.PersistState(); err != nil {
+ return nil, err
+ }
+ }
}
- return &RemoteClient{}, nil
+
+ return s, nil
+}
+
+type stateMap struct {
+ sync.Mutex
+ m map[string]*remote.State
+}
+
+// Global level locks for inmem backends.
+type lockMap struct {
+ sync.Mutex
+ m map[string]*state.LockInfo
+}
+
+func (l *lockMap) lock(name string, info *state.LockInfo) (string, error) {
+ l.Lock()
+ defer l.Unlock()
+
+ lockInfo := l.m[name]
+ if lockInfo != nil {
+ lockErr := &state.LockError{
+ Info: lockInfo,
+ }
+
+ lockErr.Err = errors.New("state locked")
+ // make a copy of the lock info to avoid any testing shenanigans
+ *lockErr.Info = *lockInfo
+ return "", lockErr
+ }
+
+ info.Created = time.Now().UTC()
+ l.m[name] = info
+
+ return info.ID, nil
+}
+
+func (l *lockMap) unlock(name, id string) error {
+ l.Lock()
+ defer l.Unlock()
+
+ lockInfo := l.m[name]
+
+ if lockInfo == nil {
+ return errors.New("state not locked")
+ }
+
+ lockErr := &state.LockError{
+ Info: &state.LockInfo{},
+ }
+
+ if id != lockInfo.ID {
+ lockErr.Err = errors.New("invalid lock id")
+ *lockErr.Info = *lockInfo
+ return lockErr
+ }
+
+ delete(l.m, name)
+ return nil
}
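
Because `states` and `locks` are package-level, every backend instance contends on the same lock map, which is what lets the tests below simulate two clients sharing one remote store. A hedged sketch of that contention behavior, written as a hypothetical test in package inmem (it assumes only the `locks` variable and `state.NewLockInfo` shown above):

```go
package inmem

import (
	"testing"

	"github.com/hashicorp/terraform/state"
)

// Hypothetical test sketch: two clients contend for the same named lock
// in the shared, package-level lockMap.
func TestLockContention_sketch(t *testing.T) {
	defer Reset()

	first := state.NewLockInfo()
	first.Operation = "apply"

	id, err := locks.lock("default", first)
	if err != nil {
		t.Fatal(err)
	}

	// A second lock on the same name must fail with a *state.LockError.
	second := state.NewLockInfo()
	if _, err := locks.lock("default", second); err == nil {
		t.Fatal("expected lock contention error")
	}

	// Releasing with the original ID frees the name for new holders.
	if err := locks.unlock("default", id); err != nil {
		t.Fatal(err)
	}
}
```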
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend_test.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend_test.go
new file mode 100644
index 00000000..005e66a1
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/backend_test.go
@@ -0,0 +1,90 @@
+package inmem
+
+import (
+ "testing"
+
+ "github.com/hashicorp/terraform/backend"
+ "github.com/hashicorp/terraform/state/remote"
+ "github.com/hashicorp/terraform/terraform"
+)
+
+func TestBackend_impl(t *testing.T) {
+ var _ backend.Backend = new(Backend)
+}
+
+func TestBackendConfig(t *testing.T) {
+ defer Reset()
+ testID := "test_lock_id"
+
+ config := map[string]interface{}{
+ "lock_id": testID,
+ }
+
+ b := backend.TestBackendConfig(t, New(), config).(*Backend)
+
+ s, err := b.State(backend.DefaultStateName)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ c := s.(*remote.State).Client.(*RemoteClient)
+ if c.Name != backend.DefaultStateName {
+ t.Fatal("client name is not configured")
+ }
+
+ if err := locks.unlock(backend.DefaultStateName, testID); err != nil {
+ t.Fatalf("default state should have been locked: %s", err)
+ }
+}
+
+func TestBackend(t *testing.T) {
+ defer Reset()
+ b := backend.TestBackendConfig(t, New(), nil).(*Backend)
+ backend.TestBackend(t, b, nil)
+}
+
+func TestBackendLocked(t *testing.T) {
+ defer Reset()
+ b1 := backend.TestBackendConfig(t, New(), nil).(*Backend)
+ b2 := backend.TestBackendConfig(t, New(), nil).(*Backend)
+
+ backend.TestBackend(t, b1, b2)
+}
+
+// use this backend to test the remote.State implementation
+func TestRemoteState(t *testing.T) {
+ defer Reset()
+ b := backend.TestBackendConfig(t, New(), nil)
+
+ workspace := "workspace"
+
+ // create a new workspace in this backend
+ s, err := b.State(workspace)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // force overwriting the remote state
+ newState := terraform.NewState()
+
+ if err := s.WriteState(newState); err != nil {
+ t.Fatal(err)
+ }
+
+ if err := s.PersistState(); err != nil {
+ t.Fatal(err)
+ }
+
+ if err := s.RefreshState(); err != nil {
+ t.Fatal(err)
+ }
+
+ savedState := s.State()
+ if savedState == nil {
+ t.Fatal("state was not saved")
+ }
+
+ if savedState.Lineage != newState.Lineage {
+ t.Fatal("saved state has incorrect lineage")
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client.go
index 703d4a26..51c8d725 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client.go
@@ -2,8 +2,6 @@ package inmem
import (
"crypto/md5"
- "errors"
- "time"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/state/remote"
@@ -13,8 +11,7 @@ import (
type RemoteClient struct {
Data []byte
MD5 []byte
-
- LockInfo *state.LockInfo
+ Name string
}
func (c *RemoteClient) Get() (*remote.Payload, error) {
@@ -43,37 +40,8 @@ func (c *RemoteClient) Delete() error {
}
func (c *RemoteClient) Lock(info *state.LockInfo) (string, error) {
- lockErr := &state.LockError{
- Info: &state.LockInfo{},
- }
-
- if c.LockInfo != nil {
- lockErr.Err = errors.New("state locked")
- // make a copy of the lock info to avoid any testing shenanigans
- *lockErr.Info = *c.LockInfo
- return "", lockErr
- }
-
- info.Created = time.Now().UTC()
- c.LockInfo = info
-
- return c.LockInfo.ID, nil
+ return locks.lock(c.Name, info)
}
-
func (c *RemoteClient) Unlock(id string) error {
- if c.LockInfo == nil {
- return errors.New("state not locked")
- }
-
- lockErr := &state.LockError{
- Info: &state.LockInfo{},
- }
- if id != c.LockInfo.ID {
- lockErr.Err = errors.New("invalid lock id")
- *lockErr.Info = *c.LockInfo
- return lockErr
- }
-
- c.LockInfo = nil
- return nil
+ return locks.unlock(c.Name, id)
}
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client_test.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client_test.go
index f3de5671..3a0fa8f3 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client_test.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/inmem/client_test.go
@@ -4,7 +4,6 @@ import (
"testing"
"github.com/hashicorp/terraform/backend"
- remotestate "github.com/hashicorp/terraform/backend/remote-state"
"github.com/hashicorp/terraform/state/remote"
)
@@ -14,11 +13,19 @@ func TestRemoteClient_impl(t *testing.T) {
}
func TestRemoteClient(t *testing.T) {
+ defer Reset()
b := backend.TestBackendConfig(t, New(), nil)
- remotestate.TestClient(t, b)
+
+ s, err := b.State(backend.DefaultStateName)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ remote.TestClient(t, s.(*remote.State).Client)
}
func TestInmemLocks(t *testing.T) {
+ defer Reset()
s, err := backend.TestBackendConfig(t, New(), nil).State(backend.DefaultStateName)
if err != nil {
t.Fatal(err)
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go
index 1a1e10ba..41cf037a 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend.go
@@ -2,6 +2,8 @@ package s3
import (
"context"
+ "fmt"
+ "strings"
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/s3"
@@ -25,6 +27,14 @@ func New() backend.Backend {
Type: schema.TypeString,
Required: true,
Description: "The path to the state file inside the bucket",
+ ValidateFunc: func(v interface{}, s string) ([]string, []error) {
+ // s3 will strip leading slashes from an object, so while this will
+ // technically be accepted by s3, it will break our workspace hierarchy.
+ if strings.HasPrefix(v.(string), "/") {
+ return nil, []error{fmt.Errorf("key must not start with '/'")}
+ }
+ return nil, nil
+ },
},
"region": {
diff --git a/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend_test.go b/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend_test.go
index c5a1f500..83af43e4 100644
--- a/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend_test.go
+++ b/vendor/github.com/hashicorp/terraform/backend/remote-state/s3/backend_test.go
@@ -11,6 +11,7 @@ import (
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/hashicorp/terraform/backend"
+ "github.com/hashicorp/terraform/config"
"github.com/hashicorp/terraform/state/remote"
"github.com/hashicorp/terraform/terraform"
)
@@ -65,6 +66,28 @@ func TestBackendConfig(t *testing.T) {
}
}
+func TestBackendConfig_invalidKey(t *testing.T) {
+ testACC(t)
+ cfg := map[string]interface{}{
+ "region": "us-west-1",
+ "bucket": "tf-test",
+ "key": "/leading-slash",
+ "encrypt": true,
+ "dynamodb_table": "dynamoTable",
+ }
+
+ rawCfg, err := config.NewRawConfig(cfg)
+ if err != nil {
+ t.Fatal(err)
+ }
+ resCfg := terraform.NewResourceConfig(rawCfg)
+
+ _, errs := New().Validate(resCfg)
+ if len(errs) != 1 {
+ t.Fatal("expected config validation error")
+ }
+}
+
func TestBackend(t *testing.T) {
testACC(t)
diff --git a/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner.go b/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner.go
index 7dd86daf..ba811daf 100644
--- a/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner.go
+++ b/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner.go
@@ -19,6 +19,10 @@ import (
"github.com/mitchellh/go-linereader"
)
+// maxBackoffDelay is the maximum delay between retry attempts
+var maxBackoffDelay = 10 * time.Second
+var initialBackoffDelay = time.Second
+
func Provisioner() terraform.ResourceProvisioner {
return &schema.Provisioner{
Schema: map[string]*schema.Schema{
@@ -246,7 +250,6 @@ func copyOutput(
}
// retryFunc is used to retry a function for a given duration
-// TODO: this should probably backoff too
func retryFunc(ctx context.Context, timeout time.Duration, f func() error) error {
// Build a new context with the timeout
ctx, done := context.WithTimeout(ctx, timeout)
@@ -263,12 +266,13 @@ func retryFunc(ctx context.Context, timeout time.Duration, f func() error) error
go func() {
defer close(doneCh)
+ delay := time.Duration(0)
for {
// If our context ended, we want to exit right away.
select {
case <-ctx.Done():
return
- default:
+ case <-time.After(delay):
}
// Try the function call
@@ -279,7 +283,19 @@ func retryFunc(ctx context.Context, timeout time.Duration, f func() error) error
return
}
- log.Printf("Retryable error: %v", err)
+ log.Printf("[WARN] retryable error: %v", err)
+
+ delay *= 2
+
+ if delay == 0 {
+ delay = initialBackoffDelay
+ }
+
+ if delay > maxBackoffDelay {
+ delay = maxBackoffDelay
+ }
+
+ log.Printf("[INFO] sleeping for %s", delay)
}
}()
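
Condensed, the retry loop above implements capped exponential backoff: wait zero before the first attempt, then double from the initial delay up to the maximum. A self-contained sketch of the same pattern (`retryWithBackoff` is illustrative, not the patched function):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries f until it succeeds or ctx expires, doubling
// the wait between attempts: 0, initial, 2*initial, ... capped at max.
func retryWithBackoff(ctx context.Context, initial, max time.Duration, f func() error) error {
	delay := time.Duration(0)
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}

		if err := f(); err == nil {
			return nil
		}

		delay *= 2
		if delay == 0 {
			delay = initial
		}
		if delay > max {
			delay = max
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	attempts := 0
	err := retryWithBackoff(ctx, 10*time.Millisecond, 100*time.Millisecond, func() error {
		attempts++
		return errors.New("still failing")
	})
	fmt.Println(attempts, err) // few attempts, then context.DeadlineExceeded
}
```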
diff --git a/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner_test.go b/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner_test.go
index 67faf1fe..8c447788 100644
--- a/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner_test.go
+++ b/vendor/github.com/hashicorp/terraform/builtin/provisioners/remote-exec/resource_provisioner_test.go
@@ -211,6 +211,16 @@ func TestResourceProvider_CollectScripts_scriptsEmpty(t *testing.T) {
}
func TestRetryFunc(t *testing.T) {
+ origMax := maxBackoffDelay
+ maxBackoffDelay = time.Second
+ origStart := initialBackoffDelay
+ initialBackoffDelay = 10 * time.Millisecond
+
+ defer func() {
+ maxBackoffDelay = origMax
+ initialBackoffDelay = origStart
+ }()
+
// succeed on the third try
errs := []error{io.EOF, &net.OpError{Err: errors.New("ERROR")}, nil}
count := 0
@@ -235,6 +245,29 @@ func TestRetryFunc(t *testing.T) {
}
}
+func TestRetryFuncBackoff(t *testing.T) {
+ origMax := maxBackoffDelay
+ maxBackoffDelay = time.Second
+ origStart := initialBackoffDelay
+ initialBackoffDelay = 100 * time.Millisecond
+
+ defer func() {
+ maxBackoffDelay = origMax
+ initialBackoffDelay = origStart
+ }()
+
+ count := 0
+
+ retryFunc(context.Background(), time.Second, func() error {
+ count++
+ return io.EOF
+ })
+
+ if count > 4 {
+ t.Fatalf("retry func failed to backoff. called %d times", count)
+ }
+}
+
func testConfig(t *testing.T, c map[string]interface{}) *terraform.ResourceConfig {
r, err := config.NewRawConfig(c)
if err != nil {
diff --git a/vendor/github.com/hashicorp/terraform/command/hook_ui_test.go b/vendor/github.com/hashicorp/terraform/command/hook_ui_test.go
index 2ffa40ab..db2171b1 100644
--- a/vendor/github.com/hashicorp/terraform/command/hook_ui_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/hook_ui_test.go
@@ -1,7 +1,6 @@
package command
import (
- "bytes"
"fmt"
"testing"
"time"
@@ -12,11 +11,7 @@ import (
)
func TestUiHookPreApply_periodicTimer(t *testing.T) {
- ui := &cli.MockUi{
- InputReader: bytes.NewReader([]byte{}),
- ErrorWriter: bytes.NewBuffer([]byte{}),
- OutputWriter: bytes.NewBuffer([]byte{}),
- }
+ ui := cli.NewMockUi()
h := &UiHook{
Colorize: &colorstring.Colorize{
Colors: colorstring.DefaultColors,
@@ -88,11 +83,7 @@ data.aws_availability_zones.available: Still destroying... (ID: 2017-03-05 10:56
}
func TestUiHookPreApply_destroy(t *testing.T) {
- ui := &cli.MockUi{
- InputReader: bytes.NewReader([]byte{}),
- ErrorWriter: bytes.NewBuffer([]byte{}),
- OutputWriter: bytes.NewBuffer([]byte{}),
- }
+ ui := cli.NewMockUi()
h := &UiHook{
Colorize: &colorstring.Colorize{
Colors: colorstring.DefaultColors,
@@ -152,11 +143,7 @@ func TestUiHookPreApply_destroy(t *testing.T) {
}
func TestUiHookPostApply_emptyState(t *testing.T) {
- ui := &cli.MockUi{
- InputReader: bytes.NewReader([]byte{}),
- ErrorWriter: bytes.NewBuffer([]byte{}),
- OutputWriter: bytes.NewBuffer([]byte{}),
- }
+ ui := cli.NewMockUi()
h := &UiHook{
Colorize: &colorstring.Colorize{
Colors: colorstring.DefaultColors,
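
The test refactor above relies on `cli.NewMockUi` from mitchellh/cli, which returns a MockUi with its reader and writer buffers already initialized, so tests no longer build the struct by hand. A minimal usage sketch (assuming MockUi's Output writes to OutputWriter with a trailing newline):

```go
package main

import (
	"fmt"

	"github.com/mitchellh/cli"
)

func main() {
	ui := cli.NewMockUi() // buffers pre-wired; no manual struct literal
	ui.Output("hello")
	fmt.Print(ui.OutputWriter.String()) // captured output: "hello\n"
}
```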
diff --git a/vendor/github.com/hashicorp/terraform/command/import.go b/vendor/github.com/hashicorp/terraform/command/import.go
index b1cc623e..e455bb53 100644
--- a/vendor/github.com/hashicorp/terraform/command/import.go
+++ b/vendor/github.com/hashicorp/terraform/command/import.go
@@ -123,15 +123,18 @@ func (c *ImportCommand) Run(args []string) int {
// Load the backend
b, err := c.Backend(&BackendOpts{
- Config: mod.Config(),
- ForceLocal: true,
+ Config: mod.Config(),
})
if err != nil {
c.Ui.Error(fmt.Sprintf("Failed to load backend: %s", err))
return 1
}
- // We require a local backend
+ // We require a backend.Local to build a context.
+ // This isn't necessarily a "local.Local" backend, which provides local
+ // operations, however that is the only current implementation. A
+ // "local.Local" backend also doesn't necessarily provide local state, as
+ // that may be delegated to a "remotestate.Backend".
local, ok := b.(backend.Local)
if !ok {
c.Ui.Error(ErrUnsupportedLocalOp)
@@ -232,11 +235,12 @@ Options:
specifying aliases, such as "aws.eu". Defaults to the
normal provider prefix of the resource being imported.
- -state=path Path to read and save state (unless state-out
- is specified). Defaults to "terraform.tfstate".
+ -state=PATH Path to the source state file. Defaults to the configured
+ backend, or "terraform.tfstate"
- -state-out=path Path to write updated state file. By default, the
- "-state" path will be used.
+ -state-out=PATH Path to the destination state file to write to. If this
+ isn't specified, the source state file will be used. This
+ can be a new or existing path.
-var 'foo=bar' Set a variable in the Terraform configuration. This
flag can be set multiple times. This is only useful
diff --git a/vendor/github.com/hashicorp/terraform/command/import_test.go b/vendor/github.com/hashicorp/terraform/command/import_test.go
index 057ab560..417e6143 100644
--- a/vendor/github.com/hashicorp/terraform/command/import_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/import_test.go
@@ -110,6 +110,88 @@ func TestImport_providerConfig(t *testing.T) {
testStateOutput(t, statePath, testImportStr)
}
+// "remote" state provided by the "local" backend
+func TestImport_remoteState(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("import-provider-remote-state"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ statePath := "imported.tfstate"
+
+ // init our backend
+ ui := new(cli.MockUi)
+ m := Meta{
+ testingOverrides: metaOverridesForProvider(testProvider()),
+ Ui: ui,
+ }
+
+ ic := &InitCommand{
+ Meta: m,
+ providerInstaller: &mockProviderInstaller{
+ Providers: map[string][]string{
+ "test": []string{"1.2.3"},
+ },
+
+ Dir: m.pluginDir(),
+ },
+ }
+
+ if code := ic.Run([]string{}); code != 0 {
+ t.Fatalf("bad: \n%s", ui.ErrorWriter)
+ }
+
+ p := testProvider()
+ ui = new(cli.MockUi)
+ c := &ImportCommand{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
+ }
+
+ p.ImportStateFn = nil
+ p.ImportStateReturn = []*terraform.InstanceState{
+ &terraform.InstanceState{
+ ID: "yay",
+ Ephemeral: terraform.EphemeralState{
+ Type: "test_instance",
+ },
+ },
+ }
+
+ configured := false
+ p.ConfigureFn = func(c *terraform.ResourceConfig) error {
+ configured = true
+
+ if v, ok := c.Get("foo"); !ok || v.(string) != "bar" {
+ return fmt.Errorf("bad value: %#v", v)
+ }
+
+ return nil
+ }
+
+ args := []string{
+ "test_instance.foo",
+ "bar",
+ }
+ if code := c.Run(args); code != 0 {
+ fmt.Println(ui.OutputWriter)
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+ }
+
+ // Verify that we were called
+ if !configured {
+ t.Fatal("Configure should be called")
+ }
+
+ if !p.ImportStateCalled {
+ t.Fatal("ImportState should be called")
+ }
+
+ testStateOutput(t, statePath, testImportStr)
+}
+
func TestImport_providerConfigWithVar(t *testing.T) {
defer testChdir(t, testFixturePath("import-provider-var"))()
diff --git a/vendor/github.com/hashicorp/terraform/command/init.go b/vendor/github.com/hashicorp/terraform/command/init.go
index 427a8ce5..403ca245 100644
--- a/vendor/github.com/hashicorp/terraform/command/init.go
+++ b/vendor/github.com/hashicorp/terraform/command/init.go
@@ -75,6 +75,7 @@ func (c *InitCommand) Run(args []string) int {
Dir: c.pluginDir(),
PluginProtocolVersion: plugin.Handshake.ProtocolVersion,
SkipVerify: !flagVerifyPlugins,
+ Ui: c.Ui,
}
}
@@ -310,8 +311,12 @@ func (c *InitCommand) getProviders(path string, state *terraform.State, upgrade
var errs error
if c.getPlugins {
+ if len(missing) > 0 {
+ c.Ui.Output(fmt.Sprintf(" - Checking for available provider plugins on %s...",
+ discovery.GetReleaseHost()))
+ }
+
for provider, reqd := range missing {
- c.Ui.Output(fmt.Sprintf("- Downloading plugin for provider %q...", provider))
_, err := c.providerInstaller.Get(provider, reqd.Versions)
if err != nil {
diff --git a/vendor/github.com/hashicorp/terraform/command/init_test.go b/vendor/github.com/hashicorp/terraform/command/init_test.go
index ebfa397f..ef192b7e 100644
--- a/vendor/github.com/hashicorp/terraform/command/init_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/init_test.go
@@ -645,6 +645,49 @@ func TestInit_findVendoredProviders(t *testing.T) {
}
}
+// make sure we can locate providers defined in the legacy rc file
+func TestInit_rcProviders(t *testing.T) {
+ // Create a temporary working directory that is empty
+ td := tempDir(t)
+
+ configDirName := "init-legacy-rc"
+ copy.CopyDir(testFixturePath(configDirName), filepath.Join(td, configDirName))
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ pluginDir := filepath.Join(td, "custom")
+ pluginPath := filepath.Join(pluginDir, "terraform-provider-legacy")
+
+ ui := new(cli.MockUi)
+ m := Meta{
+ Ui: ui,
+ PluginOverrides: &PluginOverrides{
+ Providers: map[string]string{
+ "legacy": pluginPath,
+ },
+ },
+ }
+
+ c := &InitCommand{
+ Meta: m,
+ providerInstaller: &mockProviderInstaller{},
+ }
+
+ // make our plugin paths
+ if err := os.MkdirAll(pluginDir, 0755); err != nil {
+ t.Fatal(err)
+ }
+
+ if err := ioutil.WriteFile(pluginPath, []byte("test bin"), 0755); err != nil {
+ t.Fatal(err)
+ }
+
+ args := []string{configDirName}
+ if code := c.Run(args); code != 0 {
+ t.Fatalf("bad: \n%s", ui.ErrorWriter.String())
+ }
+}
+
func TestInit_getUpgradePlugins(t *testing.T) {
// Create a temporary working directory that is empty
td := tempDir(t)
diff --git a/vendor/github.com/hashicorp/terraform/command/plugins.go b/vendor/github.com/hashicorp/terraform/command/plugins.go
index ca94f07b..ce26b0f8 100644
--- a/vendor/github.com/hashicorp/terraform/command/plugins.go
+++ b/vendor/github.com/hashicorp/terraform/command/plugins.go
@@ -172,6 +172,12 @@ func (m *Meta) pluginDirs(includeAutoInstalled bool) []string {
// the defined search paths.
func (m *Meta) providerPluginSet() discovery.PluginMetaSet {
plugins := discovery.FindPlugins("provider", m.pluginDirs(true))
+
+ // Add providers defined in the legacy .terraformrc.
+ if m.PluginOverrides != nil {
+ plugins = plugins.OverridePaths(m.PluginOverrides.Providers)
+ }
+
plugins, _ = plugins.ValidateVersions()
for p := range plugins {
@@ -198,6 +204,12 @@ func (m *Meta) providerPluginAutoInstalledSet() discovery.PluginMetaSet {
// in all locations *except* the auto-install directory.
func (m *Meta) providerPluginManuallyInstalledSet() discovery.PluginMetaSet {
plugins := discovery.FindPlugins("provider", m.pluginDirs(false))
+
+ // Add providers defined in the legacy .terraformrc.
+ if m.PluginOverrides != nil {
+ plugins = plugins.OverridePaths(m.PluginOverrides.Providers)
+ }
+
plugins, _ = plugins.ValidateVersions()
for p := range plugins {
diff --git a/vendor/github.com/hashicorp/terraform/command/state_meta.go b/vendor/github.com/hashicorp/terraform/command/state_meta.go
index dc17fa07..aa79e9d4 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_meta.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_meta.go
@@ -11,30 +11,32 @@ import (
)
// StateMeta is the meta struct that should be embedded in state subcommands.
-type StateMeta struct{}
+type StateMeta struct {
+ Meta
+}
// State returns the state for this meta. This gets the appropriate state from
// the backend, but changes the way that backups are done. This configures
// backups to be timestamped rather than just the original state path plus a
// backup path.
-func (c *StateMeta) State(m *Meta) (state.State, error) {
+func (c *StateMeta) State() (state.State, error) {
var realState state.State
- backupPath := m.backupPath
- stateOutPath := m.statePath
+ backupPath := c.backupPath
+ stateOutPath := c.statePath
// use the specified state
- if m.statePath != "" {
+ if c.statePath != "" {
realState = &state.LocalState{
- Path: m.statePath,
+ Path: c.statePath,
}
} else {
// Load the backend
- b, err := m.Backend(nil)
+ b, err := c.Backend(nil)
if err != nil {
return nil, err
}
- env := m.Workspace()
+ env := c.Workspace()
// Get the state
s, err := b.State(env)
if err != nil {
@@ -42,7 +44,7 @@ func (c *StateMeta) State(m *Meta) (state.State, error) {
}
// Get a local backend
- localRaw, err := m.Backend(&BackendOpts{ForceLocal: true})
+ localRaw, err := c.Backend(&BackendOpts{ForceLocal: true})
if err != nil {
// This should never fail
panic(err)
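
As a minimal sketch of the refactor above (an illustration, not part of this diff): because StateMeta now embeds Meta, a state subcommand only embeds StateMeta and calls State() with no arguments. The HypotheticalCommand name is invented, and the sketch assumes it sits in the same command package:

```
package command

// HypotheticalCommand is an invented state subcommand. Embedding StateMeta
// (which now embeds Meta) makes State(), Ui, statePath and backupPath all
// reachable directly on the receiver.
type HypotheticalCommand struct {
	StateMeta
}

func (c *HypotheticalCommand) Run(args []string) int {
	// State() no longer takes a *Meta argument; it reads c.statePath and
	// c.backupPath from the embedded StateMeta.
	st, err := c.State()
	if err != nil {
		c.Ui.Error(err.Error())
		return 1
	}
	_ = st
	return 0
}
```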
diff --git a/vendor/github.com/hashicorp/terraform/command/state_mv.go b/vendor/github.com/hashicorp/terraform/command/state_mv.go
index 5c51f2da..e2f89c96 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_mv.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_mv.go
@@ -10,7 +10,6 @@ import (
// StateMvCommand is a Command implementation that shows a single resource.
type StateMvCommand struct {
- Meta
StateMeta
}
@@ -21,12 +20,13 @@ func (c *StateMvCommand) Run(args []string) int {
}
// We create two metas to track the two states
- var meta1, meta2 Meta
+ var backupPathOut, statePathOut string
+
cmdFlags := c.Meta.flagSet("state mv")
- cmdFlags.StringVar(&meta1.backupPath, "backup", "-", "backup")
- cmdFlags.StringVar(&meta1.statePath, "state", DefaultStateFilename, "path")
- cmdFlags.StringVar(&meta2.backupPath, "backup-out", "-", "backup")
- cmdFlags.StringVar(&meta2.statePath, "state-out", "", "path")
+ cmdFlags.StringVar(&c.backupPath, "backup", "-", "backup")
+ cmdFlags.StringVar(&c.statePath, "state", "", "path")
+ cmdFlags.StringVar(&backupPathOut, "backup-out", "-", "backup")
+ cmdFlags.StringVar(&statePathOut, "state-out", "", "path")
if err := cmdFlags.Parse(args); err != nil {
return cli.RunResultHelp
}
@@ -36,16 +36,11 @@ func (c *StateMvCommand) Run(args []string) int {
return cli.RunResultHelp
}
- // Copy the `-state` flag for output if we weren't given a custom one
- if meta2.statePath == "" {
- meta2.statePath = meta1.statePath
- }
-
// Read the from state
- stateFrom, err := c.StateMeta.State(&meta1)
+ stateFrom, err := c.State()
if err != nil {
c.Ui.Error(fmt.Sprintf(errStateLoadingState, err))
- return cli.RunResultHelp
+ return 1
}
if err := stateFrom.RefreshState(); err != nil {
@@ -62,11 +57,14 @@ func (c *StateMvCommand) Run(args []string) int {
// Read the destination state
stateTo := stateFrom
stateToReal := stateFromReal
- if meta2.statePath != meta1.statePath {
- stateTo, err = c.StateMeta.State(&meta2)
+
+ if statePathOut != "" {
+ c.statePath = statePathOut
+ c.backupPath = backupPathOut
+ stateTo, err = c.State()
if err != nil {
c.Ui.Error(fmt.Sprintf(errStateLoadingState, err))
- return cli.RunResultHelp
+ return 1
}
if err := stateTo.RefreshState(); err != nil {
@@ -185,28 +183,30 @@ func (c *StateMvCommand) addableResult(results []*terraform.StateFilterResult) i
func (c *StateMvCommand) Help() string {
helpText := `
-Usage: terraform state mv [options] ADDRESS ADDRESS
+Usage: terraform state mv [options] SOURCE DESTINATION
- Move an item in the state to another location or to a completely different
- state file.
+ This command will move the item matched by the source address to the
+ destination address. The destination address may be in a completely
+ different state file.
- This command is useful for module refactors (moving items into a module),
- configuration refactors (moving items to a completely different or new
- state file), or generally renaming of resources.
+ This can be used for simple resource renaming, moving items to and from
+ a module, moving entire modules, and more. Because this command can also
+ move data to a completely new state, it can be used for refactoring
+ one configuration into multiple separately managed Terraform configurations.
- This command creates a timestamped backup of the state on every invocation.
- This can't be disabled. Due to the destructive nature of this command,
- the backup is ensured by Terraform for safety reasons.
+ This command will output a backup copy of the state prior to saving any
+ changes. The backup cannot be disabled. Due to the destructive nature
+ of this command, backups are required.
- If you're moving from one state file to a different state file, a backup
- will be created for each state file.
+ If you're moving an item to a different state file, a backup will be created
+ for each state file.
Options:
-backup=PATH Path where Terraform should write the backup for the original
state. This can't be disabled. If not set, Terraform
will write it to the same path as the statefile with
- a backup extension.
+ a ".backup" extension.
-backup-out=PATH Path where Terraform should write the backup for the destination
state. This can't be disabled. If not set, Terraform
@@ -215,13 +215,12 @@ Options:
to be specified if -state-out is set to a different path
than -state.
- -state=PATH Path to a Terraform state file to use to look
- up Terraform-managed resources. By default it will
- use the state "terraform.tfstate" if it exists.
+ -state=PATH Path to the source state file. Defaults to the configured
+ backend, or "terraform.tfstate".
- -state-out=PATH Path to the destination state file to move the item
- to. This defaults to the same statefile. This will
- overwrite the destination state file.
+ -state-out=PATH Path to the destination state file to write to. If this
+ isn't specified, the source state file will be used. This
+ can be a new or existing path.
`
return strings.TrimSpace(helpText)
diff --git a/vendor/github.com/hashicorp/terraform/command/state_mv_test.go b/vendor/github.com/hashicorp/terraform/command/state_mv_test.go
index d3f05b31..5a0d2ab4 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_mv_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_mv_test.go
@@ -47,9 +47,11 @@ func TestStateMv(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -133,9 +135,11 @@ func TestStateMv_explicitWithBackend(t *testing.T) {
p := testProvider()
ui = new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -194,9 +198,11 @@ func TestStateMv_backupExplicit(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -244,9 +250,11 @@ func TestStateMv_stateOutNew(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -316,9 +324,11 @@ func TestStateMv_stateOutExisting(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -357,9 +367,11 @@ func TestStateMv_noState(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -418,9 +430,11 @@ func TestStateMv_stateOutNew_count(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -596,9 +610,11 @@ func TestStateMv_stateOutNew_largeCount(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -677,9 +693,11 @@ func TestStateMv_stateOutNew_nestedModule(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateMvCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -705,6 +723,160 @@ func TestStateMv_stateOutNew_nestedModule(t *testing.T) {
testStateOutput(t, backups[0], testStateMvNestedModule_stateOutOriginal)
}
+func TestStateMv_withinBackend(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("backend-unchanged"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ state := &terraform.State{
+ Modules: []*terraform.ModuleState{
+ &terraform.ModuleState{
+ Path: []string{"root"},
+ Resources: map[string]*terraform.ResourceState{
+ "test_instance.foo": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "bar",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+
+ "test_instance.baz": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "foo",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+
+ // the local backend state file is "foo"
+ statePath := "local-state.tfstate"
+ backupPath := "local-state.backup"
+
+ f, err := os.Create(statePath)
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer f.Close()
+
+ if err := terraform.WriteState(state, f); err != nil {
+ t.Fatal(err)
+ }
+
+ p := testProvider()
+ ui := new(cli.MockUi)
+ c := &StateMvCommand{
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
+ },
+ }
+
+ args := []string{
+ "-backup", backupPath,
+ "test_instance.foo",
+ "test_instance.bar",
+ }
+ if code := c.Run(args); code != 0 {
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+ }
+
+ testStateOutput(t, statePath, testStateMvOutput)
+ testStateOutput(t, backupPath, testStateMvOutputOriginal)
+}
+
+func TestStateMv_fromBackendToLocal(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("backend-unchanged"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ state := &terraform.State{
+ Modules: []*terraform.ModuleState{
+ &terraform.ModuleState{
+ Path: []string{"root"},
+ Resources: map[string]*terraform.ResourceState{
+ "test_instance.foo": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "bar",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+
+ "test_instance.baz": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "foo",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+
+ // the local backend state file is "foo"
+ statePath := "local-state.tfstate"
+
+ // real "local" state file
+ statePathOut := "real-local.tfstate"
+
+ f, err := os.Create(statePath)
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer f.Close()
+
+ if err := terraform.WriteState(state, f); err != nil {
+ t.Fatal(err)
+ }
+
+ p := testProvider()
+ ui := new(cli.MockUi)
+ c := &StateMvCommand{
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
+ },
+ }
+
+ args := []string{
+ "-state-out", statePathOut,
+ "test_instance.foo",
+ "test_instance.bar",
+ }
+ if code := c.Run(args); code != 0 {
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+ }
+
+ testStateOutput(t, statePathOut, testStateMvCount_stateOutSrc)
+
+ // the backend state should be left with only baz
+ testStateOutput(t, statePath, testStateMvOriginal_backend)
+}
+
const testStateMvOutputOriginal = `
test_instance.baz:
ID = foo
@@ -943,3 +1115,10 @@ const testStateMvExisting_stateDstOriginal = `
test_instance.qux:
ID = bar
`
+
+const testStateMvOriginal_backend = `
+test_instance.baz:
+ ID = foo
+ bar = value
+ foo = value
+`
diff --git a/vendor/github.com/hashicorp/terraform/command/state_push_test.go b/vendor/github.com/hashicorp/terraform/command/state_push_test.go
index 2e2e8700..bee9d477 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_push_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_push_test.go
@@ -5,6 +5,8 @@ import (
"os"
"testing"
+ "github.com/hashicorp/terraform/backend"
+ "github.com/hashicorp/terraform/backend/remote-state/inmem"
"github.com/hashicorp/terraform/helper/copy"
"github.com/hashicorp/terraform/terraform"
"github.com/mitchellh/cli"
@@ -190,3 +192,56 @@ func TestStatePush_serialOlder(t *testing.T) {
t.Fatalf("bad: %#v", actual)
}
}
+
+func TestStatePush_forceRemoteState(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("inmem-backend"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+ defer inmem.Reset()
+
+ s := terraform.NewState()
+ statePath := testStateFile(t, s)
+
+ // init the backend
+ ui := new(cli.MockUi)
+ initCmd := &InitCommand{
+ Meta: Meta{Ui: ui},
+ }
+ if code := initCmd.Run([]string{}); code != 0 {
+ t.Fatalf("bad: \n%s", ui.ErrorWriter.String())
+ }
+
+ // create a new workspace
+ ui = new(cli.MockUi)
+ newCmd := &WorkspaceNewCommand{
+ Meta: Meta{Ui: ui},
+ }
+ if code := newCmd.Run([]string{"test"}); code != 0 {
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter)
+ }
+
+ // put a dummy state in place, so we have something to force
+ b := backend.TestBackendConfig(t, inmem.New(), nil)
+ sMgr, err := b.State("test")
+ if err != nil {
+ t.Fatal(err)
+ }
+ if err := sMgr.WriteState(terraform.NewState()); err != nil {
+ t.Fatal(err)
+ }
+ if err := sMgr.PersistState(); err != nil {
+ t.Fatal(err)
+ }
+
+ // push our local state to that new workspace
+ ui = new(cli.MockUi)
+ c := &StatePushCommand{
+ Meta: Meta{Ui: ui},
+ }
+
+ args := []string{"-force", statePath}
+ if code := c.Run(args); code != 0 {
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/command/state_rm.go b/vendor/github.com/hashicorp/terraform/command/state_rm.go
index 40dc8705..e106afb8 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_rm.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_rm.go
@@ -9,7 +9,6 @@ import (
// StateRmCommand is a Command implementation that shows a single resource.
type StateRmCommand struct {
- Meta
StateMeta
}
@@ -20,8 +19,8 @@ func (c *StateRmCommand) Run(args []string) int {
}
cmdFlags := c.Meta.flagSet("state rm")
- cmdFlags.StringVar(&c.Meta.backupPath, "backup", "-", "backup")
- cmdFlags.StringVar(&c.Meta.statePath, "state", DefaultStateFilename, "path")
+ cmdFlags.StringVar(&c.backupPath, "backup", "-", "backup")
+ cmdFlags.StringVar(&c.statePath, "state", "", "path")
if err := cmdFlags.Parse(args); err != nil {
return cli.RunResultHelp
}
@@ -32,10 +31,10 @@ func (c *StateRmCommand) Run(args []string) int {
return 1
}
- state, err := c.StateMeta.State(&c.Meta)
+ state, err := c.State()
if err != nil {
c.Ui.Error(fmt.Sprintf(errStateLoadingState, err))
- return cli.RunResultHelp
+ return 1
}
if err := state.RefreshState(); err != nil {
c.Ui.Error(fmt.Sprintf("Failed to load state: %s", err))
@@ -88,9 +87,8 @@ Options:
will write it to the same path as the statefile with
a backup extension.
- -state=statefile Path to a Terraform state file to use to look
- up Terraform-managed resources. By default it will
- use the state "terraform.tfstate" if it exists.
+ -state=PATH Path to the source state file. Defaults to the configured
+ backend, or "terraform.tfstate".
`
return strings.TrimSpace(helpText)
diff --git a/vendor/github.com/hashicorp/terraform/command/state_rm_test.go b/vendor/github.com/hashicorp/terraform/command/state_rm_test.go
index 4a9a41c2..3a447779 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_rm_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_rm_test.go
@@ -6,6 +6,7 @@ import (
"strings"
"testing"
+ "github.com/hashicorp/terraform/helper/copy"
"github.com/hashicorp/terraform/terraform"
"github.com/mitchellh/cli"
)
@@ -47,9 +48,11 @@ func TestStateRm(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateRmCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -109,9 +112,11 @@ func TestStateRmNoArgs(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateRmCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -169,9 +174,11 @@ func TestStateRm_backupExplicit(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateRmCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -198,9 +205,11 @@ func TestStateRm_noState(t *testing.T) {
p := testProvider()
ui := new(cli.MockUi)
c := &StateRmCommand{
- Meta: Meta{
- testingOverrides: metaOverridesForProvider(p),
- Ui: ui,
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
},
}
@@ -210,6 +219,110 @@ func TestStateRm_noState(t *testing.T) {
}
}
+func TestStateRm_needsInit(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("backend-change"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ p := testProvider()
+ ui := new(cli.MockUi)
+ c := &StateRmCommand{
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
+ },
+ }
+
+ args := []string{"foo"}
+ if code := c.Run(args); code == 0 {
+ t.Fatal("expected error\noutput:", ui.OutputWriter)
+ }
+
+ if !strings.Contains(ui.ErrorWriter.String(), "Initialization") {
+ t.Fatal("expected initialization error, got:\n", ui.ErrorWriter)
+ }
+}
+
+func TestStateRm_backendState(t *testing.T) {
+ td := tempDir(t)
+ copy.CopyDir(testFixturePath("backend-unchanged"), td)
+ defer os.RemoveAll(td)
+ defer testChdir(t, td)()
+
+ state := &terraform.State{
+ Modules: []*terraform.ModuleState{
+ &terraform.ModuleState{
+ Path: []string{"root"},
+ Resources: map[string]*terraform.ResourceState{
+ "test_instance.foo": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "bar",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+
+ "test_instance.bar": &terraform.ResourceState{
+ Type: "test_instance",
+ Primary: &terraform.InstanceState{
+ ID: "foo",
+ Attributes: map[string]string{
+ "foo": "value",
+ "bar": "value",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+
+ // the local backend state file is "foo"
+ statePath := "local-state.tfstate"
+ backupPath := "local-state.backup"
+
+ f, err := os.Create(statePath)
+ if err != nil {
+ t.Fatal(err)
+ }
+ defer f.Close()
+
+ if err := terraform.WriteState(state, f); err != nil {
+ t.Fatal(err)
+ }
+
+ p := testProvider()
+ ui := new(cli.MockUi)
+ c := &StateRmCommand{
+ StateMeta{
+ Meta: Meta{
+ testingOverrides: metaOverridesForProvider(p),
+ Ui: ui,
+ },
+ },
+ }
+
+ args := []string{
+ "-backup", backupPath,
+ "test_instance.foo",
+ }
+ if code := c.Run(args); code != 0 {
+ t.Fatalf("bad: %d\n\n%s", code, ui.ErrorWriter.String())
+ }
+
+ // Test it is correct
+ testStateOutput(t, statePath, testStateRmOutput)
+
+ // Test backup
+ testStateOutput(t, backupPath, testStateRmOutputOriginal)
+}
+
const testStateRmOutputOriginal = `
test_instance.bar:
ID = foo
diff --git a/vendor/github.com/hashicorp/terraform/command/state_test.go b/vendor/github.com/hashicorp/terraform/command/state_test.go
index 28d64e35..433c6336 100644
--- a/vendor/github.com/hashicorp/terraform/command/state_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/state_test.go
@@ -28,7 +28,7 @@ func TestStateDefaultBackupExtension(t *testing.T) {
tmp, cwd := testCwd(t)
defer testFixCwd(t, tmp, cwd)
- s, err := (&StateMeta{}).State(&Meta{})
+ s, err := (&StateMeta{}).State()
if err != nil {
t.Fatal(err)
}
diff --git a/vendor/github.com/hashicorp/terraform/command/test-fixtures/import-provider-remote-state/main.tf b/vendor/github.com/hashicorp/terraform/command/test-fixtures/import-provider-remote-state/main.tf
new file mode 100644
index 00000000..23ebfb4c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/command/test-fixtures/import-provider-remote-state/main.tf
@@ -0,0 +1,12 @@
+terraform {
+ backend "local" {
+ path = "imported.tfstate"
+ }
+}
+
+provider "test" {
+ foo = "bar"
+}
+
+resource "test_instance" "foo" {
+}
diff --git a/vendor/github.com/hashicorp/terraform/command/test-fixtures/init-legacy-rc/main.tf b/vendor/github.com/hashicorp/terraform/command/test-fixtures/init-legacy-rc/main.tf
new file mode 100644
index 00000000..4b04a89e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/command/test-fixtures/init-legacy-rc/main.tf
@@ -0,0 +1 @@
+provider "legacy" {}
diff --git a/vendor/github.com/hashicorp/terraform/command/test-fixtures/inmem-backend/main.tf b/vendor/github.com/hashicorp/terraform/command/test-fixtures/inmem-backend/main.tf
new file mode 100644
index 00000000..df9309a5
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/command/test-fixtures/inmem-backend/main.tf
@@ -0,0 +1,3 @@
+terraform {
+ backend "inmem" {}
+}
diff --git a/vendor/github.com/hashicorp/terraform/command/unlock_test.go b/vendor/github.com/hashicorp/terraform/command/unlock_test.go
index 342df3b6..b6dfceb4 100644
--- a/vendor/github.com/hashicorp/terraform/command/unlock_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/unlock_test.go
@@ -4,6 +4,7 @@ import (
"os"
"testing"
+ "github.com/hashicorp/terraform/backend/remote-state/inmem"
"github.com/hashicorp/terraform/helper/copy"
"github.com/hashicorp/terraform/terraform"
"github.com/mitchellh/cli"
@@ -57,6 +58,7 @@ func TestUnlock_inmemBackend(t *testing.T) {
copy.CopyDir(testFixturePath("backend-inmem-locked"), td)
defer os.RemoveAll(td)
defer testChdir(t, td)()
+ defer inmem.Reset()
// init backend
ui := new(cli.MockUi)
diff --git a/vendor/github.com/hashicorp/terraform/command/workspace_command_test.go b/vendor/github.com/hashicorp/terraform/command/workspace_command_test.go
index cfa261b4..7baabbed 100644
--- a/vendor/github.com/hashicorp/terraform/command/workspace_command_test.go
+++ b/vendor/github.com/hashicorp/terraform/command/workspace_command_test.go
@@ -9,6 +9,8 @@ import (
"github.com/hashicorp/terraform/backend"
"github.com/hashicorp/terraform/backend/local"
+ "github.com/hashicorp/terraform/backend/remote-state/inmem"
+ "github.com/hashicorp/terraform/helper/copy"
"github.com/hashicorp/terraform/state"
"github.com/hashicorp/terraform/terraform"
"github.com/mitchellh/cli"
@@ -211,9 +213,19 @@ func TestWorkspace_createInvalid(t *testing.T) {
func TestWorkspace_createWithState(t *testing.T) {
td := tempDir(t)
- os.MkdirAll(td, 0755)
+ copy.CopyDir(testFixturePath("inmem-backend"), td)
defer os.RemoveAll(td)
defer testChdir(t, td)()
+ defer inmem.Reset()
+
+ // init the backend
+ ui := new(cli.MockUi)
+ initCmd := &InitCommand{
+ Meta: Meta{Ui: ui},
+ }
+ if code := initCmd.Run([]string{}); code != 0 {
+ t.Fatalf("bad: \n%s", ui.ErrorWriter.String())
+ }
// create a non-empty state
originalState := &terraform.State{
@@ -237,8 +249,10 @@ func TestWorkspace_createWithState(t *testing.T) {
t.Fatal(err)
}
- args := []string{"-state", "test.tfstate", "test"}
- ui := new(cli.MockUi)
+ workspace := "test_workspace"
+
+ args := []string{"-state", "test.tfstate", workspace}
+ ui = new(cli.MockUi)
newCmd := &WorkspaceNewCommand{
Meta: Meta{Ui: ui},
}
@@ -253,7 +267,14 @@ func TestWorkspace_createWithState(t *testing.T) {
t.Fatal(err)
}
- newState := envState.State()
+ b := backend.TestBackendConfig(t, inmem.New(), nil)
+ sMgr, err := b.State(workspace)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ newState := sMgr.State()
+
originalState.Version = newState.Version // the round-trip through the state manager implicitly populates version
if !originalState.Equal(newState) {
t.Fatalf("states not equal\norig: %s\nnew: %s", originalState, newState)
diff --git a/vendor/github.com/hashicorp/terraform/commands.go b/vendor/github.com/hashicorp/terraform/commands.go
index 85f1794b..910245a6 100644
--- a/vendor/github.com/hashicorp/terraform/commands.go
+++ b/vendor/github.com/hashicorp/terraform/commands.go
@@ -276,13 +276,17 @@ func init() {
"state rm": func() (cli.Command, error) {
return &command.StateRmCommand{
- Meta: meta,
+ StateMeta: command.StateMeta{
+ Meta: meta,
+ },
}, nil
},
"state mv": func() (cli.Command, error) {
return &command.StateMvCommand{
- Meta: meta,
+ StateMeta: command.StateMeta{
+ Meta: meta,
+ },
}, nil
},
diff --git a/vendor/github.com/hashicorp/terraform/config/loader_hcl.go b/vendor/github.com/hashicorp/terraform/config/loader_hcl.go
index e85e4935..bcd4d43a 100644
--- a/vendor/github.com/hashicorp/terraform/config/loader_hcl.go
+++ b/vendor/github.com/hashicorp/terraform/config/loader_hcl.go
@@ -21,6 +21,7 @@ var ReservedResourceFields = []string{
"connection",
"count",
"depends_on",
+ "id",
"lifecycle",
"provider",
"provisioner",
@@ -28,6 +29,7 @@ var ReservedResourceFields = []string{
var ReservedProviderFields = []string{
"alias",
+ "id",
"version",
}
diff --git a/vendor/github.com/hashicorp/terraform/dag/walk.go b/vendor/github.com/hashicorp/terraform/dag/walk.go
index 23c87adc..f03b1003 100644
--- a/vendor/github.com/hashicorp/terraform/dag/walk.go
+++ b/vendor/github.com/hashicorp/terraform/dag/walk.go
@@ -166,7 +166,7 @@ func (w *Walker) Update(g *AcyclicGraph) {
w.wait.Add(1)
// Add to our own set so we know about it already
- log.Printf("[DEBUG] dag/walk: added new vertex: %q", VertexName(v))
+ log.Printf("[TRACE] dag/walk: added new vertex: %q", VertexName(v))
w.vertices.Add(raw)
// Initialize the vertex info
@@ -198,7 +198,7 @@ func (w *Walker) Update(g *AcyclicGraph) {
// Delete it out of the map
delete(w.vertexMap, v)
- log.Printf("[DEBUG] dag/walk: removed vertex: %q", VertexName(v))
+ log.Printf("[TRACE] dag/walk: removed vertex: %q", VertexName(v))
w.vertices.Delete(raw)
}
@@ -229,7 +229,7 @@ func (w *Walker) Update(g *AcyclicGraph) {
changedDeps.Add(waiter)
log.Printf(
- "[DEBUG] dag/walk: added edge: %q waiting on %q",
+ "[TRACE] dag/walk: added edge: %q waiting on %q",
VertexName(waiter), VertexName(dep))
w.edges.Add(raw)
}
@@ -253,7 +253,7 @@ func (w *Walker) Update(g *AcyclicGraph) {
changedDeps.Add(waiter)
log.Printf(
- "[DEBUG] dag/walk: removed edge: %q waiting on %q",
+ "[TRACE] dag/walk: removed edge: %q waiting on %q",
VertexName(waiter), VertexName(dep))
w.edges.Delete(raw)
}
@@ -296,7 +296,7 @@ func (w *Walker) Update(g *AcyclicGraph) {
info.depsCancelCh = cancelCh
log.Printf(
- "[DEBUG] dag/walk: dependencies changed for %q, sending new deps",
+ "[TRACE] dag/walk: dependencies changed for %q, sending new deps",
VertexName(v))
// Start the waiter
@@ -383,10 +383,10 @@ func (w *Walker) walkVertex(v Vertex, info *walkerVertex) {
// Run our callback or note that our upstream failed
var err error
if depsSuccess {
- log.Printf("[DEBUG] dag/walk: walking %q", VertexName(v))
+ log.Printf("[TRACE] dag/walk: walking %q", VertexName(v))
err = w.Callback(v)
} else {
- log.Printf("[DEBUG] dag/walk: upstream errored, not walking %q", VertexName(v))
+ log.Printf("[TRACE] dag/walk: upstream errored, not walking %q", VertexName(v))
err = errWalkUpstream
}
@@ -423,7 +423,7 @@ func (w *Walker) waitDeps(
return
case <-time.After(time.Second * 5):
- log.Printf("[DEBUG] dag/walk: vertex %q, waiting for: %q",
+ log.Printf("[TRACE] dag/walk: vertex %q, waiting for: %q",
VertexName(v), VertexName(dep))
}
}
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/field_reader_diff.go b/vendor/github.com/hashicorp/terraform/helper/schema/field_reader_diff.go
index 16bbae29..644b93e6 100644
--- a/vendor/github.com/hashicorp/terraform/helper/schema/field_reader_diff.go
+++ b/vendor/github.com/hashicorp/terraform/helper/schema/field_reader_diff.go
@@ -29,29 +29,59 @@ type DiffFieldReader struct {
Diff *terraform.InstanceDiff
Source FieldReader
Schema map[string]*Schema
+
+ // cache for memoizing ReadField calls.
+ cache map[string]cachedFieldReadResult
+}
+
+type cachedFieldReadResult struct {
+ val FieldReadResult
+ err error
}
func (r *DiffFieldReader) ReadField(address []string) (FieldReadResult, error) {
+ if r.cache == nil {
+ r.cache = make(map[string]cachedFieldReadResult)
+ }
+
+ // Create the cache key by joining the address parts with a separator
+ // that isn't a valid part of an address. This assumes that the Source
+ // and Schema are not changed for the life of this DiffFieldReader.
+ cacheKey := strings.Join(address, "|")
+ if cached, ok := r.cache[cacheKey]; ok {
+ return cached.val, cached.err
+ }
+
schemaList := addrToSchema(address, r.Schema)
if len(schemaList) == 0 {
+ r.cache[cacheKey] = cachedFieldReadResult{}
return FieldReadResult{}, nil
}
+ var res FieldReadResult
+ var err error
+
schema := schemaList[len(schemaList)-1]
switch schema.Type {
case TypeBool, TypeInt, TypeFloat, TypeString:
- return r.readPrimitive(address, schema)
+ res, err = r.readPrimitive(address, schema)
case TypeList:
- return readListField(r, address, schema)
+ res, err = readListField(r, address, schema)
case TypeMap:
- return r.readMap(address, schema)
+ res, err = r.readMap(address, schema)
case TypeSet:
- return r.readSet(address, schema)
+ res, err = r.readSet(address, schema)
case typeObject:
- return readObjectField(r, address, schema.Elem.(map[string]*Schema))
+ res, err = readObjectField(r, address, schema.Elem.(map[string]*Schema))
default:
panic(fmt.Sprintf("Unknown type: %#v", schema.Type))
}
+
+ r.cache[cacheKey] = cachedFieldReadResult{
+ val: res,
+ err: err,
+ }
+ return res, err
}
func (r *DiffFieldReader) readMap(
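
The caching added to ReadField above is plain memoization keyed on the joined address. Here is a self-contained sketch of the same shape (not DiffFieldReader itself; the reader and result types are invented stand-ins):

```
package main

import (
	"fmt"
	"strings"
)

// result mirrors cachedFieldReadResult: the value and the error are cached
// together so a repeated lookup replays both.
type result struct {
	val string
	err error
}

type reader struct {
	cache map[string]result
}

func (r *reader) read(address []string) (string, error) {
	if r.cache == nil {
		r.cache = make(map[string]result)
	}
	// "|" works as a separator because it can't appear inside a valid
	// address element, so distinct addresses produce distinct keys.
	key := strings.Join(address, "|")
	if c, ok := r.cache[key]; ok {
		return c.val, c.err
	}
	val := strings.Join(address, ".") // stand-in for the expensive read
	r.cache[key] = result{val: val}
	return val, nil
}

func main() {
	r := &reader{}
	first, _ := r.read([]string{"ports", "0"})
	second, _ := r.read([]string{"ports", "0"}) // served from the cache
	fmt.Println(first, second)
}
```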
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go
index b2bc8f6c..15aa0b5d 100644
--- a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go
+++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data.go
@@ -104,6 +104,22 @@ func (d *ResourceData) GetOk(key string) (interface{}, bool) {
return r.Value, exists
}
+// GetOkExists returns the data for a given key and whether or not the key
+// has been set, regardless of whether that value is the zero value for its
+// type. This is primarily useful for determining whether a boolean
+// attribute that is Optional and has no Default has been explicitly set.
+//
+// This is nearly the same function as GetOk, except it does not check
+// against the zero value of the attribute's type. This allows attributes
+// without a default to be checked for a literal assignment, regardless of
+// the zero value for that type.
+// This should only be used when absolutely required.
+func (d *ResourceData) GetOkExists(key string) (interface{}, bool) {
+ r := d.getRaw(key, getSourceSet)
+ exists := r.Exists && !r.Computed
+ return r.Value, exists
+}
+
func (d *ResourceData) getRaw(key string, level getSource) getResult {
var parts []string
if key != "" {
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data_test.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data_test.go
index 615a0f7f..09aefb8f 100644
--- a/vendor/github.com/hashicorp/terraform/helper/schema/resource_data_test.go
+++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource_data_test.go
@@ -1082,6 +1082,258 @@ func TestResourceDataGetOk(t *testing.T) {
}
}
+func TestResourceDataGetOkExists(t *testing.T) {
+ cases := []struct {
+ Name string
+ Schema map[string]*Schema
+ State *terraform.InstanceState
+ Diff *terraform.InstanceDiff
+ Key string
+ Value interface{}
+ Ok bool
+ }{
+ /*
+ * Primitives
+ */
+ {
+ Name: "string-literal-empty",
+ Schema: map[string]*Schema{
+ "availability_zone": {
+ Type: TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
+ },
+
+ State: nil,
+
+ Diff: &terraform.InstanceDiff{
+ Attributes: map[string]*terraform.ResourceAttrDiff{
+ "availability_zone": {
+ Old: "",
+ New: "",
+ },
+ },
+ },
+
+ Key: "availability_zone",
+ Value: "",
+ Ok: true,
+ },
+
+ {
+ Name: "string-computed-empty",
+ Schema: map[string]*Schema{
+ "availability_zone": {
+ Type: TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
+ },
+
+ State: nil,
+
+ Diff: &terraform.InstanceDiff{
+ Attributes: map[string]*terraform.ResourceAttrDiff{
+ "availability_zone": {
+ Old: "",
+ New: "",
+ NewComputed: true,
+ },
+ },
+ },
+
+ Key: "availability_zone",
+ Value: "",
+ Ok: false,
+ },
+
+ {
+ Name: "string-optional-computed-nil-diff",
+ Schema: map[string]*Schema{
+ "availability_zone": {
+ Type: TypeString,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
+ },
+
+ State: nil,
+
+ Diff: nil,
+
+ Key: "availability_zone",
+ Value: "",
+ Ok: false,
+ },
+
+ /*
+ * Lists
+ */
+
+ {
+ Name: "list-optional",
+ Schema: map[string]*Schema{
+ "ports": {
+ Type: TypeList,
+ Optional: true,
+ Elem: &Schema{Type: TypeInt},
+ },
+ },
+
+ State: nil,
+
+ Diff: nil,
+
+ Key: "ports",
+ Value: []interface{}{},
+ Ok: false,
+ },
+
+ /*
+ * Map
+ */
+
+ {
+ Name: "map-optional",
+ Schema: map[string]*Schema{
+ "ports": {
+ Type: TypeMap,
+ Optional: true,
+ },
+ },
+
+ State: nil,
+
+ Diff: nil,
+
+ Key: "ports",
+ Value: map[string]interface{}{},
+ Ok: false,
+ },
+
+ /*
+ * Set
+ */
+
+ {
+ Name: "set-optional",
+ Schema: map[string]*Schema{
+ "ports": {
+ Type: TypeSet,
+ Optional: true,
+ Elem: &Schema{Type: TypeInt},
+ Set: func(a interface{}) int { return a.(int) },
+ },
+ },
+
+ State: nil,
+
+ Diff: nil,
+
+ Key: "ports",
+ Value: []interface{}{},
+ Ok: false,
+ },
+
+ {
+ Name: "set-optional-key",
+ Schema: map[string]*Schema{
+ "ports": {
+ Type: TypeSet,
+ Optional: true,
+ Elem: &Schema{Type: TypeInt},
+ Set: func(a interface{}) int { return a.(int) },
+ },
+ },
+
+ State: nil,
+
+ Diff: nil,
+
+ Key: "ports.0",
+ Value: 0,
+ Ok: false,
+ },
+
+ {
+ Name: "bool-literal-empty",
+ Schema: map[string]*Schema{
+ "availability_zone": {
+ Type: TypeBool,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
+ },
+
+ State: nil,
+ Diff: &terraform.InstanceDiff{
+ Attributes: map[string]*terraform.ResourceAttrDiff{
+ "availability_zone": {
+ Old: "",
+ New: "",
+ },
+ },
+ },
+
+ Key: "availability_zone",
+ Value: false,
+ Ok: true,
+ },
+
+ {
+ Name: "bool-literal-set",
+ Schema: map[string]*Schema{
+ "availability_zone": {
+ Type: TypeBool,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
+ },
+
+ State: nil,
+
+ Diff: &terraform.InstanceDiff{
+ Attributes: map[string]*terraform.ResourceAttrDiff{
+ "availability_zone": {
+ New: "true",
+ },
+ },
+ },
+
+ Key: "availability_zone",
+ Value: true,
+ Ok: true,
+ },
+ }
+
+ for i, tc := range cases {
+ t.Run(fmt.Sprintf("%d-%s", i, tc.Name), func(t *testing.T) {
+ d, err := schemaMap(tc.Schema).Data(tc.State, tc.Diff)
+ if err != nil {
+ t.Fatalf("%s err: %s", tc.Name, err)
+ }
+
+ v, ok := d.GetOkExists(tc.Key)
+ if s, ok := v.(*Set); ok {
+ v = s.List()
+ }
+
+ if !reflect.DeepEqual(v, tc.Value) {
+ t.Fatalf("Bad %s: \n%#v", tc.Name, v)
+ }
+ if ok != tc.Ok {
+ t.Fatalf("%s: expected ok: %t, got: %t", tc.Name, tc.Ok, ok)
+ }
+ })
+ }
+}
+
func TestResourceDataTimeout(t *testing.T) {
cases := []struct {
Name string
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/set.go b/vendor/github.com/hashicorp/terraform/helper/schema/set.go
index de05f40e..bb194ee6 100644
--- a/vendor/github.com/hashicorp/terraform/helper/schema/set.go
+++ b/vendor/github.com/hashicorp/terraform/helper/schema/set.go
@@ -153,6 +153,31 @@ func (s *Set) Equal(raw interface{}) bool {
return reflect.DeepEqual(s.m, other.m)
}
+// HashEqual simply compares the keys of this set's top-level map to the keys
+// in the other set's top-level map to see if they are equal. This assumes
+// you have a properly working hash function - use HashResource if in doubt.
+func (s *Set) HashEqual(raw interface{}) bool {
+ other, ok := raw.(*Set)
+ if !ok {
+ return false
+ }
+
+ ks1 := make([]string, 0)
+ ks2 := make([]string, 0)
+
+ for k := range s.m {
+ ks1 = append(ks1, k)
+ }
+ for k := range other.m {
+ ks2 = append(ks2, k)
+ }
+
+ sort.Strings(ks1)
+ sort.Strings(ks2)
+
+ return reflect.DeepEqual(ks1, ks2)
+}
+
func (s *Set) GoString() string {
return fmt.Sprintf("*Set(%#v)", s.m)
}
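
A short usage sketch (the hashesMatch helper is invented, and assumes both sets were built with the same hash function): HashEqual is a cheap pre-check that spots any change that altered an element's hash, without the deep comparison Equal performs:

```
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// hashesMatch compares only the element hashes of two sets.
func hashesMatch(prev, next *schema.Set) bool {
	if prev.HashEqual(next) {
		fmt.Println("element hashes are identical")
		return true
	}
	fmt.Println("at least one element hashed differently")
	return false
}
```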
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/set_test.go b/vendor/github.com/hashicorp/terraform/helper/schema/set_test.go
index 21f29295..edeeb37a 100644
--- a/vendor/github.com/hashicorp/terraform/helper/schema/set_test.go
+++ b/vendor/github.com/hashicorp/terraform/helper/schema/set_test.go
@@ -128,3 +128,90 @@ func TestHashResource_nil(t *testing.T) {
t.Fatalf("Expected 0 when hashing nil, given: %d", idx)
}
}
+
+func TestHashEqual(t *testing.T) {
+ nested := &Resource{
+ Schema: map[string]*Schema{
+ "foo": {
+ Type: TypeString,
+ Optional: true,
+ },
+ },
+ }
+ root := &Resource{
+ Schema: map[string]*Schema{
+ "bar": {
+ Type: TypeString,
+ Optional: true,
+ },
+ "nested": {
+ Type: TypeSet,
+ Optional: true,
+ Elem: nested,
+ },
+ },
+ }
+ n1 := map[string]interface{}{"foo": "bar"}
+ n2 := map[string]interface{}{"foo": "baz"}
+
+ r1 := map[string]interface{}{
+ "bar": "baz",
+ "nested": NewSet(HashResource(nested), []interface{}{n1}),
+ }
+ r2 := map[string]interface{}{
+ "bar": "qux",
+ "nested": NewSet(HashResource(nested), []interface{}{n2}),
+ }
+ r3 := map[string]interface{}{
+ "bar": "baz",
+ "nested": NewSet(HashResource(nested), []interface{}{n2}),
+ }
+ r4 := map[string]interface{}{
+ "bar": "qux",
+ "nested": NewSet(HashResource(nested), []interface{}{n1}),
+ }
+ s1 := NewSet(HashResource(root), []interface{}{r1})
+ s2 := NewSet(HashResource(root), []interface{}{r2})
+ s3 := NewSet(HashResource(root), []interface{}{r3})
+ s4 := NewSet(HashResource(root), []interface{}{r4})
+
+ cases := []struct {
+ name string
+ set *Set
+ compare *Set
+ expected bool
+ }{
+ {
+ name: "equal",
+ set: s1,
+ compare: s1,
+ expected: true,
+ },
+ {
+ name: "not equal",
+ set: s1,
+ compare: s2,
+ expected: false,
+ },
+ {
+ name: "outer equal, should still not be equal",
+ set: s1,
+ compare: s3,
+ expected: false,
+ },
+ {
+ name: "inner equal, should still not be equal",
+ set: s1,
+ compare: s4,
+ expected: false,
+ },
+ }
+ for _, tc := range cases {
+ t.Run(tc.name, func(t *testing.T) {
+ actual := tc.set.HashEqual(tc.compare)
+ if tc.expected != actual {
+ t.Fatalf("expected %t, got %t", tc.expected, actual)
+ }
+ })
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/plugin/client.go b/vendor/github.com/hashicorp/terraform/plugin/client.go
index 3a5cb7af..7e2f4fec 100644
--- a/vendor/github.com/hashicorp/terraform/plugin/client.go
+++ b/vendor/github.com/hashicorp/terraform/plugin/client.go
@@ -1,8 +1,10 @@
package plugin
import (
+ "os"
"os/exec"
+ hclog "github.com/hashicorp/go-hclog"
plugin "github.com/hashicorp/go-plugin"
"github.com/hashicorp/terraform/plugin/discovery"
)
@@ -10,11 +12,18 @@ import (
// ClientConfig returns a configuration object that can be used to instantiate
// a client for the plugin described by the given metadata.
func ClientConfig(m discovery.PluginMeta) *plugin.ClientConfig {
+ logger := hclog.New(&hclog.LoggerOptions{
+ Name: "plugin",
+ Level: hclog.Trace,
+ Output: os.Stderr,
+ })
+
return &plugin.ClientConfig{
Cmd: exec.Command(m.Path),
HandshakeConfig: Handshake,
Managed: true,
Plugins: PluginMap,
+ Logger: logger,
}
}
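
A sketch of how this config is typically consumed (the newClient wrapper is invented): go-plugin's NewClient launches the plugin binary, and with Logger set, the child's output is forwarded through hclog at Trace level rather than discarded:

```
package example

import (
	plugin "github.com/hashicorp/go-plugin"

	tfplugin "github.com/hashicorp/terraform/plugin"
	"github.com/hashicorp/terraform/plugin/discovery"
)

// newClient launches the plugin described by meta using the ClientConfig
// defined above, including its hclog Trace-level logger.
func newClient(meta discovery.PluginMeta) *plugin.Client {
	return plugin.NewClient(tfplugin.ClientConfig(meta))
}
```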
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go
index f5bc4c1c..10f8fce9 100644
--- a/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go
+++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go
@@ -59,7 +59,6 @@ func findPluginPaths(kind string, dirs []string) []string {
fullName := item.Name()
if !strings.HasPrefix(fullName, prefix) {
- log.Printf("[DEBUG] skipping %q, not a %s", fullName, kind)
continue
}
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go
index 241b5cb3..64d2b695 100644
--- a/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go
+++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go
@@ -16,6 +16,7 @@ import (
cleanhttp "github.com/hashicorp/go-cleanhttp"
getter "github.com/hashicorp/go-getter"
multierror "github.com/hashicorp/go-multierror"
+ "github.com/mitchellh/cli"
)
// Releases are located by parsing the html listing from releases.hashicorp.com.
@@ -58,6 +59,8 @@ type ProviderInstaller struct {
// Skip checksum and signature verification
SkipVerify bool
+
+ Ui cli.Ui // Ui for output
}
// Get is part of an implementation of type Installer, and attempts to download
@@ -116,6 +119,7 @@ func (i *ProviderInstaller) Get(provider string, req Constraints) (PluginMeta, e
log.Printf("[DEBUG] fetching provider info for %s version %s", provider, v)
if checkPlugin(url, i.PluginProtocolVersion) {
+ i.Ui.Info(fmt.Sprintf("- Downloading plugin for provider %q (%s)...", provider, v.String()))
log.Printf("[DEBUG] getting provider %q version %q at %s", provider, v, url)
err := getter.Get(i.Dir, url)
if err != nil {
@@ -422,3 +426,7 @@ func getFile(url string) ([]byte, error) {
}
return data, nil
}
+
+func GetReleaseHost() string {
+ return releaseHost
+}
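
Since ProviderInstaller gained a Ui field, callers now populate it at construction time, as the updated tests do. A sketch (the protocol version here is an illustrative value, not taken from this diff):

```
package example

import (
	"github.com/hashicorp/terraform/plugin/discovery"
	"github.com/mitchellh/cli"
)

// newInstaller constructs a ProviderInstaller with the new Ui field set.
func newInstaller(dir string) *discovery.ProviderInstaller {
	return &discovery.ProviderInstaller{
		Dir:                   dir,
		PluginProtocolVersion: 4, // illustrative, not from this diff
		SkipVerify:            true,
		Ui:                    cli.NewMockUi(),
	}
}
```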
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/get_test.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/get_test.go
index 16ba697c..65b2497a 100644
--- a/vendor/github.com/hashicorp/terraform/plugin/discovery/get_test.go
+++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/get_test.go
@@ -13,6 +13,8 @@ import (
"regexp"
"strings"
"testing"
+
+ "github.com/mitchellh/cli"
)
const testProviderFile = "test provider binary"
@@ -149,6 +151,7 @@ func TestProviderInstallerGet(t *testing.T) {
Dir: tmpDir,
PluginProtocolVersion: 5,
SkipVerify: true,
+ Ui: cli.NewMockUi(),
}
_, err = i.Get("test", AllVersions)
if err != ErrorNoVersionCompatible {
@@ -159,6 +162,7 @@ func TestProviderInstallerGet(t *testing.T) {
Dir: tmpDir,
PluginProtocolVersion: 3,
SkipVerify: true,
+ Ui: cli.NewMockUi(),
}
{
@@ -230,6 +234,7 @@ func TestProviderInstallerPurgeUnused(t *testing.T) {
Dir: tmpDir,
PluginProtocolVersion: 3,
SkipVerify: true,
+ Ui: cli.NewMockUi(),
}
purged, err := i.PurgeUnused(map[string]PluginMeta{
"test": PluginMeta{
diff --git a/vendor/github.com/hashicorp/terraform/plugins.go b/vendor/github.com/hashicorp/terraform/plugins.go
index bf239780..cf2d5425 100644
--- a/vendor/github.com/hashicorp/terraform/plugins.go
+++ b/vendor/github.com/hashicorp/terraform/plugins.go
@@ -1,8 +1,10 @@
package main
import (
+ "fmt"
"log"
"path/filepath"
+ "runtime"
)
// globalPluginDirs returns directories that should be searched for
@@ -18,7 +20,9 @@ func globalPluginDirs() []string {
if err != nil {
log.Printf("[ERROR] Error finding global config directory: %s", err)
} else {
+ machineDir := fmt.Sprintf("%s_%s", runtime.GOOS, runtime.GOARCH)
ret = append(ret, filepath.Join(dir, "plugins"))
+ ret = append(ret, filepath.Join(dir, "plugins", machineDir))
}
return ret
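
To make the new search path concrete, a small standalone sketch; the ~/.terraform.d location is an assumption for illustration (the real directory comes from a platform-specific helper):

```
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

func main() {
	dir := filepath.Join("~", ".terraform.d") // assumed config dir
	machineDir := fmt.Sprintf("%s_%s", runtime.GOOS, runtime.GOARCH)
	// init now searches both the generic plugins dir and a
	// machine-specific subdirectory such as plugins/linux_amd64.
	fmt.Println(filepath.Join(dir, "plugins"))
	fmt.Println(filepath.Join(dir, "plugins", machineDir))
}
```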
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/Dockerfile-release b/vendor/github.com/hashicorp/terraform/scripts/docker-release/Dockerfile-release
index f1600df7..4545d0a9 100644
--- a/vendor/github.com/hashicorp/terraform/scripts/docker-release/Dockerfile-release
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/Dockerfile-release
@@ -34,4 +34,6 @@ RUN echo Building image for Terraform ${TERRAFORM_VERSION} && \
unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /bin && \
rm -f terraform_${TERRAFORM_VERSION}_linux_amd64.zip terraform_${TERRAFORM_VERSION}_SHA256SUMS*
+LABEL "com.hashicorp.terraform.version"="${TERRAFORM_VERSION}"
+
ENTRYPOINT ["/bin/terraform"]
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/README.md b/vendor/github.com/hashicorp/terraform/scripts/docker-release/README.md
index 2224aa6d..afcdfe4b 100644
--- a/vendor/github.com/hashicorp/terraform/scripts/docker-release/README.md
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/README.md
@@ -1,37 +1,77 @@
# Terraform Docker Release Build
-This directory contains configuration to drive the Dockerhub automated build
-for Terraform. This is different than the root Dockerfile (which produces
-the "full" image on Dockerhub) because it uses the release archives from
-releases.hashicorp.com. It is therefore not possible to use this configuration
-to build an image for a commit that hasn't been released.
+This directory contains configuration to drive the docker image releases for
+Terraform.
-## How it works
+Two different types of image are produced for each Terraform release:
-Dockerhub runs the `hooks/build` script to trigger the build. That uses
-`git describe` to identify the tag corresponding to the current `HEAD`. If
-the current commit _isn't_ tagged with a version number corresponding to
-a Terraform release already on releases.hashicorp.com, the build will fail.
+* A "light" image that includes just the release binary that should match
+ what's on releases.hashicorp.com.
-## What it produces
+* A "full" image that contains all of the Terraform source code and a binary
+ built from that source.
-This configuration is used to produce the "latest", "light", and "beta"
-tags in Dockerhub, as well as specific version tags.
+The latter can be produced for any arbitrary commit by running `docker build`
+in the root of this repository. The former requires that the release archive
+already be deployed on releases.hashicorp.com.
-* "latest" and "light" are synonyms, and are built from a branch in this
-repository called "stable".
-* "beta" is built from a branch called "beta".
+## Build and Release
-All of these branches should be updated only to _tagged_ commits, and only when
-it is desirable to create a new release image.
+The scripts in this directory are intended for running the steps to build,
+tag, and push the two images for a tagged and released version of Terraform.
+They expect to be run with git `HEAD` pointed at a release tag, whose name
+is used to determine the version to build. The version number indicated
+by the tag that `HEAD` is pointed at will be referred to below as
+the _current version_.
-## The `full` and `master` images image
+* `build.sh` builds both of the images for the current version.
+ This operates on the local docker daemon only, and produces tags that
+ include the current version number.
-This configuration does not produce the "full" image. That is instead produced
-by the `Dockerfile` in the repository root, driven by updates to the "stable"
-branch.
+* `tag.sh` updates the `latest`, `light` and `full` tags to refer to the
+ images for the current version, which must already have been produced by
+ an earlier run of `build.sh`. This operates on the local docker daemon
+ only.
-The "master" tag is updated for _every_ commit to the master branch of
-the Terraform core repository. It is not recommended to use these images for
-any production use, but they can be useful for testing bleeding-edge features
-that are not yet included in a release.
+* `push.sh` pushes the current version tag and the `latest`, `light` and
+ `full` tags up to dockerhub for public consumption. This writes images
+ to dockerhub, and so it requires docker credentials that have access to
+ write into the `hashicorp/terraform` repository.
+
+### Releasing a new "latest" version
+
+In the common case where a release is going to be considered the new latest
+stable version of Terraform, the helper script `release.sh` orchestrates
+all of the necessary steps to release to dockerhub:
+
+```
+$ git checkout v0.10.0
+$ scripts/docker-release/release.sh
+```
+
+Behind the scenes this script is running `build.sh`, `tag.sh` and `push.sh`
+as described above, with some extra confirmation steps to verify the
+correctness of the build.
+
+This script is interactive and so isn't suitable for running in automation.
+For automation, run the individual scripts directly.
+
+### Releasing a beta version or a patch to an earlier minor release
+
+The `release.sh` wrapper is not appropriate in two less common situations:
+
+* The version being released is a beta or other pre-release version, with
+ a version number like `v0.10.0-beta1` or `v0.10.0-rc1`.
+
+* The version being released belongs to a non-current minor release. For
+ example, if the current stable version is `v0.10.1` but the version
+ being released is `v0.9.14`.
+
+In both of these cases, only the specific version tag should be updated,
+which can be done as follows:
+
+```
+$ git checkout v0.11.0-beta1
+$ scripts/docker-release/build.sh
+$ docker push hashicorp/terraform:0.11.0-beta1
+```
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/build.sh b/vendor/github.com/hashicorp/terraform/scripts/docker-release/build.sh
new file mode 100755
index 00000000..8442e8a6
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/build.sh
@@ -0,0 +1,34 @@
+#!/usr/bin/env bash
+
+# This script builds two docker images for the version referred to by the
+# current git HEAD.
+#
+# After running this, run tag.sh if the images that are built should be
+# tagged as the "latest" release.
+
+set -eu
+
+BASE="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+cd "$BASE"
+
+if [ "$#" -eq 0 ]; then
+ # We assume that this is always running while git HEAD is pointed at a release
+ # tag or a branch that is pointed at the same commit as a release tag. If not,
+ # this will fail since we can't build a release image for a commit that hasn't
+ # actually been released.
+ VERSION="$(git describe)"
+else
+ # This mode is here only to support release.sh, which ensures that the given
+ # version matches the current git tag. Running this script manually with
+ # an argument can't guarantee correct behavior since the "full" image
+ # will be built against the current work tree regardless of which version
+ # is selected.
+ VERSION="$1"
+fi
+
+echo "-- Building release docker images for version $VERSION --"
+echo ""
+VERSION_SLUG="${VERSION#v}"
+
+docker build --no-cache "--build-arg=TERRAFORM_VERSION=${VERSION_SLUG}" -t hashicorp/terraform:${VERSION_SLUG} -f "Dockerfile-release" .
+docker build --no-cache -t "hashicorp/terraform:${VERSION_SLUG}-full" ../../
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/hooks/build b/vendor/github.com/hashicorp/terraform/scripts/docker-release/hooks/build
deleted file mode 100755
index faed92fb..00000000
--- a/vendor/github.com/hashicorp/terraform/scripts/docker-release/hooks/build
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/bin/bash
-
-# This script assumes that its working directory is the parent directory,
-# where the Dockerfile-release file is located, since that's how Dockerhub
-# runs hooks.
-
-set -eu
-
-# We assume that this is always running while git HEAD is pointed at a release
-# tag or a branch that is pointed at the same commit as a release tag. If not,
-# this will fail since we can't build a release image for a commit that hasn't
-# actually been released.
-VERSION="$(git describe)"
-
-echo "Building release docker images for version $VERSION"
-VERSION_SLUG="${VERSION#v}"
-
-docker build "--build-arg=TERRAFORM_VERSION=${VERSION_SLUG}" -t ${IMAGE_NAME} -f "Dockerfile-release" .
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/push.sh b/vendor/github.com/hashicorp/terraform/scripts/docker-release/push.sh
new file mode 100755
index 00000000..e65cd61b
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/push.sh
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+
+# This script pushes the docker images for the given version of Terraform,
+# along with the "light", "full", and "latest" tags, up to Docker Hub.
+#
+# You must already be logged in to docker using "docker login" before running
+# this script.
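+#
+# Usage (a sketch; assuming the v0.10.2 images were already built and tagged):
+#
+#   scripts/docker-release/push.sh v0.10.2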
+
+set -eu
+
+VERSION="$1"
+VERSION_SLUG="${VERSION#v}"
+
+echo "-- Pushing tags $VERSION_SLUG, light, full and latest up to dockerhub --"
+echo ""
+
+docker push "hashicorp/terraform:$VERSION_SLUG"
+docker push "hashicorp/terraform:light"
+docker push "hashicorp/terraform:full"
+docker push "hashicorp/terraform:latest"
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/release.sh b/vendor/github.com/hashicorp/terraform/scripts/docker-release/release.sh
new file mode 100755
index 00000000..a297748d
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/release.sh
@@ -0,0 +1,93 @@
+#!/usr/bin/env bash
+
+# This script is an interactive wrapper around the scripts build.sh, tag.sh
+# and push.sh intended for use during official Terraform releases.
+#
+# This script should be used only when git HEAD is pointing at the release tag
+# for what will become the new latest *stable* release, since it will update
+# the "latest", "light", and "full" tags to refer to what was built.
+#
+# To release a specific version without updating the various symbolic tags,
+# use build.sh directly and then manually push the single release tag it
+# creates. This is appropriate both when publishing a beta version and if,
+# for some reason, it's necessary to (re-)publish an older version.
+
+set -eu
+
+BASE="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+cd "$BASE"
+
+# We assume that this is always running while git HEAD is pointed at a release
+# tag or a branch that is pointed at the same commit as a release tag. If not,
+# this will fail since we can't build a release image for a commit that hasn't
+# actually been released.
+VERSION="$(git describe)"
+VERSION_SLUG="${VERSION#v}"
+
+# Verify that the version is already deployed to releases.hashicorp.com.
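+# (curl's --fail flag makes it exit non-zero on an HTTP error status, so a
+# missing SHA256SUMS file sends us into the error branch below.)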
+if curl --output /dev/null --silent --head --fail "https://releases.hashicorp.com/terraform/${VERSION_SLUG}/terraform_${VERSION_SLUG}_SHA256SUMS"; then
+ echo "===== Docker image release for Terraform $VERSION ====="
+ echo ""
+else
+ cat >&2 <<EOT
+
+There is no $VERSION release of Terraform on releases.hashicorp.com.
+
+release.sh can only create docker images for released versions. Use
+"git checkout {version}" to switch to a release tag before running this
+script.
+
+To create an untagged docker image for any arbitrary commit, use 'docker build'
+directly in the root of the Terraform repository.
+
+EOT
+ exit 1
+fi
+
+# Build the two images tagged with the version number
+./build.sh "$VERSION"
+
+# Verify that they were built correctly.
+echo "-- Testing $VERSION Images --"
+echo ""
+
+echo -n "light image version: "
+docker run --rm -e "CHECKPOINT_DISABLE=1" "hashicorp/terraform:${VERSION_SLUG}" version
+echo -n "full image version: "
+docker run --rm -e "CHECKPOINT_DISABLE=1" "hashicorp/terraform:${VERSION_SLUG}-full" version
+
+echo ""
+
+read -p "Did both images produce suitable version output for $VERSION? " -n 1 -r
+echo ""
+if ! [[ $REPLY =~ ^[Yy]$ ]]; then
+ echo >&2 Aborting due to inconsistent version output.
+ exit 1
+fi
+echo ""
+
+# Update the latest, light and full tags to point to the images we just built.
+./tag.sh "$VERSION"
+
+# Last chance to bail out
+echo "-- Prepare to Push --"
+echo ""
+echo "The following Terraform images are available locally:"
+docker images --format "{{.ID}}\t{{.Tag}}" hashicorp/terraform
+echo ""
+read -p "Ready to push the tags $VERSION_SLUG, light, full, and latest up to dockerhub? " -n 1 -r
+echo ""
+if ! [[ $REPLY =~ ^[Yy]$ ]]; then
+ echo >&2 "Aborting because reply wasn't positive."
+ exit 1
+fi
+echo ""
+
+# Actually upload the images
+./push.sh "$VERSION"
+
+echo ""
+echo "-- All done! --"
+echo ""
+echo "Confirm the release at https://hub.docker.com/r/hashicorp/terraform/tags/"
+echo ""
diff --git a/vendor/github.com/hashicorp/terraform/scripts/docker-release/tag.sh b/vendor/github.com/hashicorp/terraform/scripts/docker-release/tag.sh
new file mode 100755
index 00000000..88bd95f7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/scripts/docker-release/tag.sh
@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+
+# This script tags the version number given on the command line as being
+# the "latest" on the local system only.
+#
+# The following tags are updated:
+# - light (from the tag named after the version number)
+# - full (from the tag named after the version number with "-full" appended)
+# - latest (as an alias of light)
+#
+# Before running this, the build.sh script must be run to actually create
+# the images that this script will tag.
+#
+# After tagging, use push.sh to push the images to dockerhub.
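+#
+# Usage (a sketch; assuming build.sh already produced the v0.10.2 images):
+#
+#   scripts/docker-release/tag.sh v0.10.2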
+
+set -eu
+
+VERSION="$1"
+VERSION_SLUG="${VERSION#v}"
+
+echo "-- Updating tags to point to version $VERSION --"
+echo ""
+
+docker tag "hashicorp/terraform:${VERSION_SLUG}" "hashicorp/terraform:light"
+docker tag "hashicorp/terraform:${VERSION_SLUG}" "hashicorp/terraform:latest"
+docker tag "hashicorp/terraform:${VERSION_SLUG}-full" "hashicorp/terraform:full"
diff --git a/vendor/github.com/hashicorp/terraform/state/remote/state.go b/vendor/github.com/hashicorp/terraform/state/remote/state.go
index 8e157101..575e4d18 100644
--- a/vendor/github.com/hashicorp/terraform/state/remote/state.go
+++ b/vendor/github.com/hashicorp/terraform/state/remote/state.go
@@ -2,7 +2,7 @@ package remote
import (
"bytes"
- "fmt"
+ "log"
"sync"
"github.com/hashicorp/terraform/state"
@@ -35,7 +35,10 @@ func (s *State) WriteState(state *terraform.State) error {
defer s.mu.Unlock()
if s.readState != nil && !state.SameLineage(s.readState) {
- return fmt.Errorf("incompatible state lineage; given %s but want %s", state.Lineage, s.readState.Lineage)
+ // We can't return an error here, because we need to be able to overwrite
+ // the state in some cases, like `state push -force` or `workspace new
+ // -state=`.
+ log.Printf("[WARN] incompatible state lineage; given %s but want %s", state.Lineage, s.readState.Lineage)
}
// We create a deep copy of the state here, because the caller also has
diff --git a/vendor/github.com/hashicorp/terraform/terraform/context_input_test.go b/vendor/github.com/hashicorp/terraform/terraform/context_input_test.go
index 928c1147..750db918 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/context_input_test.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/context_input_test.go
@@ -719,3 +719,57 @@ func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) {
t.Fatalf("err: %s", err)
}
}
+
+// In this case, a module variable can't be resolved from a data source until
+// it's refreshed, but it can't be refreshed during Input.
+func TestContext2Input_dataSourceRequiresRefresh(t *testing.T) {
+ input := new(MockUIInput)
+ p := testProvider("null")
+ m := testModule(t, "input-module-data-vars")
+
+ p.ReadDataDiffFn = testDataDiffFn
+
+ state := &State{
+ Modules: []*ModuleState{
+ &ModuleState{
+ Path: rootModulePath,
+ Resources: map[string]*ResourceState{
+ "data.null_data_source.bar": &ResourceState{
+ Type: "null_data_source",
+ Primary: &InstanceState{
+ ID: "-",
+ Attributes: map[string]string{
+ "foo.#": "1",
+ "foo.0": "a",
+ // foo.1 exists in the data source, but needs to be refreshed.
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+
+ ctx := testContext2(t, &ContextOpts{
+ Module: m,
+ ProviderResolver: ResourceProviderResolverFixed(
+ map[string]ResourceProviderFactory{
+ "null": testProviderFuncFixed(p),
+ },
+ ),
+ State: state,
+ UIInput: input,
+ })
+
+ if err := ctx.Input(InputModeStd); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+
+ // ensure that plan works after Refresh
+ if _, err := ctx.Refresh(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+ if _, err := ctx.Plan(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/terraform/context_validate_test.go b/vendor/github.com/hashicorp/terraform/terraform/context_validate_test.go
index 4e9ab910..60cef142 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/context_validate_test.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/context_validate_test.go
@@ -1001,7 +1001,9 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) {
Providers: c.components.ResourceProviders(),
Targets: c.targets,
}).Build(RootModulePath)
-
+ if err != nil {
+ t.Fatalf("error attmepting to Build PlanGraphBuilder: %s", err)
+ }
defer c.acquireRun("validate-test")()
walker, err := c.walk(graph, graph, walkValidate)
if err != nil {
diff --git a/vendor/github.com/hashicorp/terraform/terraform/eval.go b/vendor/github.com/hashicorp/terraform/terraform/eval.go
index 3cb088a2..10d9c228 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/eval.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/eval.go
@@ -49,11 +49,11 @@ func EvalRaw(n EvalNode, ctx EvalContext) (interface{}, error) {
path = strings.Join(ctx.Path(), ".")
}
- log.Printf("[DEBUG] %s: eval: %T", path, n)
+ log.Printf("[TRACE] %s: eval: %T", path, n)
output, err := n.Eval(ctx)
if err != nil {
if _, ok := err.(EvalEarlyExitError); ok {
- log.Printf("[DEBUG] %s: eval: %T, err: %s", path, n, err)
+ log.Printf("[TRACE] %s: eval: %T, err: %s", path, n, err)
} else {
log.Printf("[ERROR] %s: eval: %T, err: %s", path, n, err)
}
diff --git a/vendor/github.com/hashicorp/terraform/terraform/eval_interpolate.go b/vendor/github.com/hashicorp/terraform/terraform/eval_interpolate.go
index 6825ff59..df3bcb98 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/eval_interpolate.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/eval_interpolate.go
@@ -1,6 +1,10 @@
package terraform
-import "github.com/hashicorp/terraform/config"
+import (
+ "log"
+
+ "github.com/hashicorp/terraform/config"
+)
// EvalInterpolate is an EvalNode implementation that takes a raw
// configuration and interpolates it.
@@ -22,3 +26,28 @@ func (n *EvalInterpolate) Eval(ctx EvalContext) (interface{}, error) {
return nil, nil
}
+
+// EvalTryInterpolate is an EvalNode implementation that takes a raw
+// configuration and interpolates it, but on an interpolation error it only
+// logs a warning and stops any further Eval steps.
+// This is used during Input, where a value may not be known before Refresh
+// but we don't want to block Input.
+type EvalTryInterpolate struct {
+ Config *config.RawConfig
+ Resource *Resource
+ Output **ResourceConfig
+}
+
+func (n *EvalTryInterpolate) Eval(ctx EvalContext) (interface{}, error) {
+ rc, err := ctx.Interpolate(n.Config, n.Resource)
+ if err != nil {
+ log.Printf("[WARN] Interpolation %q failed: %s", n.Config.Key, err)
+ return nil, EvalEarlyExitError{}
+ }
+
+ if n.Output != nil {
+ *n.Output = rc
+ }
+
+ return nil, nil
+}
diff --git a/vendor/github.com/hashicorp/terraform/terraform/eval_state.go b/vendor/github.com/hashicorp/terraform/terraform/eval_state.go
index 126a0e63..1f67e3d8 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/eval_state.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/eval_state.go
@@ -1,6 +1,8 @@
package terraform
-import "fmt"
+import (
+ "fmt"
+)
// EvalReadState is an EvalNode implementation that reads the
// primary InstanceState for a specific resource out of the state.
diff --git a/vendor/github.com/hashicorp/terraform/terraform/graph.go b/vendor/github.com/hashicorp/terraform/terraform/graph.go
index 48ce6a33..735ec4ec 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/graph.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/graph.go
@@ -70,7 +70,7 @@ func (g *Graph) walk(walker GraphWalker) error {
// Walk the graph.
var walkFn dag.WalkFunc
walkFn = func(v dag.Vertex) (rerr error) {
- log.Printf("[DEBUG] vertex '%s.%s': walking", path, dag.VertexName(v))
+ log.Printf("[TRACE] vertex '%s.%s': walking", path, dag.VertexName(v))
g.DebugVisitInfo(v, g.debugName)
// If we have a panic wrap GraphWalker and a panic occurs, recover
@@ -118,7 +118,7 @@ func (g *Graph) walk(walker GraphWalker) error {
// Allow the walker to change our tree if needed. Eval,
// then callback with the output.
- log.Printf("[DEBUG] vertex '%s.%s': evaluating", path, dag.VertexName(v))
+ log.Printf("[TRACE] vertex '%s.%s': evaluating", path, dag.VertexName(v))
g.DebugVertexInfo(v, fmt.Sprintf("evaluating %T(%s)", v, path))
@@ -132,7 +132,7 @@ func (g *Graph) walk(walker GraphWalker) error {
// If the node is dynamically expanded, then expand it
if ev, ok := v.(GraphNodeDynamicExpandable); ok {
log.Printf(
- "[DEBUG] vertex '%s.%s': expanding/walking dynamic subgraph",
+ "[TRACE] vertex '%s.%s': expanding/walking dynamic subgraph",
path,
dag.VertexName(v))
@@ -154,7 +154,7 @@ func (g *Graph) walk(walker GraphWalker) error {
// If the node has a subgraph, then walk the subgraph
if sn, ok := v.(GraphNodeSubgraph); ok {
log.Printf(
- "[DEBUG] vertex '%s.%s': walking subgraph",
+ "[TRACE] vertex '%s.%s': walking subgraph",
path,
dag.VertexName(v))
diff --git a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_input.go b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_input.go
index 0df48cdb..10fd8b1e 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_input.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_input.go
@@ -10,6 +10,9 @@ import (
// and is based on the PlanGraphBuilder. The PlanGraphBuilder passed in will be
// modified and should not be used for any other operations.
func InputGraphBuilder(p *PlanGraphBuilder) GraphBuilder {
+ // mark this builder as producing a graph for the Input operation
+ p.Input = true
+
// We're going to customize the concrete functions
p.CustomConcrete = true
diff --git a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go
index 4b29bbb4..9c7e4c1d 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go
@@ -40,6 +40,9 @@ type PlanGraphBuilder struct {
// Validate will do structural validation of the graph.
Validate bool
+ // Input indicates that this builder is being used for an Input operation.
+ Input bool
+
// CustomConcrete can be set to customize the node types created
// for various parts of the plan. This is useful in order to customize
// the plan behavior.
@@ -107,7 +110,10 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer {
),
// Add module variables
- &ModuleVariableTransformer{Module: b.Module},
+ &ModuleVariableTransformer{
+ Module: b.Module,
+ Input: b.Input,
+ },
// Connect so that the references are ready for targeting. We'll
// have to connect again later for providers and so on.
diff --git a/vendor/github.com/hashicorp/terraform/terraform/node_data_refresh_test.go b/vendor/github.com/hashicorp/terraform/terraform/node_data_refresh_test.go
index 6aa3af37..d58739f3 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/node_data_refresh_test.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/node_data_refresh_test.go
@@ -55,6 +55,9 @@ func TestNodeRefreshableDataResourceDynamicExpand_scaleOut(t *testing.T) {
StateState: state,
StateLock: &stateLock,
})
+ if err != nil {
+ t.Fatalf("error on DynamicExpand: %s", err)
+ }
actual := g.StringWithNodeTypes()
expected := `data.aws_instance.foo[0] - *terraform.NodeRefreshableDataResourceInstance
@@ -136,7 +139,9 @@ func TestNodeRefreshableDataResourceDynamicExpand_scaleIn(t *testing.T) {
StateState: state,
StateLock: &stateLock,
})
-
+ if err != nil {
+ t.Fatalf("error on DynamicExpand: %s", err)
+ }
actual := g.StringWithNodeTypes()
expected := `data.aws_instance.foo[0] - *terraform.NodeRefreshableDataResourceInstance
data.aws_instance.foo[1] - *terraform.NodeRefreshableDataResourceInstance
diff --git a/vendor/github.com/hashicorp/terraform/terraform/node_module_variable.go b/vendor/github.com/hashicorp/terraform/terraform/node_module_variable.go
index 13fe8fc3..63b84a9c 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/node_module_variable.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/node_module_variable.go
@@ -15,6 +15,9 @@ type NodeApplyableModuleVariable struct {
Value *config.RawConfig // Value is the value that is set
Module *module.Tree // Antiquated, want to remove
+
+ // Input is set if this graph was created for the Input operation.
+ Input bool
}
func (n *NodeApplyableModuleVariable) Name() string {
@@ -92,12 +95,24 @@ func (n *NodeApplyableModuleVariable) EvalTree() EvalNode {
// within the variables mapping.
var config *ResourceConfig
variables := make(map[string]interface{})
+
+ var interpolate EvalNode
+
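+ // During Input the graph is walked before Refresh, so a module variable
+ // fed from a data source may not be resolvable yet; in that case we
+ // tolerate interpolation failures rather than aborting the operation.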
+ if n.Input {
+ interpolate = &EvalTryInterpolate{
+ Config: n.Value,
+ Output: &config,
+ }
+ } else {
+ interpolate = &EvalInterpolate{
+ Config: n.Value,
+ Output: &config,
+ }
+ }
+
return &EvalSequence{
Nodes: []EvalNode{
- &EvalInterpolate{
- Config: n.Value,
- Output: &config,
- },
+ interpolate,
&EvalVariableBlock{
Config: &config,
diff --git a/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh_test.go b/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh_test.go
index 2c9f6921..b4f77ca6 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh_test.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh_test.go
@@ -58,6 +58,9 @@ func TestNodeRefreshableManagedResourceDynamicExpand_scaleOut(t *testing.T) {
StateState: state,
StateLock: &stateLock,
})
+ if err != nil {
+ t.Fatalf("error attempting DynamicExpand: %s", err)
+ }
actual := g.StringWithNodeTypes()
expected := `aws_instance.foo[0] - *terraform.NodeRefreshableManagedResourceInstance
@@ -139,7 +142,9 @@ func TestNodeRefreshableManagedResourceDynamicExpand_scaleIn(t *testing.T) {
StateState: state,
StateLock: &stateLock,
})
-
+ if err != nil {
+ t.Fatalf("error attempting DynamicExpand: %s", err)
+ }
actual := g.StringWithNodeTypes()
expected := `aws_instance.foo[0] - *terraform.NodeRefreshableManagedResourceInstance
aws_instance.foo[1] - *terraform.NodeRefreshableManagedResourceInstance
diff --git a/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/child/main.tf b/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/child/main.tf
new file mode 100644
index 00000000..aa5d69bd
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/child/main.tf
@@ -0,0 +1,5 @@
+variable "in" {}
+
+output "out" {
+ value = "${var.in}"
+}
diff --git a/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/main.tf b/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/main.tf
new file mode 100644
index 00000000..0a327b10
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/terraform/test-fixtures/input-module-data-vars/main.tf
@@ -0,0 +1,8 @@
+data "null_data_source" "bar" {
+ foo = ["a", "b"]
+}
+
+module "child" {
+ source = "./child"
+ in = "${data.null_data_source.bar.foo[1]}"
+}
diff --git a/vendor/github.com/hashicorp/terraform/terraform/transform_module_variable.go b/vendor/github.com/hashicorp/terraform/terraform/transform_module_variable.go
index 467950bd..dbfd1687 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/transform_module_variable.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/transform_module_variable.go
@@ -17,6 +17,7 @@ type ModuleVariableTransformer struct {
Module *module.Tree
DisablePrune bool // True if pruning unreferenced should be disabled
+ Input bool // True if this is from an Input operation.
}
func (t *ModuleVariableTransformer) Transform(g *Graph) error {
@@ -99,6 +100,7 @@ func (t *ModuleVariableTransformer) transformSingle(g *Graph, parent, m *module.
Config: v,
Value: value,
Module: t.Module,
+ Input: t.Input,
}
if !t.DisablePrune {
diff --git a/vendor/github.com/hashicorp/terraform/terraform/upgrade_state_v1_test.go b/vendor/github.com/hashicorp/terraform/terraform/upgrade_state_v1_test.go
index 405cba94..93e03acc 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/upgrade_state_v1_test.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/upgrade_state_v1_test.go
@@ -81,6 +81,10 @@ func TestReadUpgradeStateV1toV3_emptyState(t *testing.T) {
}
stateV2, err := upgradeStateV1ToV2(orig)
+ if err != nil {
+ t.Fatalf("error attempting upgradeStateV1ToV2: %s", err)
+ }
+
for _, m := range stateV2.Modules {
if m.Resources == nil {
t.Fatal("V1 to V2 upgrade lost module.Resources")
@@ -91,6 +95,9 @@ func TestReadUpgradeStateV1toV3_emptyState(t *testing.T) {
}
stateV3, err := upgradeStateV2ToV3(stateV2)
+ if err != nil {
+ t.Fatalf("error attempting to upgradeStateV2ToV3: %s", err)
+ }
for _, m := range stateV3.Modules {
if m.Resources == nil {
t.Fatal("V2 to V3 upgrade lost module.Resources")
diff --git a/vendor/github.com/hashicorp/terraform/terraform/version.go b/vendor/github.com/hashicorp/terraform/terraform/version.go
index d61b11ea..de9296a2 100644
--- a/vendor/github.com/hashicorp/terraform/terraform/version.go
+++ b/vendor/github.com/hashicorp/terraform/terraform/version.go
@@ -7,12 +7,12 @@ import (
)
// The main version number that is being run at the moment.
-const Version = "0.10.0"
+const Version = "0.10.2"
// A pre-release marker for the version. If this is "" (empty string)
// then it means that it is a final release. Otherwise, this is a pre-release
// such as "dev" (in development), "beta", "rc1", etc.
-var VersionPrerelease = "dev"
+var VersionPrerelease = ""
// SemVersion is an instance of version.Version. This has the secondary
// benefit of verifying during tests and init time that our version is a
diff --git a/vendor/github.com/hashicorp/terraform/tools/terraform-bundle/package.go b/vendor/github.com/hashicorp/terraform/tools/terraform-bundle/package.go
index 2f0f33a8..6c7ee51e 100644
--- a/vendor/github.com/hashicorp/terraform/tools/terraform-bundle/package.go
+++ b/vendor/github.com/hashicorp/terraform/tools/terraform-bundle/package.go
@@ -19,8 +19,6 @@ import (
"github.com/mitchellh/cli"
)
-const releasesBaseURL = "https://releases.hashicorp.com"
-
type PackageCommand struct {
ui cli.Ui
}
@@ -91,18 +89,23 @@ func (c *PackageCommand) Run(args []string) int {
OS: osName,
Arch: archName,
+ Ui: c.ui,
+ }
+
+ if len(config.Providers) > 0 {
+ c.ui.Output(fmt.Sprintf("Checking for available provider plugins on %s...",
+ discovery.GetReleaseHost()))
}
for name, constraints := range config.Providers {
- c.ui.Info(fmt.Sprintf("Fetching provider %q...", name))
for _, constraint := range constraints {
- meta, err := installer.Get(name, constraint.MustParse())
+ c.ui.Output(fmt.Sprintf("- Resolving %q provider (%s)...",
+ name, constraint))
+ _, err := installer.Get(name, constraint.MustParse())
if err != nil {
- c.ui.Error(fmt.Sprintf("Failed to resolve %s provider %s: %s", name, constraint, err))
+ c.ui.Error(fmt.Sprintf("- Failed to resolve %s provider %s: %s", name, constraint, err))
return 1
}
-
- c.ui.Info(fmt.Sprintf("- %q resolved to %s", constraint, meta.Version))
}
}
@@ -183,7 +186,7 @@ func (c *PackageCommand) bundleFilename(version discovery.VersionStr, time time.
func (c *PackageCommand) coreURL(version discovery.VersionStr, osName, archName string) string {
return fmt.Sprintf(
"%s/terraform/%s/terraform_%s_%s_%s.zip",
- releasesBaseURL, version, version, osName, archName,
+ discovery.GetReleaseHost(), version, version, osName, archName,
)
}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/LICENSE
deleted file mode 100644
index 106569e5..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/LICENSE
+++ /dev/null
@@ -1,20 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2013 Armon Dadgar
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of
-this software and associated documentation files (the "Software"), to deal in
-the Software without restriction, including without limitation the rights to
-use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
-the Software, and to permit persons to whom the Software is furnished to do so,
-subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
-FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
-COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
-IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/README.md
deleted file mode 100644
index a7399cdd..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-go-metrics
-==========
-
-This library provides a `metrics` package which can be used to instrument code,
-expose application metrics, and profile runtime performance in a flexible manner.
-
-Current API: [![GoDoc](https://godoc.org/github.com/armon/go-metrics?status.svg)](https://godoc.org/github.com/armon/go-metrics)
-
-Sinks
-=====
-
-The `metrics` package makes use of a `MetricSink` interface to support delivery
-to any type of backend. Currently the following sinks are provided:
-
-* StatsiteSink : Sinks to a [statsite](https://github.com/armon/statsite/) instance (TCP)
-* StatsdSink: Sinks to a [StatsD](https://github.com/etsy/statsd/) / statsite instance (UDP)
-* PrometheusSink: Sinks to a [Prometheus](http://prometheus.io/) metrics endpoint (exposed via HTTP for scrapes)
-* InmemSink : Provides in-memory aggregation, can be used to export stats
-* FanoutSink : Sinks to multiple sinks. Enables writing to multiple statsite instances for example.
-* BlackholeSink : Sinks to nowhere
-
-In addition to the sinks, the `InmemSignal` can be used to catch a signal,
-and dump a formatted output of recent metrics. For example, when a process gets
-a SIGUSR1, it can dump to stderr recent performance metrics for debugging.
-
-Examples
-========
-
-Here is an example of using the package:
-
-```go
-func SlowMethod() {
- // Profiling the runtime of a method
- defer metrics.MeasureSince([]string{"SlowMethod"}, time.Now())
-}
-
-// Configure a statsite sink as the global metrics sink
-sink, _ := metrics.NewStatsiteSink("statsite:8125")
-metrics.NewGlobal(metrics.DefaultConfig("service-name"), sink)
-
-// Emit a Key/Value pair
-metrics.EmitKey([]string{"questions", "meaning of life"}, 42)
-```
-
-Here is an example of setting up a signal handler:
-
-```go
-// Setup the inmem sink and signal handler
-inm := metrics.NewInmemSink(10*time.Second, time.Minute)
-sig := metrics.DefaultInmemSignal(inm)
-metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm)
-
-// Run some code
-inm.SetGauge([]string{"foo"}, 42)
-inm.EmitKey([]string{"bar"}, 30)
-
-inm.IncrCounter([]string{"baz"}, 42)
-inm.IncrCounter([]string{"baz"}, 1)
-inm.IncrCounter([]string{"baz"}, 80)
-
-inm.AddSample([]string{"method", "wow"}, 42)
-inm.AddSample([]string{"method", "wow"}, 100)
-inm.AddSample([]string{"method", "wow"}, 22)
-
-....
-```
-
-When a signal comes in, output like the following will be dumped to stderr:
-
- [2014-01-28 14:57:33.04 -0800 PST][G] 'foo': 42.000
- [2014-01-28 14:57:33.04 -0800 PST][P] 'bar': 30.000
- [2014-01-28 14:57:33.04 -0800 PST][C] 'baz': Count: 3 Min: 1.000 Mean: 41.000 Max: 80.000 Stddev: 39.509
- [2014-01-28 14:57:33.04 -0800 PST][S] 'method.wow': Count: 3 Min: 22.000 Mean: 54.667 Max: 100.000 Stddev: 40.513
-
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_unix.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_unix.go
deleted file mode 100644
index 31098dd5..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_unix.go
+++ /dev/null
@@ -1,12 +0,0 @@
-// +build !windows
-
-package metrics
-
-import (
- "syscall"
-)
-
-const (
- // DefaultSignal is used with DefaultInmemSignal
- DefaultSignal = syscall.SIGUSR1
-)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_windows.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_windows.go
deleted file mode 100644
index 38136af3..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/const_windows.go
+++ /dev/null
@@ -1,13 +0,0 @@
-// +build windows
-
-package metrics
-
-import (
- "syscall"
-)
-
-const (
- // DefaultSignal is used with DefaultInmemSignal
- // Windows has no SIGUSR1, use SIGBREAK
- DefaultSignal = syscall.Signal(21)
-)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem.go
deleted file mode 100644
index 83fb6bba..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem.go
+++ /dev/null
@@ -1,247 +0,0 @@
-package metrics
-
-import (
- "fmt"
- "math"
- "strings"
- "sync"
- "time"
-)
-
-// InmemSink provides a MetricSink that does in-memory aggregation
-// without sending metrics over a network. It can be embedded within
-// an application to provide profiling information.
-type InmemSink struct {
- // How long is each aggregation interval
- interval time.Duration
-
- // Retain controls how many metrics interval we keep
- retain time.Duration
-
- // maxIntervals is the maximum length of intervals.
- // It is retain / interval.
- maxIntervals int
-
- // intervals is a slice of the retained intervals
- intervals []*IntervalMetrics
- intervalLock sync.RWMutex
-
- rateDenom float64
-}
-
-// IntervalMetrics stores the aggregated metrics
-// for a specific interval
-type IntervalMetrics struct {
- sync.RWMutex
-
- // The start time of the interval
- Interval time.Time
-
- // Gauges maps the key to the last set value
- Gauges map[string]float32
-
- // Points maps the string to the list of emitted values
- // from EmitKey
- Points map[string][]float32
-
- // Counters maps the string key to a sum of the counter
- // values
- Counters map[string]*AggregateSample
-
- // Samples maps the key to an AggregateSample,
- // which has the rolled up view of a sample
- Samples map[string]*AggregateSample
-}
-
-// NewIntervalMetrics creates a new IntervalMetrics for a given interval
-func NewIntervalMetrics(intv time.Time) *IntervalMetrics {
- return &IntervalMetrics{
- Interval: intv,
- Gauges: make(map[string]float32),
- Points: make(map[string][]float32),
- Counters: make(map[string]*AggregateSample),
- Samples: make(map[string]*AggregateSample),
- }
-}
-
-// AggregateSample is used to hold aggregate metrics
-// about a sample
-type AggregateSample struct {
- Count int // The count of emitted pairs
- Rate float64 // The count of emitted pairs per time unit (usually 1 second)
- Sum float64 // The sum of values
- SumSq float64 // The sum of squared values
- Min float64 // Minimum value
- Max float64 // Maximum value
- LastUpdated time.Time // When value was last updated
-}
-
-// Computes a Stddev of the values
-func (a *AggregateSample) Stddev() float64 {
- num := (float64(a.Count) * a.SumSq) - math.Pow(a.Sum, 2)
- div := float64(a.Count * (a.Count - 1))
- if div == 0 {
- return 0
- }
- return math.Sqrt(num / div)
-}
-
-// Computes a mean of the values
-func (a *AggregateSample) Mean() float64 {
- if a.Count == 0 {
- return 0
- }
- return a.Sum / float64(a.Count)
-}
-
-// Ingest is used to update a sample
-func (a *AggregateSample) Ingest(v float64, rateDenom float64) {
- a.Count++
- a.Sum += v
- a.SumSq += (v * v)
- if v < a.Min || a.Count == 1 {
- a.Min = v
- }
- if v > a.Max || a.Count == 1 {
- a.Max = v
- }
- a.Rate = float64(a.Count)/rateDenom
- a.LastUpdated = time.Now()
-}
-
-func (a *AggregateSample) String() string {
- if a.Count == 0 {
- return "Count: 0"
- } else if a.Stddev() == 0 {
- return fmt.Sprintf("Count: %d Sum: %0.3f LastUpdated: %s", a.Count, a.Sum, a.LastUpdated)
- } else {
- return fmt.Sprintf("Count: %d Min: %0.3f Mean: %0.3f Max: %0.3f Stddev: %0.3f Sum: %0.3f LastUpdated: %s",
- a.Count, a.Min, a.Mean(), a.Max, a.Stddev(), a.Sum, a.LastUpdated)
- }
-}
-
-// NewInmemSink is used to construct a new in-memory sink.
-// Uses an aggregation interval and maximum retention period.
-func NewInmemSink(interval, retain time.Duration) *InmemSink {
- rateTimeUnit := time.Second
- i := &InmemSink{
- interval: interval,
- retain: retain,
- maxIntervals: int(retain / interval),
- rateDenom: float64(interval.Nanoseconds()) / float64(rateTimeUnit.Nanoseconds()),
- }
- i.intervals = make([]*IntervalMetrics, 0, i.maxIntervals)
- return i
-}
-
-func (i *InmemSink) SetGauge(key []string, val float32) {
- k := i.flattenKey(key)
- intv := i.getInterval()
-
- intv.Lock()
- defer intv.Unlock()
- intv.Gauges[k] = val
-}
-
-func (i *InmemSink) EmitKey(key []string, val float32) {
- k := i.flattenKey(key)
- intv := i.getInterval()
-
- intv.Lock()
- defer intv.Unlock()
- vals := intv.Points[k]
- intv.Points[k] = append(vals, val)
-}
-
-func (i *InmemSink) IncrCounter(key []string, val float32) {
- k := i.flattenKey(key)
- intv := i.getInterval()
-
- intv.Lock()
- defer intv.Unlock()
-
- agg := intv.Counters[k]
- if agg == nil {
- agg = &AggregateSample{}
- intv.Counters[k] = agg
- }
- agg.Ingest(float64(val), i.rateDenom)
-}
-
-func (i *InmemSink) AddSample(key []string, val float32) {
- k := i.flattenKey(key)
- intv := i.getInterval()
-
- intv.Lock()
- defer intv.Unlock()
-
- agg := intv.Samples[k]
- if agg == nil {
- agg = &AggregateSample{}
- intv.Samples[k] = agg
- }
- agg.Ingest(float64(val), i.rateDenom)
-}
-
-// Data is used to retrieve all the aggregated metrics
-// Intervals may be in use, and a read lock should be acquired
-func (i *InmemSink) Data() []*IntervalMetrics {
- // Get the current interval, forces creation
- i.getInterval()
-
- i.intervalLock.RLock()
- defer i.intervalLock.RUnlock()
-
- intervals := make([]*IntervalMetrics, len(i.intervals))
- copy(intervals, i.intervals)
- return intervals
-}
-
-func (i *InmemSink) getExistingInterval(intv time.Time) *IntervalMetrics {
- i.intervalLock.RLock()
- defer i.intervalLock.RUnlock()
-
- n := len(i.intervals)
- if n > 0 && i.intervals[n-1].Interval == intv {
- return i.intervals[n-1]
- }
- return nil
-}
-
-func (i *InmemSink) createInterval(intv time.Time) *IntervalMetrics {
- i.intervalLock.Lock()
- defer i.intervalLock.Unlock()
-
- // Check for an existing interval
- n := len(i.intervals)
- if n > 0 && i.intervals[n-1].Interval == intv {
- return i.intervals[n-1]
- }
-
- // Add the current interval
- current := NewIntervalMetrics(intv)
- i.intervals = append(i.intervals, current)
- n++
-
- // Truncate the intervals if they are too long
- if n >= i.maxIntervals {
- copy(i.intervals[0:], i.intervals[n-i.maxIntervals:])
- i.intervals = i.intervals[:i.maxIntervals]
- }
- return current
-}
-
-// getInterval returns the current interval to write to
-func (i *InmemSink) getInterval() *IntervalMetrics {
- intv := time.Now().Truncate(i.interval)
- if m := i.getExistingInterval(intv); m != nil {
- return m
- }
- return i.createInterval(intv)
-}
-
-// Flattens the key for formatting, removes spaces
-func (i *InmemSink) flattenKey(parts []string) string {
- joined := strings.Join(parts, ".")
- return strings.Replace(joined, " ", "_", -1)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem_signal.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem_signal.go
deleted file mode 100644
index 95d08ee1..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/inmem_signal.go
+++ /dev/null
@@ -1,100 +0,0 @@
-package metrics
-
-import (
- "bytes"
- "fmt"
- "io"
- "os"
- "os/signal"
- "sync"
- "syscall"
-)
-
-// InmemSignal is used to listen for a given signal, and when received,
-// to dump the current metrics from the InmemSink to an io.Writer
-type InmemSignal struct {
- signal syscall.Signal
- inm *InmemSink
- w io.Writer
- sigCh chan os.Signal
-
- stop bool
- stopCh chan struct{}
- stopLock sync.Mutex
-}
-
-// NewInmemSignal creates a new InmemSignal which listens for a given signal,
-// and dumps the current metrics out to a writer
-func NewInmemSignal(inmem *InmemSink, sig syscall.Signal, w io.Writer) *InmemSignal {
- i := &InmemSignal{
- signal: sig,
- inm: inmem,
- w: w,
- sigCh: make(chan os.Signal, 1),
- stopCh: make(chan struct{}),
- }
- signal.Notify(i.sigCh, sig)
- go i.run()
- return i
-}
-
-// DefaultInmemSignal returns a new InmemSignal that responds to SIGUSR1
-// and writes output to stderr. Windows uses SIGBREAK
-func DefaultInmemSignal(inmem *InmemSink) *InmemSignal {
- return NewInmemSignal(inmem, DefaultSignal, os.Stderr)
-}
-
-// Stop is used to stop the InmemSignal from listening
-func (i *InmemSignal) Stop() {
- i.stopLock.Lock()
- defer i.stopLock.Unlock()
-
- if i.stop {
- return
- }
- i.stop = true
- close(i.stopCh)
- signal.Stop(i.sigCh)
-}
-
-// run is a long running routine that handles signals
-func (i *InmemSignal) run() {
- for {
- select {
- case <-i.sigCh:
- i.dumpStats()
- case <-i.stopCh:
- return
- }
- }
-}
-
-// dumpStats is used to dump the data to output writer
-func (i *InmemSignal) dumpStats() {
- buf := bytes.NewBuffer(nil)
-
- data := i.inm.Data()
- // Skip the last period which is still being aggregated
- for i := 0; i < len(data)-1; i++ {
- intv := data[i]
- intv.RLock()
- for name, val := range intv.Gauges {
- fmt.Fprintf(buf, "[%v][G] '%s': %0.3f\n", intv.Interval, name, val)
- }
- for name, vals := range intv.Points {
- for _, val := range vals {
- fmt.Fprintf(buf, "[%v][P] '%s': %0.3f\n", intv.Interval, name, val)
- }
- }
- for name, agg := range intv.Counters {
- fmt.Fprintf(buf, "[%v][C] '%s': %s\n", intv.Interval, name, agg)
- }
- for name, agg := range intv.Samples {
- fmt.Fprintf(buf, "[%v][S] '%s': %s\n", intv.Interval, name, agg)
- }
- intv.RUnlock()
- }
-
- // Write out the bytes
- i.w.Write(buf.Bytes())
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/metrics.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/metrics.go
deleted file mode 100755
index b818e418..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/metrics.go
+++ /dev/null
@@ -1,115 +0,0 @@
-package metrics
-
-import (
- "runtime"
- "time"
-)
-
-func (m *Metrics) SetGauge(key []string, val float32) {
- if m.HostName != "" && m.EnableHostname {
- key = insert(0, m.HostName, key)
- }
- if m.EnableTypePrefix {
- key = insert(0, "gauge", key)
- }
- if m.ServiceName != "" {
- key = insert(0, m.ServiceName, key)
- }
- m.sink.SetGauge(key, val)
-}
-
-func (m *Metrics) EmitKey(key []string, val float32) {
- if m.EnableTypePrefix {
- key = insert(0, "kv", key)
- }
- if m.ServiceName != "" {
- key = insert(0, m.ServiceName, key)
- }
- m.sink.EmitKey(key, val)
-}
-
-func (m *Metrics) IncrCounter(key []string, val float32) {
- if m.EnableTypePrefix {
- key = insert(0, "counter", key)
- }
- if m.ServiceName != "" {
- key = insert(0, m.ServiceName, key)
- }
- m.sink.IncrCounter(key, val)
-}
-
-func (m *Metrics) AddSample(key []string, val float32) {
- if m.EnableTypePrefix {
- key = insert(0, "sample", key)
- }
- if m.ServiceName != "" {
- key = insert(0, m.ServiceName, key)
- }
- m.sink.AddSample(key, val)
-}
-
-func (m *Metrics) MeasureSince(key []string, start time.Time) {
- if m.EnableTypePrefix {
- key = insert(0, "timer", key)
- }
- if m.ServiceName != "" {
- key = insert(0, m.ServiceName, key)
- }
- now := time.Now()
- elapsed := now.Sub(start)
- msec := float32(elapsed.Nanoseconds()) / float32(m.TimerGranularity)
- m.sink.AddSample(key, msec)
-}
-
-// Periodically collects runtime stats to publish
-func (m *Metrics) collectStats() {
- for {
- time.Sleep(m.ProfileInterval)
- m.emitRuntimeStats()
- }
-}
-
-// Emits various runtime statsitics
-func (m *Metrics) emitRuntimeStats() {
- // Export number of Goroutines
- numRoutines := runtime.NumGoroutine()
- m.SetGauge([]string{"runtime", "num_goroutines"}, float32(numRoutines))
-
- // Export memory stats
- var stats runtime.MemStats
- runtime.ReadMemStats(&stats)
- m.SetGauge([]string{"runtime", "alloc_bytes"}, float32(stats.Alloc))
- m.SetGauge([]string{"runtime", "sys_bytes"}, float32(stats.Sys))
- m.SetGauge([]string{"runtime", "malloc_count"}, float32(stats.Mallocs))
- m.SetGauge([]string{"runtime", "free_count"}, float32(stats.Frees))
- m.SetGauge([]string{"runtime", "heap_objects"}, float32(stats.HeapObjects))
- m.SetGauge([]string{"runtime", "total_gc_pause_ns"}, float32(stats.PauseTotalNs))
- m.SetGauge([]string{"runtime", "total_gc_runs"}, float32(stats.NumGC))
-
- // Export info about the last few GC runs
- num := stats.NumGC
-
- // Handle wrap around
- if num < m.lastNumGC {
- m.lastNumGC = 0
- }
-
- // Ensure we don't scan more than 256
- if num-m.lastNumGC >= 256 {
- m.lastNumGC = num - 255
- }
-
- for i := m.lastNumGC; i < num; i++ {
- pause := stats.PauseNs[i%256]
- m.AddSample([]string{"runtime", "gc_pause_ns"}, float32(pause))
- }
- m.lastNumGC = num
-}
-
-// Inserts a string value at an index into the slice
-func insert(i int, v string, s []string) []string {
- s = append(s, "")
- copy(s[i+1:], s[i:])
- s[i] = v
- return s
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/sink.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/sink.go
deleted file mode 100755
index 0c240c2c..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/sink.go
+++ /dev/null
@@ -1,52 +0,0 @@
-package metrics
-
-// The MetricSink interface is used to transmit metrics information
-// to an external system
-type MetricSink interface {
- // A Gauge should retain the last value it is set to
- SetGauge(key []string, val float32)
-
- // Should emit a Key/Value pair for each call
- EmitKey(key []string, val float32)
-
- // Counters should accumulate values
- IncrCounter(key []string, val float32)
-
- // Samples are for timing information, where quantiles are used
- AddSample(key []string, val float32)
-}
-
-// BlackholeSink is used to just blackhole messages
-type BlackholeSink struct{}
-
-func (*BlackholeSink) SetGauge(key []string, val float32) {}
-func (*BlackholeSink) EmitKey(key []string, val float32) {}
-func (*BlackholeSink) IncrCounter(key []string, val float32) {}
-func (*BlackholeSink) AddSample(key []string, val float32) {}
-
-// FanoutSink is used to sink to fanout values to multiple sinks
-type FanoutSink []MetricSink
-
-func (fh FanoutSink) SetGauge(key []string, val float32) {
- for _, s := range fh {
- s.SetGauge(key, val)
- }
-}
-
-func (fh FanoutSink) EmitKey(key []string, val float32) {
- for _, s := range fh {
- s.EmitKey(key, val)
- }
-}
-
-func (fh FanoutSink) IncrCounter(key []string, val float32) {
- for _, s := range fh {
- s.IncrCounter(key, val)
- }
-}
-
-func (fh FanoutSink) AddSample(key []string, val float32) {
- for _, s := range fh {
- s.AddSample(key, val)
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/start.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/start.go
deleted file mode 100755
index 44113f10..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/start.go
+++ /dev/null
@@ -1,95 +0,0 @@
-package metrics
-
-import (
- "os"
- "time"
-)
-
-// Config is used to configure metrics settings
-type Config struct {
- ServiceName string // Prefixed with keys to seperate services
- HostName string // Hostname to use. If not provided and EnableHostname, it will be os.Hostname
- EnableHostname bool // Enable prefixing gauge values with hostname
- EnableRuntimeMetrics bool // Enables profiling of runtime metrics (GC, Goroutines, Memory)
- EnableTypePrefix bool // Prefixes key with a type ("counter", "gauge", "timer")
- TimerGranularity time.Duration // Granularity of timers.
- ProfileInterval time.Duration // Interval to profile runtime metrics
-}
-
-// Metrics represents an instance of a metrics sink that can
-// be used to emit
-type Metrics struct {
- Config
- lastNumGC uint32
- sink MetricSink
-}
-
-// Shared global metrics instance
-var globalMetrics *Metrics
-
-func init() {
- // Initialize to a blackhole sink to avoid errors
- globalMetrics = &Metrics{sink: &BlackholeSink{}}
-}
-
-// DefaultConfig provides a sane default configuration
-func DefaultConfig(serviceName string) *Config {
- c := &Config{
- ServiceName: serviceName, // Use client provided service
- HostName: "",
- EnableHostname: true, // Enable hostname prefix
- EnableRuntimeMetrics: true, // Enable runtime profiling
- EnableTypePrefix: false, // Disable type prefix
- TimerGranularity: time.Millisecond, // Timers are in milliseconds
- ProfileInterval: time.Second, // Poll runtime every second
- }
-
- // Try to get the hostname
- name, _ := os.Hostname()
- c.HostName = name
- return c
-}
-
-// New is used to create a new instance of Metrics
-func New(conf *Config, sink MetricSink) (*Metrics, error) {
- met := &Metrics{}
- met.Config = *conf
- met.sink = sink
-
- // Start the runtime collector
- if conf.EnableRuntimeMetrics {
- go met.collectStats()
- }
- return met, nil
-}
-
-// NewGlobal is the same as New, but it assigns the metrics object to be
-// used globally as well as returning it.
-func NewGlobal(conf *Config, sink MetricSink) (*Metrics, error) {
- metrics, err := New(conf, sink)
- if err == nil {
- globalMetrics = metrics
- }
- return metrics, err
-}
-
-// Proxy all the methods to the globalMetrics instance
-func SetGauge(key []string, val float32) {
- globalMetrics.SetGauge(key, val)
-}
-
-func EmitKey(key []string, val float32) {
- globalMetrics.EmitKey(key, val)
-}
-
-func IncrCounter(key []string, val float32) {
- globalMetrics.IncrCounter(key, val)
-}
-
-func AddSample(key []string, val float32) {
- globalMetrics.AddSample(key, val)
-}
-
-func MeasureSince(key []string, start time.Time) {
- globalMetrics.MeasureSince(key, start)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsd.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsd.go
deleted file mode 100644
index 65a5021a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsd.go
+++ /dev/null
@@ -1,154 +0,0 @@
-package metrics
-
-import (
- "bytes"
- "fmt"
- "log"
- "net"
- "strings"
- "time"
-)
-
-const (
- // statsdMaxLen is the maximum size of a packet
- // to send to statsd
- statsdMaxLen = 1400
-)
-
-// StatsdSink provides a MetricSink that can be used
-// with a statsite or statsd metrics server. It uses
-// only UDP packets, while StatsiteSink uses TCP.
-type StatsdSink struct {
- addr string
- metricQueue chan string
-}
-
-// NewStatsdSink is used to create a new StatsdSink
-func NewStatsdSink(addr string) (*StatsdSink, error) {
- s := &StatsdSink{
- addr: addr,
- metricQueue: make(chan string, 4096),
- }
- go s.flushMetrics()
- return s, nil
-}
-
-// Close is used to stop flushing to statsd
-func (s *StatsdSink) Shutdown() {
- close(s.metricQueue)
-}
-
-func (s *StatsdSink) SetGauge(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
-}
-
-func (s *StatsdSink) EmitKey(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
-}
-
-func (s *StatsdSink) IncrCounter(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
-}
-
-func (s *StatsdSink) AddSample(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
-}
-
-// Flattens the key for formatting, removes spaces
-func (s *StatsdSink) flattenKey(parts []string) string {
- joined := strings.Join(parts, ".")
- return strings.Map(func(r rune) rune {
- switch r {
- case ':':
- fallthrough
- case ' ':
- return '_'
- default:
- return r
- }
- }, joined)
-}
-
-// Does a non-blocking push to the metrics queue
-func (s *StatsdSink) pushMetric(m string) {
- select {
- case s.metricQueue <- m:
- default:
- }
-}
-
-// Flushes metrics
-func (s *StatsdSink) flushMetrics() {
- var sock net.Conn
- var err error
- var wait <-chan time.Time
- ticker := time.NewTicker(flushInterval)
- defer ticker.Stop()
-
-CONNECT:
- // Create a buffer
- buf := bytes.NewBuffer(nil)
-
- // Attempt to connect
- sock, err = net.Dial("udp", s.addr)
- if err != nil {
- log.Printf("[ERR] Error connecting to statsd! Err: %s", err)
- goto WAIT
- }
-
- for {
- select {
- case metric, ok := <-s.metricQueue:
- // Get a metric from the queue
- if !ok {
- goto QUIT
- }
-
- // Check if this would overflow the packet size
- if len(metric)+buf.Len() > statsdMaxLen {
- _, err := sock.Write(buf.Bytes())
- buf.Reset()
- if err != nil {
- log.Printf("[ERR] Error writing to statsd! Err: %s", err)
- goto WAIT
- }
- }
-
- // Append to the buffer
- buf.WriteString(metric)
-
- case <-ticker.C:
- if buf.Len() == 0 {
- continue
- }
-
- _, err := sock.Write(buf.Bytes())
- buf.Reset()
- if err != nil {
- log.Printf("[ERR] Error flushing to statsd! Err: %s", err)
- goto WAIT
- }
- }
- }
-
-WAIT:
- // Wait for a while
- wait = time.After(time.Duration(5) * time.Second)
- for {
- select {
- // Dequeue the messages to avoid backlog
- case _, ok := <-s.metricQueue:
- if !ok {
- goto QUIT
- }
- case <-wait:
- goto CONNECT
- }
- }
-QUIT:
- s.metricQueue = nil
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsite.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsite.go
deleted file mode 100755
index 68730139..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/armon/go-metrics/statsite.go
+++ /dev/null
@@ -1,142 +0,0 @@
-package metrics
-
-import (
- "bufio"
- "fmt"
- "log"
- "net"
- "strings"
- "time"
-)
-
-const (
- // We force flush the statsite metrics after this period of
- // inactivity. Prevents stats from getting stuck in a buffer
- // forever.
- flushInterval = 100 * time.Millisecond
-)
-
-// StatsiteSink provides a MetricSink that can be used with a
-// statsite metrics server
-type StatsiteSink struct {
- addr string
- metricQueue chan string
-}
-
-// NewStatsiteSink is used to create a new StatsiteSink
-func NewStatsiteSink(addr string) (*StatsiteSink, error) {
- s := &StatsiteSink{
- addr: addr,
- metricQueue: make(chan string, 4096),
- }
- go s.flushMetrics()
- return s, nil
-}
-
-// Close is used to stop flushing to statsite
-func (s *StatsiteSink) Shutdown() {
- close(s.metricQueue)
-}
-
-func (s *StatsiteSink) SetGauge(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|g\n", flatKey, val))
-}
-
-func (s *StatsiteSink) EmitKey(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|kv\n", flatKey, val))
-}
-
-func (s *StatsiteSink) IncrCounter(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|c\n", flatKey, val))
-}
-
-func (s *StatsiteSink) AddSample(key []string, val float32) {
- flatKey := s.flattenKey(key)
- s.pushMetric(fmt.Sprintf("%s:%f|ms\n", flatKey, val))
-}
-
-// Flattens the key for formatting, removes spaces
-func (s *StatsiteSink) flattenKey(parts []string) string {
- joined := strings.Join(parts, ".")
- return strings.Map(func(r rune) rune {
- switch r {
- case ':':
- fallthrough
- case ' ':
- return '_'
- default:
- return r
- }
- }, joined)
-}
-
-// Does a non-blocking push to the metrics queue
-func (s *StatsiteSink) pushMetric(m string) {
- select {
- case s.metricQueue <- m:
- default:
- }
-}
-
-// Flushes metrics
-func (s *StatsiteSink) flushMetrics() {
- var sock net.Conn
- var err error
- var wait <-chan time.Time
- var buffered *bufio.Writer
- ticker := time.NewTicker(flushInterval)
- defer ticker.Stop()
-
-CONNECT:
- // Attempt to connect
- sock, err = net.Dial("tcp", s.addr)
- if err != nil {
- log.Printf("[ERR] Error connecting to statsite! Err: %s", err)
- goto WAIT
- }
-
- // Create a buffered writer
- buffered = bufio.NewWriter(sock)
-
- for {
- select {
- case metric, ok := <-s.metricQueue:
- // Get a metric from the queue
- if !ok {
- goto QUIT
- }
-
- // Try to send to statsite
- _, err := buffered.Write([]byte(metric))
- if err != nil {
- log.Printf("[ERR] Error writing to statsite! Err: %s", err)
- goto WAIT
- }
- case <-ticker.C:
- if err := buffered.Flush(); err != nil {
- log.Printf("[ERR] Error flushing to statsite! Err: %s", err)
- goto WAIT
- }
- }
- }
-
-WAIT:
- // Wait for a while
- wait = time.After(time.Duration(5) * time.Second)
- for {
- select {
- // Dequeue the messages to avoid backlog
- case _, ok := <-s.metricQueue:
- if !ok {
- goto QUIT
- }
- case <-wait:
- goto CONNECT
- }
- }
-QUIT:
- s.metricQueue = nil
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any.go
new file mode 100644
index 00000000..b2af97f4
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any.go
@@ -0,0 +1,139 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2016 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package ptypes
+
+// This file implements functions to marshal proto.Message to/from
+// google.protobuf.Any message.
+
+import (
+ "fmt"
+ "reflect"
+ "strings"
+
+ "github.com/golang/protobuf/proto"
+ "github.com/golang/protobuf/ptypes/any"
+)
+
+const googleApis = "type.googleapis.com/"
+
+// AnyMessageName returns the name of the message contained in a google.protobuf.Any message.
+//
+// Note that regular type assertions should be done using the Is
+// function. AnyMessageName is provided for less common use cases like filtering a
+// sequence of Any messages based on a set of allowed message type names.
+func AnyMessageName(any *any.Any) (string, error) {
+ if any == nil {
+ return "", fmt.Errorf("message is nil")
+ }
+ slash := strings.LastIndex(any.TypeUrl, "/")
+ if slash < 0 {
+ return "", fmt.Errorf("message type url %q is invalid", any.TypeUrl)
+ }
+ return any.TypeUrl[slash+1:], nil
+}
+
+// MarshalAny takes the protocol buffer and encodes it into google.protobuf.Any.
+func MarshalAny(pb proto.Message) (*any.Any, error) {
+ value, err := proto.Marshal(pb)
+ if err != nil {
+ return nil, err
+ }
+ return &any.Any{TypeUrl: googleApis + proto.MessageName(pb), Value: value}, nil
+}
+
+// DynamicAny is a value that can be passed to UnmarshalAny to automatically
+// allocate a proto.Message for the type specified in a google.protobuf.Any
+// message. The allocated message is stored in the embedded proto.Message.
+//
+// Example:
+//
+// var x ptypes.DynamicAny
+// if err := ptypes.UnmarshalAny(a, &x); err != nil { ... }
+// fmt.Printf("unmarshaled message: %v", x.Message)
+type DynamicAny struct {
+ proto.Message
+}
+
+// Empty returns a new proto.Message of the type specified in a
+// google.protobuf.Any message. It returns an error if the corresponding
+// message type isn't linked in.
+func Empty(any *any.Any) (proto.Message, error) {
+ aname, err := AnyMessageName(any)
+ if err != nil {
+ return nil, err
+ }
+
+ t := proto.MessageType(aname)
+ if t == nil {
+ return nil, fmt.Errorf("any: message type %q isn't linked in", aname)
+ }
+ return reflect.New(t.Elem()).Interface().(proto.Message), nil
+}
+
+// UnmarshalAny parses the protocol buffer representation in a google.protobuf.Any
+// message and places the decoded result in pb. It returns an error if the
+// type of the contents of the Any message does not match the type of pb.
+//
+// pb can be a proto.Message, or a *DynamicAny.
+func UnmarshalAny(any *any.Any, pb proto.Message) error {
+ if d, ok := pb.(*DynamicAny); ok {
+ if d.Message == nil {
+ var err error
+ d.Message, err = Empty(any)
+ if err != nil {
+ return err
+ }
+ }
+ return UnmarshalAny(any, d.Message)
+ }
+
+ aname, err := AnyMessageName(any)
+ if err != nil {
+ return err
+ }
+
+ mname := proto.MessageName(pb)
+ if aname != mname {
+ return fmt.Errorf("mismatched message type: got %q want %q", aname, mname)
+ }
+ return proto.Unmarshal(any.Value, pb)
+}
+
+// Is returns true if any value contains a given message type.
+func Is(any *any.Any, pb proto.Message) bool {
+ aname, err := AnyMessageName(any)
+ if err != nil {
+ return false
+ }
+
+ return aname == proto.MessageName(pb)
+}
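
A minimal usage sketch for the Any helpers above, assuming the canonical
github.com/golang/protobuf/ptypes import path rather than this vendored copy,
with durpb.Duration standing in for any registered message type:

    package main

    import (
        "fmt"
        "log"

        "github.com/golang/protobuf/ptypes"
        durpb "github.com/golang/protobuf/ptypes/duration"
    )

    func main() {
        // Pack a concrete message into a google.protobuf.Any; the type URL
        // becomes "type.googleapis.com/google.protobuf.Duration".
        a, err := ptypes.MarshalAny(&durpb.Duration{Seconds: 3})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(a.TypeUrl)

        // Unpack into a known type, guarding with Is.
        if ptypes.Is(a, &durpb.Duration{}) {
            var d durpb.Duration
            if err := ptypes.UnmarshalAny(a, &d); err != nil {
                log.Fatal(err)
            }
            fmt.Println(d.Seconds) // 3
        }

        // Or let DynamicAny allocate the type named by the type URL.
        var dyn ptypes.DynamicAny
        if err := ptypes.UnmarshalAny(a, &dyn); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("unpacked a %T\n", dyn.Message) // *duration.Duration
    }

DynamicAny is the convenient path when the concrete type is only known from
the type URL at runtime.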
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.pb.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.pb.go
new file mode 100644
index 00000000..1fbaa44c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.pb.go
@@ -0,0 +1,168 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// source: github.com/golang/protobuf/ptypes/any/any.proto
+
+/*
+Package any is a generated protocol buffer package.
+
+It is generated from these files:
+ github.com/golang/protobuf/ptypes/any/any.proto
+
+It has these top-level messages:
+ Any
+*/
+package any
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+// `Any` contains an arbitrary serialized protocol buffer message along with a
+// URL that describes the type of the serialized message.
+//
+// The protobuf library provides support to pack/unpack Any values in the
+// form of utility functions or additional generated methods of the Any type.
+//
+// Example 1: Pack and unpack a message in C++.
+//
+// Foo foo = ...;
+// Any any;
+// any.PackFrom(foo);
+// ...
+// if (any.UnpackTo(&foo)) {
+// ...
+// }
+//
+// Example 2: Pack and unpack a message in Java.
+//
+// Foo foo = ...;
+// Any any = Any.pack(foo);
+// ...
+// if (any.is(Foo.class)) {
+// foo = any.unpack(Foo.class);
+// }
+//
+// Example 3: Pack and unpack a message in Python.
+//
+// foo = Foo(...)
+// any = Any()
+// any.Pack(foo)
+// ...
+// if any.Is(Foo.DESCRIPTOR):
+// any.Unpack(foo)
+// ...
+//
+// The pack methods provided by protobuf library will by default use
+// 'type.googleapis.com/full.type.name' as the type URL and the unpack
+// methods only use the fully qualified type name after the last '/'
+// in the type URL, for example "foo.bar.com/x/y.z" will yield type
+// name "y.z".
+//
+//
+// JSON
+// ====
+// The JSON representation of an `Any` value uses the regular
+// representation of the deserialized, embedded message, with an
+// additional field `@type` which contains the type URL. Example:
+//
+// package google.profile;
+// message Person {
+// string first_name = 1;
+// string last_name = 2;
+// }
+//
+// {
+// "@type": "type.googleapis.com/google.profile.Person",
+// "firstName": <string>,
+// "lastName": <string>
+// }
+//
+// If the embedded message type is well-known and has a custom JSON
+// representation, that representation will be embedded, adding a field
+// `value` which holds the custom JSON in addition to the `@type`
+// field. Example (for message [google.protobuf.Duration][]):
+//
+// {
+// "@type": "type.googleapis.com/google.protobuf.Duration",
+// "value": "1.212s"
+// }
+//
+type Any struct {
+ // A URL/resource name whose content describes the type of the
+ // serialized protocol buffer message.
+ //
+ // For URLs which use the scheme `http`, `https`, or no scheme, the
+ // following restrictions and interpretations apply:
+ //
+ // * If no scheme is provided, `https` is assumed.
+ // * The last segment of the URL's path must represent the fully
+ // qualified name of the type (as in `path/google.protobuf.Duration`).
+ // The name should be in a canonical form (e.g., leading "." is
+ // not accepted).
+ // * An HTTP GET on the URL must yield a [google.protobuf.Type][]
+ // value in binary format, or produce an error.
+ // * Applications are allowed to cache lookup results based on the
+ // URL, or have them precompiled into a binary to avoid any
+ // lookup. Therefore, binary compatibility needs to be preserved
+ // on changes to types. (Use versioned type names to manage
+ // breaking changes.)
+ //
+ // Schemes other than `http`, `https` (or the empty scheme) might be
+ // used with implementation specific semantics.
+ //
+ TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl" json:"type_url,omitempty"`
+ // Must be a valid serialized protocol buffer of the above specified type.
+ Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+}
+
+func (m *Any) Reset() { *m = Any{} }
+func (m *Any) String() string { return proto.CompactTextString(m) }
+func (*Any) ProtoMessage() {}
+func (*Any) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+func (*Any) XXX_WellKnownType() string { return "Any" }
+
+func (m *Any) GetTypeUrl() string {
+ if m != nil {
+ return m.TypeUrl
+ }
+ return ""
+}
+
+func (m *Any) GetValue() []byte {
+ if m != nil {
+ return m.Value
+ }
+ return nil
+}
+
+func init() {
+ proto.RegisterType((*Any)(nil), "google.protobuf.Any")
+}
+
+func init() { proto.RegisterFile("github.com/golang/protobuf/ptypes/any/any.proto", fileDescriptor0) }
+
+var fileDescriptor0 = []byte{
+ // 184 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xd2, 0x4f, 0xcf, 0x2c, 0xc9,
+ 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
+ 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0xcc,
+ 0xab, 0x04, 0x61, 0x3d, 0xb0, 0xb8, 0x10, 0x7f, 0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x1e, 0x4c,
+ 0x95, 0x92, 0x19, 0x17, 0xb3, 0x63, 0x5e, 0xa5, 0x90, 0x24, 0x17, 0x07, 0x48, 0x79, 0x7c, 0x69,
+ 0x51, 0x8e, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10, 0x3b, 0x88, 0x1f, 0x5a, 0x94, 0x23, 0x24,
+ 0xc2, 0xc5, 0x5a, 0x96, 0x98, 0x53, 0x9a, 0x2a, 0xc1, 0xa4, 0xc0, 0xa8, 0xc1, 0x13, 0x04, 0xe1,
+ 0x38, 0xe5, 0x73, 0x09, 0x27, 0xe7, 0xe7, 0xea, 0xa1, 0x19, 0xe7, 0xc4, 0xe1, 0x98, 0x57, 0x19,
+ 0x00, 0xe2, 0x04, 0x30, 0x46, 0xa9, 0x12, 0xe5, 0xb8, 0x45, 0x4c, 0xcc, 0xee, 0x01, 0x4e, 0xab,
+ 0x98, 0xe4, 0xdc, 0x21, 0x46, 0x05, 0x40, 0x95, 0xe8, 0x85, 0xa7, 0xe6, 0xe4, 0x78, 0xe7, 0xe5,
+ 0x97, 0xe7, 0x85, 0x80, 0x94, 0x26, 0xb1, 0x81, 0xf5, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff,
+ 0x45, 0x1f, 0x1a, 0xf2, 0xf3, 0x00, 0x00, 0x00,
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.proto b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.proto
new file mode 100644
index 00000000..9bd3f50a
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/any/any.proto
@@ -0,0 +1,139 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option go_package = "github.com/golang/protobuf/ptypes/any";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "AnyProto";
+option java_multiple_files = true;
+option objc_class_prefix = "GPB";
+
+// `Any` contains an arbitrary serialized protocol buffer message along with a
+// URL that describes the type of the serialized message.
+//
+// The protobuf library provides support to pack/unpack Any values in the
+// form of utility functions or additional generated methods of the Any type.
+//
+// Example 1: Pack and unpack a message in C++.
+//
+// Foo foo = ...;
+// Any any;
+// any.PackFrom(foo);
+// ...
+// if (any.UnpackTo(&foo)) {
+// ...
+// }
+//
+// Example 2: Pack and unpack a message in Java.
+//
+// Foo foo = ...;
+// Any any = Any.pack(foo);
+// ...
+// if (any.is(Foo.class)) {
+// foo = any.unpack(Foo.class);
+// }
+//
+// Example 3: Pack and unpack a message in Python.
+//
+// foo = Foo(...)
+// any = Any()
+// any.Pack(foo)
+// ...
+// if any.Is(Foo.DESCRIPTOR):
+// any.Unpack(foo)
+// ...
+//
+// The pack methods provided by protobuf library will by default use
+// 'type.googleapis.com/full.type.name' as the type URL and the unpack
+// methods only use the fully qualified type name after the last '/'
+// in the type URL, for example "foo.bar.com/x/y.z" will yield type
+// name "y.z".
+//
+//
+// JSON
+// ====
+// The JSON representation of an `Any` value uses the regular
+// representation of the deserialized, embedded message, with an
+// additional field `@type` which contains the type URL. Example:
+//
+// package google.profile;
+// message Person {
+// string first_name = 1;
+// string last_name = 2;
+// }
+//
+// {
+// "@type": "type.googleapis.com/google.profile.Person",
+// "firstName": <string>,
+// "lastName": <string>
+// }
+//
+// If the embedded message type is well-known and has a custom JSON
+// representation, that representation will be embedded, adding a field
+// `value` which holds the custom JSON in addition to the `@type`
+// field. Example (for message [google.protobuf.Duration][]):
+//
+// {
+// "@type": "type.googleapis.com/google.protobuf.Duration",
+// "value": "1.212s"
+// }
+//
+message Any {
+ // A URL/resource name whose content describes the type of the
+ // serialized protocol buffer message.
+ //
+ // For URLs which use the scheme `http`, `https`, or no scheme, the
+ // following restrictions and interpretations apply:
+ //
+ // * If no scheme is provided, `https` is assumed.
+ // * The last segment of the URL's path must represent the fully
+ // qualified name of the type (as in `path/google.protobuf.Duration`).
+ // The name should be in a canonical form (e.g., leading "." is
+ // not accepted).
+ // * An HTTP GET on the URL must yield a [google.protobuf.Type][]
+ // value in binary format, or produce an error.
+ // * Applications are allowed to cache lookup results based on the
+ // URL, or have them precompiled into a binary to avoid any
+ // lookup. Therefore, binary compatibility needs to be preserved
+ // on changes to types. (Use versioned type names to manage
+ // breaking changes.)
+ //
+ // Schemes other than `http`, `https` (or the empty scheme) might be
+ // used with implementation specific semantics.
+ //
+ string type_url = 1;
+
+ // Must be a valid serialized protocol buffer of the above specified type.
+ bytes value = 2;
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/doc.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/doc.go
new file mode 100644
index 00000000..c0d595da
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/doc.go
@@ -0,0 +1,35 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2016 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+/*
+Package ptypes contains code for interacting with well-known types.
+*/
+package ptypes
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration.go
new file mode 100644
index 00000000..65cb0f8e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration.go
@@ -0,0 +1,102 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2016 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package ptypes
+
+// This file implements conversions between google.protobuf.Duration
+// and time.Duration.
+
+import (
+ "errors"
+ "fmt"
+ "time"
+
+ durpb "github.com/golang/protobuf/ptypes/duration"
+)
+
+const (
+ // Range of a durpb.Duration in seconds, as specified in
+ // google/protobuf/duration.proto. This is about 10,000 years in seconds.
+ maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60)
+ minSeconds = -maxSeconds
+)
+
+// validateDuration determines whether the durpb.Duration is valid according to the
+// definition in google/protobuf/duration.proto. A valid durpb.Duration
+// may still be too large to fit into a time.Duration (the range of durpb.Duration
+// is about 10,000 years, and the range of time.Duration is about 290 years).
+func validateDuration(d *durpb.Duration) error {
+ if d == nil {
+ return errors.New("duration: nil Duration")
+ }
+ if d.Seconds < minSeconds || d.Seconds > maxSeconds {
+ return fmt.Errorf("duration: %v: seconds out of range", d)
+ }
+ if d.Nanos <= -1e9 || d.Nanos >= 1e9 {
+ return fmt.Errorf("duration: %v: nanos out of range", d)
+ }
+ // Seconds and Nanos must have the same sign, unless d.Nanos is zero.
+ if (d.Seconds < 0 && d.Nanos > 0) || (d.Seconds > 0 && d.Nanos < 0) {
+ return fmt.Errorf("duration: %v: seconds and nanos have different signs", d)
+ }
+ return nil
+}
+
+// Duration converts a durpb.Duration to a time.Duration. Duration
+// returns an error if the durpb.Duration is invalid or is too large to be
+// represented in a time.Duration.
+func Duration(p *durpb.Duration) (time.Duration, error) {
+ if err := validateDuration(p); err != nil {
+ return 0, err
+ }
+ d := time.Duration(p.Seconds) * time.Second
+ if int64(d/time.Second) != p.Seconds {
+ return 0, fmt.Errorf("duration: %v is out of range for time.Duration", p)
+ }
+ if p.Nanos != 0 {
+ d += time.Duration(p.Nanos)
+ if (d < 0) != (p.Nanos < 0) {
+ return 0, fmt.Errorf("duration: %v is out of range for time.Duration", p)
+ }
+ }
+ return d, nil
+}
+
+// DurationProto converts a time.Duration to a durpb.Duration.
+func DurationProto(d time.Duration) *durpb.Duration {
+ nanos := d.Nanoseconds()
+ secs := nanos / 1e9
+ nanos -= secs * 1e9
+ return &durpb.Duration{
+ Seconds: secs,
+ Nanos: int32(nanos),
+ }
+}
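
A quick round-trip sketch for the two conversion functions above (assuming the
canonical github.com/golang/protobuf/ptypes import path rather than this
vendored copy):

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/golang/protobuf/ptypes"
    )

    func main() {
        // time.Duration -> durpb.Duration: the total is split into whole
        // seconds and the remaining nanoseconds.
        p := ptypes.DurationProto(90*time.Second + 500*time.Millisecond)
        fmt.Println(p.Seconds, p.Nanos) // 90 500000000

        // durpb.Duration -> time.Duration: errors only if p is invalid or
        // falls outside the roughly 290-year range of time.Duration.
        d, err := ptypes.Duration(p)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(d) // 1m30.5s
    }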
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.pb.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.pb.go
new file mode 100644
index 00000000..fe3350be
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.pb.go
@@ -0,0 +1,146 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// source: github.com/golang/protobuf/ptypes/duration/duration.proto
+
+/*
+Package duration is a generated protocol buffer package.
+
+It is generated from these files:
+ github.com/golang/protobuf/ptypes/duration/duration.proto
+
+It has these top-level messages:
+ Duration
+*/
+package duration
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+// A Duration represents a signed, fixed-length span of time, expressed
+// as a count of seconds and fractions of seconds at nanosecond
+// resolution. It is independent of any calendar and concepts like "day"
+// or "month". It is related to Timestamp in that the difference between
+// two Timestamp values is a Duration and it can be added or subtracted
+// from a Timestamp. Range is approximately +-10,000 years.
+//
+// # Examples
+//
+// Example 1: Compute Duration from two Timestamps in pseudo code.
+//
+// Timestamp start = ...;
+// Timestamp end = ...;
+// Duration duration = ...;
+//
+// duration.seconds = end.seconds - start.seconds;
+// duration.nanos = end.nanos - start.nanos;
+//
+// if (duration.seconds < 0 && duration.nanos > 0) {
+// duration.seconds += 1;
+// duration.nanos -= 1000000000;
+// } else if (duration.seconds > 0 && duration.nanos < 0) {
+// duration.seconds -= 1;
+// duration.nanos += 1000000000;
+// }
+//
+// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
+//
+// Timestamp start = ...;
+// Duration duration = ...;
+// Timestamp end = ...;
+//
+// end.seconds = start.seconds + duration.seconds;
+// end.nanos = start.nanos + duration.nanos;
+//
+// if (end.nanos < 0) {
+// end.seconds -= 1;
+// end.nanos += 1000000000;
+// } else if (end.nanos >= 1000000000) {
+// end.seconds += 1;
+// end.nanos -= 1000000000;
+// }
+//
+// Example 3: Compute Duration from datetime.timedelta in Python.
+//
+// td = datetime.timedelta(days=3, minutes=10)
+// duration = Duration()
+// duration.FromTimedelta(td)
+//
+// # JSON Mapping
+//
+// In JSON format, the Duration type is encoded as a string rather than an
+// object, where the string ends in the suffix "s" (indicating seconds) and
+// is preceded by the number of seconds, with nanoseconds expressed as
+// fractional seconds. For example, 3 seconds with 0 nanoseconds should be
+// encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should
+// be expressed in JSON format as "3.000000001s", and 3 seconds and 1
+// microsecond should be expressed in JSON format as "3.000001s".
+//
+//
+type Duration struct {
+ // Signed seconds of the span of time. Must be from -315,576,000,000
+ // to +315,576,000,000 inclusive. Note: these bounds are computed from:
+ // 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
+ Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
+ // Signed fractions of a second at nanosecond resolution of the span
+ // of time. Durations less than one second are represented with a 0
+ // `seconds` field and a positive or negative `nanos` field. For durations
+ // of one second or more, a non-zero value for the `nanos` field must be
+ // of the same sign as the `seconds` field. Must be from -999,999,999
+ // to +999,999,999 inclusive.
+ Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
+}
+
+func (m *Duration) Reset() { *m = Duration{} }
+func (m *Duration) String() string { return proto.CompactTextString(m) }
+func (*Duration) ProtoMessage() {}
+func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+func (*Duration) XXX_WellKnownType() string { return "Duration" }
+
+func (m *Duration) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Duration) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
+func init() {
+ proto.RegisterType((*Duration)(nil), "google.protobuf.Duration")
+}
+
+func init() {
+ proto.RegisterFile("github.com/golang/protobuf/ptypes/duration/duration.proto", fileDescriptor0)
+}
+
+var fileDescriptor0 = []byte{
+ // 189 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
+ 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
+ 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0x29,
+ 0x2d, 0x4a, 0x2c, 0xc9, 0xcc, 0xcf, 0x83, 0x33, 0xf4, 0xc0, 0x2a, 0x84, 0xf8, 0xd3, 0xf3, 0xf3,
+ 0xd3, 0x73, 0x52, 0xf5, 0x60, 0xea, 0x95, 0xac, 0xb8, 0x38, 0x5c, 0xa0, 0x4a, 0x84, 0x24, 0xb8,
+ 0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83, 0x60,
+ 0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d, 0xd6,
+ 0x20, 0x08, 0xc7, 0xa9, 0x86, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x48, 0x27, 0x5e, 0x98,
+ 0x81, 0x01, 0x20, 0x91, 0x00, 0xc6, 0x28, 0x2d, 0xe2, 0xdd, 0xfb, 0x83, 0x91, 0x71, 0x11, 0x13,
+ 0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xb9, 0x01, 0x50, 0xa5, 0x7a, 0xe1, 0xa9,
+ 0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x2d, 0x49, 0x6c, 0x60, 0x33, 0x8c, 0x01,
+ 0x01, 0x00, 0x00, 0xff, 0xff, 0x45, 0x5a, 0x81, 0x3d, 0x0e, 0x01, 0x00, 0x00,
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.proto b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.proto
new file mode 100644
index 00000000..975fce41
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/duration/duration.proto
@@ -0,0 +1,117 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option cc_enable_arenas = true;
+option go_package = "github.com/golang/protobuf/ptypes/duration";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "DurationProto";
+option java_multiple_files = true;
+option objc_class_prefix = "GPB";
+
+// A Duration represents a signed, fixed-length span of time, expressed
+// as a count of seconds and fractions of seconds at nanosecond
+// resolution. It is independent of any calendar and concepts like "day"
+// or "month". It is related to Timestamp in that the difference between
+// two Timestamp values is a Duration and it can be added or subtracted
+// from a Timestamp. Range is approximately +-10,000 years.
+//
+// # Examples
+//
+// Example 1: Compute Duration from two Timestamps in pseudo code.
+//
+// Timestamp start = ...;
+// Timestamp end = ...;
+// Duration duration = ...;
+//
+// duration.seconds = end.seconds - start.seconds;
+// duration.nanos = end.nanos - start.nanos;
+//
+// if (duration.seconds < 0 && duration.nanos > 0) {
+// duration.seconds += 1;
+// duration.nanos -= 1000000000;
+// } else if (duration.seconds > 0 && duration.nanos < 0) {
+// duration.seconds -= 1;
+// duration.nanos += 1000000000;
+// }
+//
+// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
+//
+// Timestamp start = ...;
+// Duration duration = ...;
+// Timestamp end = ...;
+//
+// end.seconds = start.seconds + duration.seconds;
+// end.nanos = start.nanos + duration.nanos;
+//
+// if (end.nanos < 0) {
+// end.seconds -= 1;
+// end.nanos += 1000000000;
+// } else if (end.nanos >= 1000000000) {
+// end.seconds += 1;
+// end.nanos -= 1000000000;
+// }
+//
+// Example 3: Compute Duration from datetime.timedelta in Python.
+//
+// td = datetime.timedelta(days=3, minutes=10)
+// duration = Duration()
+// duration.FromTimedelta(td)
+//
+// # JSON Mapping
+//
+// In JSON format, the Duration type is encoded as a string rather than an
+// object, where the string ends in the suffix "s" (indicating seconds) and
+// is preceded by the number of seconds, with nanoseconds expressed as
+// fractional seconds. For example, 3 seconds with 0 nanoseconds should be
+// encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should
+// be expressed in JSON format as "3.000000001s", and 3 seconds and 1
+// microsecond should be expressed in JSON format as "3.000001s".
+//
+//
+message Duration {
+
+ // Signed seconds of the span of time. Must be from -315,576,000,000
+ // to +315,576,000,000 inclusive. Note: these bounds are computed from:
+ // 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
+ int64 seconds = 1;
+
+ // Signed fractions of a second at nanosecond resolution of the span
+ // of time. Durations less than one second are represented with a 0
+ // `seconds` field and a positive or negative `nanos` field. For durations
+ // of one second or more, a non-zero value for the `nanos` field must be
+ // of the same sign as the `seconds` field. Must be from -999,999,999
+ // to +999,999,999 inclusive.
+ int32 nanos = 2;
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/regen.sh b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/regen.sh
new file mode 100755
index 00000000..2a5b4e8b
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/regen.sh
@@ -0,0 +1,66 @@
+#!/bin/bash -e
+#
+# This script fetches and rebuilds the "well-known types" protocol buffers.
+# To run this you will need protoc and goprotobuf installed;
+# see https://github.com/golang/protobuf for instructions.
+# You also need Go and Git installed.
+
+PKG=github.com/golang/protobuf/ptypes
+UPSTREAM=https://github.com/google/protobuf
+UPSTREAM_SUBDIR=src/google/protobuf
+PROTO_FILES='
+ any.proto
+ duration.proto
+ empty.proto
+ struct.proto
+ timestamp.proto
+ wrappers.proto
+'
+
+function die() {
+ echo 1>&2 $*
+ exit 1
+}
+
+# Sanity check that the right tools are accessible.
+for tool in go git protoc protoc-gen-go; do
+ q=$(which $tool) || die "didn't find $tool"
+ echo 1>&2 "$tool: $q"
+done
+
+tmpdir=$(mktemp -d -t regen-wkt.XXXXXX)
+trap 'rm -rf $tmpdir' EXIT
+
+echo -n 1>&2 "finding package dir... "
+pkgdir=$(go list -f '{{.Dir}}' $PKG)
+echo 1>&2 $pkgdir
+base=$(echo $pkgdir | sed "s,/$PKG\$,,")
+echo 1>&2 "base: $base"
+cd $base
+
+echo 1>&2 "fetching latest protos... "
+git clone -q $UPSTREAM $tmpdir
+# Pass 1: build mapping from upstream filename to our filename.
+declare -A filename_map
+for f in $(cd $PKG && find * -name '*.proto'); do
+ echo -n 1>&2 "looking for latest version of $f... "
+ up=$(cd $tmpdir/$UPSTREAM_SUBDIR && find * -name $(basename $f) | grep -v /testdata/)
+ echo 1>&2 $up
+ if [ $(echo $up | wc -w) != "1" ]; then
+ die "not exactly one match"
+ fi
+ filename_map[$up]=$f
+done
+# Pass 2: copy files
+for up in "${!filename_map[@]}"; do
+ f=${filename_map[$up]}
+ shortname=$(basename $f | sed 's,\.proto$,,')
+ cp $tmpdir/$UPSTREAM_SUBDIR/$up $PKG/$f
+done
+
+# Run protoc once per package.
+for dir in $(find $PKG -name '*.proto' | xargs dirname | sort | uniq); do
+ echo 1>&2 "* $dir"
+ protoc --go_out=. $dir/*.proto
+done
+echo 1>&2 "All OK"
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp.go
new file mode 100644
index 00000000..47f10dbc
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp.go
@@ -0,0 +1,134 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2016 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package ptypes
+
+// This file implements operations on google.protobuf.Timestamp.
+
+import (
+ "errors"
+ "fmt"
+ "time"
+
+ tspb "github.com/golang/protobuf/ptypes/timestamp"
+)
+
+const (
+ // Seconds field of the earliest valid Timestamp.
+ // This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
+ minValidSeconds = -62135596800
+ // Seconds field just after the latest valid Timestamp.
+ // This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
+ maxValidSeconds = 253402300800
+)
+
+// validateTimestamp determines whether a Timestamp is valid.
+// A valid timestamp represents a time in the range
+// [0001-01-01, 10000-01-01) and has a Nanos field
+// in the range [0, 1e9).
+//
+// If the Timestamp is valid, validateTimestamp returns nil.
+// Otherwise, it returns an error that describes
+// the problem.
+//
+// Every valid Timestamp can be represented by a time.Time, but the converse is not true.
+func validateTimestamp(ts *tspb.Timestamp) error {
+ if ts == nil {
+ return errors.New("timestamp: nil Timestamp")
+ }
+ if ts.Seconds < minValidSeconds {
+ return fmt.Errorf("timestamp: %v before 0001-01-01", ts)
+ }
+ if ts.Seconds >= maxValidSeconds {
+ return fmt.Errorf("timestamp: %v after 10000-01-01", ts)
+ }
+ if ts.Nanos < 0 || ts.Nanos >= 1e9 {
+ return fmt.Errorf("timestamp: %v: nanos not in range [0, 1e9)", ts)
+ }
+ return nil
+}
+
+// Timestamp converts a google.protobuf.Timestamp proto to a time.Time.
+// It returns an error if the argument is invalid.
+//
+// Unlike most Go functions, if Timestamp returns an error, the first return value
+// is not the zero time.Time. Instead, it is the value obtained from the
+// time.Unix function when passed the contents of the Timestamp, in the UTC
+// locale. This may or may not be a meaningful time; many invalid Timestamps
+// do map to valid time.Times.
+//
+// A nil Timestamp returns an error. The first return value in that case is
+// undefined.
+func Timestamp(ts *tspb.Timestamp) (time.Time, error) {
+	// Don't return the zero value on error, because it corresponds to a valid
+ // timestamp. Instead return whatever time.Unix gives us.
+ var t time.Time
+ if ts == nil {
+ t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
+ } else {
+ t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
+ }
+ return t, validateTimestamp(ts)
+}
+
+// TimestampNow returns a google.protobuf.Timestamp for the current time.
+func TimestampNow() *tspb.Timestamp {
+ ts, err := TimestampProto(time.Now())
+ if err != nil {
+ panic("ptypes: time.Now() out of Timestamp range")
+ }
+ return ts
+}
+
+// TimestampProto converts the time.Time to a google.protobuf.Timestamp proto.
+// It returns an error if the resulting Timestamp is invalid.
+func TimestampProto(t time.Time) (*tspb.Timestamp, error) {
+ seconds := t.Unix()
+ nanos := int32(t.Sub(time.Unix(seconds, 0)))
+ ts := &tspb.Timestamp{
+ Seconds: seconds,
+ Nanos: nanos,
+ }
+ if err := validateTimestamp(ts); err != nil {
+ return nil, err
+ }
+ return ts, nil
+}
+
+// TimestampString returns the RFC 3339 string for valid Timestamps. For invalid
+// Timestamps, it returns an error message in parentheses.
+func TimestampString(ts *tspb.Timestamp) string {
+ t, err := Timestamp(ts)
+ if err != nil {
+ return fmt.Sprintf("(%v)", err)
+ }
+ return t.Format(time.RFC3339Nano)
+}
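
And a corresponding sketch for the Timestamp conversions (again assuming the
canonical github.com/golang/protobuf/ptypes import path rather than this
vendored copy):

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/golang/protobuf/ptypes"
    )

    func main() {
        // time.Time -> tspb.Timestamp; errors for times outside
        // [0001-01-01, 10000-01-01).
        ts, err := ptypes.TimestampProto(time.Date(2017, 1, 15, 1, 30, 15, 0, time.UTC))
        if err != nil {
            log.Fatal(err)
        }

        // tspb.Timestamp -> time.Time; the result is always in UTC.
        t, err := ptypes.Timestamp(ts)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(t.Format(time.RFC3339))     // 2017-01-15T01:30:15Z
        fmt.Println(ptypes.TimestampString(ts)) // same, via the helper
    }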
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.pb.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.pb.go
new file mode 100644
index 00000000..3b76261e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.pb.go
@@ -0,0 +1,162 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// source: github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
+
+/*
+Package timestamp is a generated protocol buffer package.
+
+It is generated from these files:
+ github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
+
+It has these top-level messages:
+ Timestamp
+*/
+package timestamp
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+// A Timestamp represents a point in time independent of any time zone
+// or calendar, expressed as seconds and fractions of seconds at
+// nanosecond resolution in UTC Epoch time. It is encoded using the
+// Proleptic Gregorian Calendar which extends the Gregorian calendar
+// backwards to year one. It is encoded assuming all minutes are 60
+// seconds long, i.e. leap seconds are "smeared" so that no leap second
+// table is needed for interpretation. Range is from
+// 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
+// By restricting to that range, we ensure that we can convert to
+// and from RFC 3339 date strings.
+// See [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
+//
+// # Examples
+//
+// Example 1: Compute Timestamp from POSIX `time()`.
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(time(NULL));
+// timestamp.set_nanos(0);
+//
+// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
+//
+// struct timeval tv;
+// gettimeofday(&tv, NULL);
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(tv.tv_sec);
+// timestamp.set_nanos(tv.tv_usec * 1000);
+//
+// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
+//
+// FILETIME ft;
+// GetSystemTimeAsFileTime(&ft);
+// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
+//
+// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
+// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
+// Timestamp timestamp;
+// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
+// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
+//
+// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
+//
+// long millis = System.currentTimeMillis();
+//
+// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
+// .setNanos((int) ((millis % 1000) * 1000000)).build();
+//
+//
+// Example 5: Compute Timestamp from current time in Python.
+//
+// timestamp = Timestamp()
+// timestamp.GetCurrentTime()
+//
+// # JSON Mapping
+//
+// In JSON format, the Timestamp type is encoded as a string in the
+// [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format. That is, the
+// format is "{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z"
+// where {year} is always expressed using four digits while {month}, {day},
+// {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional
+// seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution),
+// are optional. The "Z" suffix indicates the timezone ("UTC"); the timezone
+// is required, though only UTC (as indicated by "Z") is presently supported.
+//
+// For example, "2017-01-15T01:30:15.01Z" encodes 15.01 seconds past
+// 01:30 UTC on January 15, 2017.
+//
+// In JavaScript, one can convert a Date object to this format using the
+// standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
+// method. In Python, a standard `datetime.datetime` object can be converted
+// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
+// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
+// can use Joda Time's [`ISODateTimeFormat.dateTime()`](
+// http://joda-time.sourceforge.net/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime())
+// to obtain a formatter capable of generating timestamps in this format.
+//
+//
+type Timestamp struct {
+ // Represents seconds of UTC time since Unix epoch
+ // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
+}
+
+func (m *Timestamp) Reset() { *m = Timestamp{} }
+func (m *Timestamp) String() string { return proto.CompactTextString(m) }
+func (*Timestamp) ProtoMessage() {}
+func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+func (*Timestamp) XXX_WellKnownType() string { return "Timestamp" }
+
+func (m *Timestamp) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Timestamp) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
+func init() {
+ proto.RegisterType((*Timestamp)(nil), "google.protobuf.Timestamp")
+}
+
+func init() {
+ proto.RegisterFile("github.com/golang/protobuf/ptypes/timestamp/timestamp.proto", fileDescriptor0)
+}
+
+var fileDescriptor0 = []byte{
+ // 190 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0x2c, 0xc9,
+ 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
+ 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2f, 0xc9,
+ 0xcc, 0x4d, 0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0x40, 0xb0, 0xf4, 0xc0, 0x6a, 0x84, 0xf8, 0xd3, 0xf3,
+ 0xf3, 0xd3, 0x73, 0x52, 0xf5, 0x60, 0x3a, 0x94, 0xac, 0xb9, 0x38, 0x43, 0x60, 0x6a, 0x84, 0x24,
+ 0xb8, 0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83,
+ 0x60, 0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d,
+ 0xd6, 0x20, 0x08, 0xc7, 0xa9, 0x8e, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x4c, 0x27, 0x3e,
+ 0xb8, 0x89, 0x01, 0x20, 0xa1, 0x00, 0xc6, 0x28, 0x6d, 0x12, 0xdc, 0xfc, 0x83, 0x91, 0x71, 0x11,
+ 0x13, 0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xc9, 0x01, 0x50, 0xb5, 0x7a, 0xe1,
+ 0xa9, 0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x3d, 0x49, 0x6c, 0x60, 0x43, 0x8c,
+ 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x6b, 0x59, 0x0a, 0x4d, 0x13, 0x01, 0x00, 0x00,
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
new file mode 100644
index 00000000..b7cbd175
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
@@ -0,0 +1,133 @@
+// Protocol Buffers - Google's data interchange format
+// Copyright 2008 Google Inc. All rights reserved.
+// https://developers.google.com/protocol-buffers/
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+syntax = "proto3";
+
+package google.protobuf;
+
+option csharp_namespace = "Google.Protobuf.WellKnownTypes";
+option cc_enable_arenas = true;
+option go_package = "github.com/golang/protobuf/ptypes/timestamp";
+option java_package = "com.google.protobuf";
+option java_outer_classname = "TimestampProto";
+option java_multiple_files = true;
+option objc_class_prefix = "GPB";
+
+// A Timestamp represents a point in time independent of any time zone
+// or calendar, expressed as seconds and fractions of seconds at
+// nanosecond resolution in UTC Epoch time. It is encoded using the
+// Proleptic Gregorian Calendar which extends the Gregorian calendar
+// backwards to year one. It is encoded assuming all minutes are 60
+// seconds long, i.e. leap seconds are "smeared" so that no leap second
+// table is needed for interpretation. Range is from
+// 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
+// By restricting to that range, we ensure that we can convert to
+// and from RFC 3339 date strings.
+// See [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
+//
+// # Examples
+//
+// Example 1: Compute Timestamp from POSIX `time()`.
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(time(NULL));
+// timestamp.set_nanos(0);
+//
+// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
+//
+// struct timeval tv;
+// gettimeofday(&tv, NULL);
+//
+// Timestamp timestamp;
+// timestamp.set_seconds(tv.tv_sec);
+// timestamp.set_nanos(tv.tv_usec * 1000);
+//
+// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
+//
+// FILETIME ft;
+// GetSystemTimeAsFileTime(&ft);
+// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
+//
+// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
+// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
+// Timestamp timestamp;
+// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
+// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
+//
+// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
+//
+// long millis = System.currentTimeMillis();
+//
+// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
+// .setNanos((int) ((millis % 1000) * 1000000)).build();
+//
+//
+// Example 5: Compute Timestamp from current time in Python.
+//
+// timestamp = Timestamp()
+// timestamp.GetCurrentTime()
+//
+// # JSON Mapping
+//
+// In JSON format, the Timestamp type is encoded as a string in the
+// [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format. That is, the
+// format is "{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z"
+// where {year} is always expressed using four digits while {month}, {day},
+// {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional
+// seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution),
+// are optional. The "Z" suffix indicates the timezone ("UTC"); the timezone
+// is required, though only UTC (as indicated by "Z") is presently supported.
+//
+// For example, "2017-01-15T01:30:15.01Z" encodes 15.01 seconds past
+// 01:30 UTC on January 15, 2017.
+//
+// In JavaScript, one can convert a Date object to this format using the
+// standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
+// method. In Python, a standard `datetime.datetime` object can be converted
+// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
+// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
+// can use Joda Time's [`ISODateTimeFormat.dateTime()`](
+// http://joda-time.sourceforge.net/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime())
+// to obtain a formatter capable of generating timestamps in this format.
+//
+//
+message Timestamp {
+
+ // Represents seconds of UTC time since Unix epoch
+ // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ int64 seconds = 1;
+
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ int32 nanos = 2;
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/acl.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/acl.go
deleted file mode 100644
index 3ade9d40..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/acl.go
+++ /dev/null
@@ -1,672 +0,0 @@
-package acl
-
-import (
- "github.com/armon/go-radix"
-)
-
-var (
- // allowAll is a singleton policy which allows all
- // non-management actions
- allowAll ACL
-
- // denyAll is a singleton policy which denies all actions
- denyAll ACL
-
- // manageAll is a singleton policy which allows all
- // actions, including management
- manageAll ACL
-)
-
-func init() {
- // Setup the singletons
- allowAll = &StaticACL{
- allowManage: false,
- defaultAllow: true,
- }
- denyAll = &StaticACL{
- allowManage: false,
- defaultAllow: false,
- }
- manageAll = &StaticACL{
- allowManage: true,
- defaultAllow: true,
- }
-}
-
-// ACL is the interface for policy enforcement.
-type ACL interface {
- // ACLList checks for permission to list all the ACLs
- ACLList() bool
-
- // ACLModify checks for permission to manipulate ACLs
- ACLModify() bool
-
- // AgentRead checks for permission to read from agent endpoints for a
- // given node.
- AgentRead(string) bool
-
- // AgentWrite checks for permission to make changes via agent endpoints
- // for a given node.
- AgentWrite(string) bool
-
- // EventRead determines if a specific event can be queried.
- EventRead(string) bool
-
- // EventWrite determines if a specific event may be fired.
- EventWrite(string) bool
-
- // KeyRead checks for permission to read a given key
- KeyRead(string) bool
-
- // KeyWrite checks for permission to write a given key
- KeyWrite(string) bool
-
- // KeyWritePrefix checks for permission to write to an
- // entire key prefix. This means there must be no sub-policies
- // that deny a write.
- KeyWritePrefix(string) bool
-
- // KeyringRead determines if the encryption keyring used in
- // the gossip layer can be read.
- KeyringRead() bool
-
- // KeyringWrite determines if the keyring can be manipulated
- KeyringWrite() bool
-
- // NodeRead checks for permission to read (discover) a given node.
- NodeRead(string) bool
-
- // NodeWrite checks for permission to create or update (register) a
- // given node.
- NodeWrite(string) bool
-
- // OperatorRead determines if the read-only Consul operator functions
- // can be used.
- OperatorRead() bool
-
- // OperatorWrite determines if the state-changing Consul operator
- // functions can be used.
- OperatorWrite() bool
-
-	// PreparedQueryRead determines if a specific prepared query can be read
- // to show its contents (this is not used for execution).
- PreparedQueryRead(string) bool
-
- // PreparedQueryWrite determines if a specific prepared query can be
- // created, modified, or deleted.
- PreparedQueryWrite(string) bool
-
- // ServiceRead checks for permission to read a given service
- ServiceRead(string) bool
-
- // ServiceWrite checks for permission to create or update a given
- // service
- ServiceWrite(string) bool
-
- // SessionRead checks for permission to read sessions for a given node.
- SessionRead(string) bool
-
- // SessionWrite checks for permission to create sessions for a given
- // node.
- SessionWrite(string) bool
-
- // Snapshot checks for permission to take and restore snapshots.
- Snapshot() bool
-}
-
-// StaticACL is used to implement a base ACL policy. It either
-// allows or denies all requests. This can be used as a parent
-// ACL to act in a blacklist or whitelist mode.
-type StaticACL struct {
- allowManage bool
- defaultAllow bool
-}
-
-func (s *StaticACL) ACLList() bool {
- return s.allowManage
-}
-
-func (s *StaticACL) ACLModify() bool {
- return s.allowManage
-}
-
-func (s *StaticACL) AgentRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) AgentWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) EventRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) EventWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) KeyRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) KeyWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) KeyWritePrefix(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) KeyringRead() bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) KeyringWrite() bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) NodeRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) NodeWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) OperatorRead() bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) OperatorWrite() bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) PreparedQueryRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) PreparedQueryWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) ServiceRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) ServiceWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) SessionRead(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) SessionWrite(string) bool {
- return s.defaultAllow
-}
-
-func (s *StaticACL) Snapshot() bool {
- return s.allowManage
-}
-
-// AllowAll returns an ACL rule that allows all operations
-func AllowAll() ACL {
- return allowAll
-}
-
-// DenyAll returns an ACL rule that denies all operations
-func DenyAll() ACL {
- return denyAll
-}
-
-// ManageAll returns an ACL rule that can manage all resources
-func ManageAll() ACL {
- return manageAll
-}
-
-// RootACL returns a possible ACL if the ID matches a root policy
-func RootACL(id string) ACL {
- switch id {
- case "allow":
- return allowAll
- case "deny":
- return denyAll
- case "manage":
- return manageAll
- default:
- return nil
- }
-}
-
-// PolicyACL is used to wrap a set of ACL policies to provide
-// the ACL interface.
-type PolicyACL struct {
- // parent is used to resolve policy if we have
- // no matching rule.
- parent ACL
-
- // agentRules contains the agent policies
- agentRules *radix.Tree
-
- // keyRules contains the key policies
- keyRules *radix.Tree
-
- // nodeRules contains the node policies
- nodeRules *radix.Tree
-
- // serviceRules contains the service policies
- serviceRules *radix.Tree
-
- // sessionRules contains the session policies
- sessionRules *radix.Tree
-
- // eventRules contains the user event policies
- eventRules *radix.Tree
-
- // preparedQueryRules contains the prepared query policies
- preparedQueryRules *radix.Tree
-
- // keyringRule contains the keyring policies. The keyring has
- // a very simple yes/no without prefix matching, so here we
- // don't need to use a radix tree.
- keyringRule string
-
- // operatorRule contains the operator policies.
- operatorRule string
-}
-
-// New is used to construct a policy based ACL from a set of policies
-// and a parent policy to resolve missing cases.
-func New(parent ACL, policy *Policy) (*PolicyACL, error) {
- p := &PolicyACL{
- parent: parent,
- agentRules: radix.New(),
- keyRules: radix.New(),
- nodeRules: radix.New(),
- serviceRules: radix.New(),
- sessionRules: radix.New(),
- eventRules: radix.New(),
- preparedQueryRules: radix.New(),
- }
-
- // Load the agent policy
- for _, ap := range policy.Agents {
- p.agentRules.Insert(ap.Node, ap.Policy)
- }
-
- // Load the key policy
- for _, kp := range policy.Keys {
- p.keyRules.Insert(kp.Prefix, kp.Policy)
- }
-
- // Load the node policy
- for _, np := range policy.Nodes {
- p.nodeRules.Insert(np.Name, np.Policy)
- }
-
- // Load the service policy
- for _, sp := range policy.Services {
- p.serviceRules.Insert(sp.Name, sp.Policy)
- }
-
- // Load the session policy
- for _, sp := range policy.Sessions {
- p.sessionRules.Insert(sp.Node, sp.Policy)
- }
-
- // Load the event policy
- for _, ep := range policy.Events {
- p.eventRules.Insert(ep.Event, ep.Policy)
- }
-
- // Load the prepared query policy
- for _, pq := range policy.PreparedQueries {
- p.preparedQueryRules.Insert(pq.Prefix, pq.Policy)
- }
-
- // Load the keyring policy
- p.keyringRule = policy.Keyring
-
- // Load the operator policy
- p.operatorRule = policy.Operator
-
- return p, nil
-}
-
-// ACLList checks if listing of ACLs is allowed
-func (p *PolicyACL) ACLList() bool {
- return p.parent.ACLList()
-}
-
-// ACLModify checks if modification of ACLs is allowed
-func (p *PolicyACL) ACLModify() bool {
- return p.parent.ACLModify()
-}
-
-// AgentRead checks for permission to read from agent endpoints for a given
-// node.
-func (p *PolicyACL) AgentRead(node string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.agentRules.LongestPrefix(node)
-
- if ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.AgentRead(node)
-}
-
-// AgentWrite checks for permission to make changes via agent endpoints for a
-// given node.
-func (p *PolicyACL) AgentWrite(node string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.agentRules.LongestPrefix(node)
-
- if ok {
- switch rule {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.AgentWrite(node)
-}
-
-// Snapshot checks if taking and restoring snapshots is allowed.
-func (p *PolicyACL) Snapshot() bool {
- return p.parent.Snapshot()
-}
-
-// EventRead is used to determine if the policy allows for a
-// specific user event to be read.
-func (p *PolicyACL) EventRead(name string) bool {
- // Longest-prefix match on event names
- if _, rule, ok := p.eventRules.LongestPrefix(name); ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // Nothing matched, use parent
- return p.parent.EventRead(name)
-}
-
-// EventWrite is used to determine if new events can be created
-// (fired) by the policy.
-func (p *PolicyACL) EventWrite(name string) bool {
- // Longest-prefix match event names
- if _, rule, ok := p.eventRules.LongestPrefix(name); ok {
- return rule == PolicyWrite
- }
-
- // No match, use parent
- return p.parent.EventWrite(name)
-}
-
-// KeyRead returns if a key is allowed to be read
-func (p *PolicyACL) KeyRead(key string) bool {
- // Look for a matching rule
- _, rule, ok := p.keyRules.LongestPrefix(key)
- if ok {
- switch rule.(string) {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.KeyRead(key)
-}
-
-// KeyWrite returns if a key is allowed to be written
-func (p *PolicyACL) KeyWrite(key string) bool {
- // Look for a matching rule
- _, rule, ok := p.keyRules.LongestPrefix(key)
- if ok {
- switch rule.(string) {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.KeyWrite(key)
-}
-
-// KeyWritePrefix returns if a prefix is allowed to be written
-func (p *PolicyACL) KeyWritePrefix(prefix string) bool {
- // Look for a matching rule that denies
- _, rule, ok := p.keyRules.LongestPrefix(prefix)
- if ok && rule.(string) != PolicyWrite {
- return false
- }
-
- // Look if any of our children have a deny policy
- deny := false
- p.keyRules.WalkPrefix(prefix, func(path string, rule interface{}) bool {
- // We have a rule to prevent a write in a sub-directory!
- if rule.(string) != PolicyWrite {
- deny = true
- return true
- }
- return false
- })
-
- // Deny the write if any sub-rules may be violated
- if deny {
- return false
- }
-
- // If we had a matching rule, done
- if ok {
- return true
- }
-
- // No matching rule, use the parent.
- return p.parent.KeyWritePrefix(prefix)
-}
-
-// KeyringRead is used to determine if the keyring can be
-// read by the current ACL token.
-func (p *PolicyACL) KeyringRead() bool {
- switch p.keyringRule {
- case PolicyRead, PolicyWrite:
- return true
- case PolicyDeny:
- return false
- default:
- return p.parent.KeyringRead()
- }
-}
-
-// KeyringWrite determines if the keyring can be manipulated.
-func (p *PolicyACL) KeyringWrite() bool {
- if p.keyringRule == PolicyWrite {
- return true
- }
- return p.parent.KeyringWrite()
-}
-
-// OperatorRead determines if the read-only operator functions are allowed.
-func (p *PolicyACL) OperatorRead() bool {
- switch p.operatorRule {
- case PolicyRead, PolicyWrite:
- return true
- case PolicyDeny:
- return false
- default:
- return p.parent.OperatorRead()
- }
-}
-
-// NodeRead checks if reading (discovery) of a node is allowed
-func (p *PolicyACL) NodeRead(name string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.nodeRules.LongestPrefix(name)
-
- if ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.NodeRead(name)
-}
-
-// NodeWrite checks if writing (registering) a node is allowed
-func (p *PolicyACL) NodeWrite(name string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.nodeRules.LongestPrefix(name)
-
- if ok {
- switch rule {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.NodeWrite(name)
-}
-
-// OperatorWrite determines if the state-changing operator functions are
-// allowed.
-func (p *PolicyACL) OperatorWrite() bool {
- if p.operatorRule == PolicyWrite {
- return true
- }
- return p.parent.OperatorWrite()
-}
-
-// PreparedQueryRead checks if reading (listing) of a prepared query is
-// allowed - this isn't execution, just listing its contents.
-func (p *PolicyACL) PreparedQueryRead(prefix string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.preparedQueryRules.LongestPrefix(prefix)
-
- if ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.PreparedQueryRead(prefix)
-}
-
-// PreparedQueryWrite checks if writing (creating, updating, or deleting) of a
-// prepared query is allowed.
-func (p *PolicyACL) PreparedQueryWrite(prefix string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.preparedQueryRules.LongestPrefix(prefix)
-
- if ok {
- switch rule {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.PreparedQueryWrite(prefix)
-}
-
-// ServiceRead checks if reading (discovery) of a service is allowed
-func (p *PolicyACL) ServiceRead(name string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.serviceRules.LongestPrefix(name)
-
- if ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.ServiceRead(name)
-}
-
-// ServiceWrite checks if writing (registering) a service is allowed
-func (p *PolicyACL) ServiceWrite(name string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.serviceRules.LongestPrefix(name)
-
- if ok {
- switch rule {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.ServiceWrite(name)
-}
-
-// SessionRead checks for permission to read sessions for a given node.
-func (p *PolicyACL) SessionRead(node string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.sessionRules.LongestPrefix(node)
-
- if ok {
- switch rule {
- case PolicyRead, PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.SessionRead(node)
-}
-
-// SessionWrite checks for permission to create sessions for a given node.
-func (p *PolicyACL) SessionWrite(node string) bool {
- // Check for an exact rule or catch-all
- _, rule, ok := p.sessionRules.LongestPrefix(node)
-
- if ok {
- switch rule {
- case PolicyWrite:
- return true
- default:
- return false
- }
- }
-
- // No matching rule, use the parent.
- return p.parent.SessionWrite(node)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/cache.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/cache.go
deleted file mode 100644
index 0387f9fb..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/cache.go
+++ /dev/null
@@ -1,177 +0,0 @@
-package acl
-
-import (
- "crypto/md5"
- "fmt"
-
- "github.com/hashicorp/golang-lru"
-)
-
-// FaultFunc is a function used to fault in the parent and
-// rules for an ACL given its ID
-type FaultFunc func(id string) (string, string, error)
-
-// aclEntry allows us to store the ACL with its policy ID
-type aclEntry struct {
- ACL ACL
- Parent string
- RuleID string
-}
-
-// Cache is used to implement policy and ACL caching
-type Cache struct {
- faultfn FaultFunc
- aclCache *lru.TwoQueueCache // Cache id -> acl
- policyCache *lru.TwoQueueCache // Cache policy -> acl
- ruleCache *lru.TwoQueueCache // Cache rules -> policy
-}
-
-// NewCache constructs a new policy and ACL cache of a given size
-func NewCache(size int, faultfn FaultFunc) (*Cache, error) {
- if size <= 0 {
- return nil, fmt.Errorf("Must provide positive cache size")
- }
-
- rc, err := lru.New2Q(size)
- if err != nil {
- return nil, err
- }
-
- pc, err := lru.New2Q(size)
- if err != nil {
- return nil, err
- }
-
- ac, err := lru.New2Q(size)
- if err != nil {
- return nil, err
- }
-
- c := &Cache{
- faultfn: faultfn,
- aclCache: ac,
- policyCache: pc,
- ruleCache: rc,
- }
- return c, nil
-}
-
-// GetPolicy is used to get a potentially cached policy set.
-// If not cached, it will be parsed, and then cached.
-func (c *Cache) GetPolicy(rules string) (*Policy, error) {
- return c.getPolicy(RuleID(rules), rules)
-}
-
-// getPolicy is an internal method to get a cached policy,
-// but it assumes a pre-computed ID
-func (c *Cache) getPolicy(id, rules string) (*Policy, error) {
- raw, ok := c.ruleCache.Get(id)
- if ok {
- return raw.(*Policy), nil
- }
- policy, err := Parse(rules)
- if err != nil {
- return nil, err
- }
- policy.ID = id
- c.ruleCache.Add(id, policy)
- return policy, nil
-
-}
-
-// RuleID is used to generate an ID for a rule
-func RuleID(rules string) string {
- return fmt.Sprintf("%x", md5.Sum([]byte(rules)))
-}
-
-// policyID returns the cache ID for a policy
-func (c *Cache) policyID(parent, ruleID string) string {
- return parent + ":" + ruleID
-}
-
-// GetACLPolicy is used to get the potentially cached ACL
-// policy. If not cached, it will be generated and then cached.
-func (c *Cache) GetACLPolicy(id string) (string, *Policy, error) {
- // Check for a cached acl
- if raw, ok := c.aclCache.Get(id); ok {
- cached := raw.(aclEntry)
- if raw, ok := c.ruleCache.Get(cached.RuleID); ok {
- return cached.Parent, raw.(*Policy), nil
- }
- }
-
- // Fault in the rules
- parent, rules, err := c.faultfn(id)
- if err != nil {
- return "", nil, err
- }
-
- // Get cached
- policy, err := c.GetPolicy(rules)
- return parent, policy, err
-}
-
-// GetACL is used to get a potentially cached ACL policy.
-// If not cached, it will be generated and then cached.
-func (c *Cache) GetACL(id string) (ACL, error) {
- // Look for the ACL directly
- raw, ok := c.aclCache.Get(id)
- if ok {
- return raw.(aclEntry).ACL, nil
- }
-
- // Get the rules
- parentID, rules, err := c.faultfn(id)
- if err != nil {
- return nil, err
- }
- ruleID := RuleID(rules)
-
- // Check for a compiled ACL
- policyID := c.policyID(parentID, ruleID)
- var compiled ACL
- if raw, ok := c.policyCache.Get(policyID); ok {
- compiled = raw.(ACL)
- } else {
- // Get the policy
- policy, err := c.getPolicy(ruleID, rules)
- if err != nil {
- return nil, err
- }
-
- // Get the parent ACL
- parent := RootACL(parentID)
- if parent == nil {
- parent, err = c.GetACL(parentID)
- if err != nil {
- return nil, err
- }
- }
-
- // Compile the ACL
- acl, err := New(parent, policy)
- if err != nil {
- return nil, err
- }
-
- // Cache the compiled ACL
- c.policyCache.Add(policyID, acl)
- compiled = acl
- }
-
- // Cache and return the ACL
- c.aclCache.Add(id, aclEntry{compiled, parentID, ruleID})
- return compiled, nil
-}
-
-// ClearACL is used to clear the ACL cache if any
-func (c *Cache) ClearACL(id string) {
- c.aclCache.Remove(id)
-}
-
-// Purge is used to clear all the ACL caches. The
-// rule and policy caches are not purged, since they
-// are content-hashed anyways.
-func (c *Cache) Purge() {
- c.aclCache.Purge()
-}
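
The deleted cache memoized at three levels (token ID, compiled policy, parsed rules), faulting in the parent and rule text through the FaultFunc. A sketch of how it was wired up, again using only the removed identifiers:

    // A real FaultFunc would look the token up in the server's state store;
    // this stub returns the same parent and rules for every ID.
    faultfn := func(id string) (string, string, error) {
        return "deny", `key "app/" { policy = "write" }`, nil
    }
    cache, err := acl.NewCache(1024, faultfn)
    if err != nil {
        panic(err)
    }
    rules, err := cache.GetACL("some-token-id") // faults in, compiles, and caches
    if err != nil {
        panic(err)
    }
    fmt.Println(rules.KeyWrite("app/config")) // true; repeat lookups hit the cache
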
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/policy.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/policy.go
deleted file mode 100644
index f7781b81..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/acl/policy.go
+++ /dev/null
@@ -1,191 +0,0 @@
-package acl
-
-import (
- "fmt"
-
- "github.com/hashicorp/hcl"
-)
-
-const (
- PolicyDeny = "deny"
- PolicyRead = "read"
- PolicyWrite = "write"
-)
-
-// Policy is used to represent the policy specified by
-// an ACL configuration.
-type Policy struct {
- ID string `hcl:"-"`
- Agents []*AgentPolicy `hcl:"agent,expand"`
- Keys []*KeyPolicy `hcl:"key,expand"`
- Nodes []*NodePolicy `hcl:"node,expand"`
- Services []*ServicePolicy `hcl:"service,expand"`
- Sessions []*SessionPolicy `hcl:"session,expand"`
- Events []*EventPolicy `hcl:"event,expand"`
- PreparedQueries []*PreparedQueryPolicy `hcl:"query,expand"`
- Keyring string `hcl:"keyring"`
- Operator string `hcl:"operator"`
-}
-
-// AgentPolicy represents a policy for working with agent endpoints on nodes
-// with specific name prefixes.
-type AgentPolicy struct {
- Node string `hcl:",key"`
- Policy string
-}
-
-func (a *AgentPolicy) GoString() string {
- return fmt.Sprintf("%#v", *a)
-}
-
-// KeyPolicy represents a policy for a key
-type KeyPolicy struct {
- Prefix string `hcl:",key"`
- Policy string
-}
-
-func (k *KeyPolicy) GoString() string {
- return fmt.Sprintf("%#v", *k)
-}
-
-// NodePolicy represents a policy for a node
-type NodePolicy struct {
- Name string `hcl:",key"`
- Policy string
-}
-
-func (n *NodePolicy) GoString() string {
- return fmt.Sprintf("%#v", *n)
-}
-
-// ServicePolicy represents a policy for a service
-type ServicePolicy struct {
- Name string `hcl:",key"`
- Policy string
-}
-
-func (s *ServicePolicy) GoString() string {
- return fmt.Sprintf("%#v", *s)
-}
-
-// SessionPolicy represents a policy for making sessions tied to specific node
-// name prefixes.
-type SessionPolicy struct {
- Node string `hcl:",key"`
- Policy string
-}
-
-func (s *SessionPolicy) GoString() string {
- return fmt.Sprintf("%#v", *s)
-}
-
-// EventPolicy represents a user event policy.
-type EventPolicy struct {
- Event string `hcl:",key"`
- Policy string
-}
-
-func (e *EventPolicy) GoString() string {
- return fmt.Sprintf("%#v", *e)
-}
-
-// PreparedQueryPolicy represents a prepared query policy.
-type PreparedQueryPolicy struct {
- Prefix string `hcl:",key"`
- Policy string
-}
-
-func (p *PreparedQueryPolicy) GoString() string {
- return fmt.Sprintf("%#v", *p)
-}
-
-// isPolicyValid makes sure the given string matches one of the valid policies.
-func isPolicyValid(policy string) bool {
- switch policy {
- case PolicyDeny:
- return true
- case PolicyRead:
- return true
- case PolicyWrite:
- return true
- default:
- return false
- }
-}
-
-// Parse is used to parse the specified ACL rules into an
-// intermediary set of policies, before being compiled into
-// the ACL
-func Parse(rules string) (*Policy, error) {
- // Decode the rules
- p := &Policy{}
- if rules == "" {
- // Hot path for empty rules
- return p, nil
- }
-
- if err := hcl.Decode(p, rules); err != nil {
- return nil, fmt.Errorf("Failed to parse ACL rules: %v", err)
- }
-
- // Validate the agent policy
- for _, ap := range p.Agents {
- if !isPolicyValid(ap.Policy) {
- return nil, fmt.Errorf("Invalid agent policy: %#v", ap)
- }
- }
-
- // Validate the key policy
- for _, kp := range p.Keys {
- if !isPolicyValid(kp.Policy) {
- return nil, fmt.Errorf("Invalid key policy: %#v", kp)
- }
- }
-
- // Validate the node policies
- for _, np := range p.Nodes {
- if !isPolicyValid(np.Policy) {
- return nil, fmt.Errorf("Invalid node policy: %#v", np)
- }
- }
-
- // Validate the service policies
- for _, sp := range p.Services {
- if !isPolicyValid(sp.Policy) {
- return nil, fmt.Errorf("Invalid service policy: %#v", sp)
- }
- }
-
- // Validate the session policies
- for _, sp := range p.Sessions {
- if !isPolicyValid(sp.Policy) {
- return nil, fmt.Errorf("Invalid session policy: %#v", sp)
- }
- }
-
- // Validate the user event policies
- for _, ep := range p.Events {
- if !isPolicyValid(ep.Policy) {
- return nil, fmt.Errorf("Invalid event policy: %#v", ep)
- }
- }
-
- // Validate the prepared query policies
- for _, pq := range p.PreparedQueries {
- if !isPolicyValid(pq.Policy) {
- return nil, fmt.Errorf("Invalid query policy: %#v", pq)
- }
- }
-
- // Validate the keyring policy - this one is allowed to be empty
- if p.Keyring != "" && !isPolicyValid(p.Keyring) {
- return nil, fmt.Errorf("Invalid keyring policy: %#v", p.Keyring)
- }
-
- // Validate the operator policy - this one is allowed to be empty
- if p.Operator != "" && !isPolicyValid(p.Operator) {
- return nil, fmt.Errorf("Invalid operator policy: %#v", p.Operator)
- }
-
- return p, nil
-}
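
Parse accepted only deny/read/write for each rule (keyring and operator rules could also be empty), so a misspelled policy failed at parse time rather than at enforcement time:

    if _, err := acl.Parse(`node "web-" { policy = "wrote" }`); err != nil {
        fmt.Println(err) // Invalid node policy: ... ("wrote" is not deny/read/write)
    }
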
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/acl.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/acl.go
index c3fb0d53..15d1f9f5 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/acl.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/acl.go
@@ -1,5 +1,9 @@
package api
+import (
+ "time"
+)
+
const (
// ACLClientType is the client type token
ACLClientType = "client"
@@ -18,6 +22,16 @@ type ACLEntry struct {
Rules string
}
+// ACLReplicationStatus is used to represent the status of ACL replication.
+type ACLReplicationStatus struct {
+ Enabled bool
+ Running bool
+ SourceDatacenter string
+ ReplicatedIndex uint64
+ LastSuccess time.Time
+ LastError time.Time
+}
+
// ACL can be used to query the ACL endpoints
type ACL struct {
c *Client
@@ -138,3 +152,24 @@ func (a *ACL) List(q *QueryOptions) ([]*ACLEntry, *QueryMeta, error) {
}
return entries, qm, nil
}
+
+// Replication returns the status of the ACL replication process in the datacenter
+func (a *ACL) Replication(q *QueryOptions) (*ACLReplicationStatus, *QueryMeta, error) {
+ r := a.c.newRequest("GET", "/v1/acl/replication")
+ r.setQueryOptions(q)
+ rtt, resp, err := requireOK(a.c.doRequest(r))
+ if err != nil {
+ return nil, nil, err
+ }
+ defer resp.Body.Close()
+
+ qm := &QueryMeta{}
+ parseQueryMeta(resp, qm)
+ qm.RequestTime = rtt
+
+ var entries *ACLReplicationStatus
+ if err := decodeBody(resp, &entries); err != nil {
+ return nil, nil, err
+ }
+ return entries, qm, nil
+}
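
A sketch of exercising the new Replication endpoint (assumes a local agent on the default address):

    package main

    import (
        "fmt"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            panic(err)
        }
        status, meta, err := client.ACL().Replication(nil)
        if err != nil {
            panic(err)
        }
        fmt.Printf("enabled=%v running=%v source=%s index=%d (rtt %s)\n",
            status.Enabled, status.Running, status.SourceDatacenter,
            status.ReplicatedIndex, meta.RequestTime)
    }
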
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/agent.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/agent.go
index 1893d1cf..605592db 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/agent.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/agent.go
@@ -25,6 +25,8 @@ type AgentService struct {
Port int
Address string
EnableTagOverride bool
+ CreateIndex uint64
+ ModifyIndex uint64
}
// AgentMember represents a cluster member known to the agent
@@ -65,17 +67,19 @@ type AgentCheckRegistration struct {
// AgentServiceCheck is used to define a node or service level check
type AgentServiceCheck struct {
- Script string `json:",omitempty"`
- DockerContainerID string `json:",omitempty"`
- Shell string `json:",omitempty"` // Only supported for Docker.
- Interval string `json:",omitempty"`
- Timeout string `json:",omitempty"`
- TTL string `json:",omitempty"`
- HTTP string `json:",omitempty"`
- TCP string `json:",omitempty"`
- Status string `json:",omitempty"`
- Notes string `json:",omitempty"`
- TLSSkipVerify bool `json:",omitempty"`
+ Script string `json:",omitempty"`
+ DockerContainerID string `json:",omitempty"`
+ Shell string `json:",omitempty"` // Only supported for Docker.
+ Interval string `json:",omitempty"`
+ Timeout string `json:",omitempty"`
+ TTL string `json:",omitempty"`
+ HTTP string `json:",omitempty"`
+ Header map[string][]string `json:",omitempty"`
+ Method string `json:",omitempty"`
+ TCP string `json:",omitempty"`
+ Status string `json:",omitempty"`
+ Notes string `json:",omitempty"`
+ TLSSkipVerify bool `json:",omitempty"`
// In Consul 0.7 and later, checks that are associated with a service
// may also contain this optional DeregisterCriticalServiceAfter field,
@@ -438,7 +442,7 @@ func (a *Agent) DisableNodeMaintenance() error {
// Monitor returns a channel which will receive streaming logs from the agent
// Providing a non-nil stopCh can be used to close the connection and stop the
// log stream
-func (a *Agent) Monitor(loglevel string, stopCh chan struct{}, q *QueryOptions) (chan string, error) {
+func (a *Agent) Monitor(loglevel string, stopCh <-chan struct{}, q *QueryOptions) (chan string, error) {
r := a.c.newRequest("GET", "/v1/agent/monitor")
r.setQueryOptions(q)
if loglevel != "" {
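
The new Header and Method fields let an HTTP check send something other than a bare GET. A sketch of a registration using them (client is a *api.Client as in the sketch above):

    check := &api.AgentServiceCheck{
        HTTP:     "http://127.0.0.1:8080/health",
        Method:   "HEAD",                                     // new in this change
        Header:   map[string][]string{"X-Probe": {"consul"}}, // new in this change
        Interval: "10s",
        Timeout:  "1s",
    }
    reg := &api.AgentServiceRegistration{Name: "web", Port: 8080, Check: check}
    if err := client.Agent().ServiceRegister(reg); err != nil {
        panic(err)
    }
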
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/api.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/api.go
index 9a59b724..0a62b4f6 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/api.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/api.go
@@ -2,8 +2,8 @@ package api
import (
"bytes"
+ "context"
"crypto/tls"
- "crypto/x509"
"encoding/json"
"fmt"
"io"
@@ -18,6 +18,7 @@ import (
"time"
"github.com/hashicorp/go-cleanhttp"
+ "github.com/hashicorp/go-rootcerts"
)
const (
@@ -37,6 +38,26 @@ const (
// whether or not to use HTTPS.
HTTPSSLEnvName = "CONSUL_HTTP_SSL"
+ // HTTPCAFile defines an environment variable name which sets the
+ // CA file to use for talking to Consul over TLS.
+ HTTPCAFile = "CONSUL_CACERT"
+
+ // HTTPCAPath defines an environment variable name which sets the
+ // path to a directory of CA certs to use for talking to Consul over TLS.
+ HTTPCAPath = "CONSUL_CAPATH"
+
+ // HTTPClientCert defines an environment variable name which sets the
+ // client cert file to use for talking to Consul over TLS.
+ HTTPClientCert = "CONSUL_CLIENT_CERT"
+
+ // HTTPClientKey defines an environment variable name which sets the
+ // client key file to use for talking to Consul over TLS.
+ HTTPClientKey = "CONSUL_CLIENT_KEY"
+
+ // HTTPTLSServerName defines an environment variable name which sets the
+ // server name to use as the SNI host when connecting via TLS
+ HTTPTLSServerName = "CONSUL_TLS_SERVER_NAME"
+
// HTTPSSLVerifyEnvName defines an environment variable name which sets
// whether or not to disable certificate checking.
HTTPSSLVerifyEnvName = "CONSUL_HTTP_SSL_VERIFY"
@@ -79,6 +100,31 @@ type QueryOptions struct {
// metadata key/value pairs. Currently, only one key/value pair can
// be provided for filtering.
NodeMeta map[string]string
+
+ // RelayFactor is used in keyring operations to cause responses to be
+ // relayed back to the sender through N other random nodes. Must be
+ // a value from 0 to 5 (inclusive).
+ RelayFactor uint8
+
+ // ctx is an optional context pass through to the underlying HTTP
+ // request layer. Use Context() and WithContext() to manage this.
+ ctx context.Context
+}
+
+func (o *QueryOptions) Context() context.Context {
+ if o != nil && o.ctx != nil {
+ return o.ctx
+ }
+ return context.Background()
+}
+
+func (o *QueryOptions) WithContext(ctx context.Context) *QueryOptions {
+ o2 := new(QueryOptions)
+ if o != nil {
+ *o2 = *o
+ }
+ o2.ctx = ctx
+ return o2
}
// WriteOptions are used to parameterize a write
@@ -90,6 +136,31 @@ type WriteOptions struct {
// Token is used to provide a per-request ACL token
// which overrides the agent's default token.
Token string
+
+ // RelayFactor is used in keyring operations to cause responses to be
+ // relayed back to the sender through N other random nodes. Must be
+ // a value from 0 to 5 (inclusive).
+ RelayFactor uint8
+
+ // ctx is an optional context pass through to the underlying HTTP
+ // request layer. Use Context() and WithContext() to manage this.
+ ctx context.Context
+}
+
+func (o *WriteOptions) Context() context.Context {
+ if o != nil && o.ctx != nil {
+ return o.ctx
+ }
+ return context.Background()
+}
+
+func (o *WriteOptions) WithContext(ctx context.Context) *WriteOptions {
+ o2 := new(WriteOptions)
+ if o != nil {
+ *o2 = *o
+ }
+ o2.ctx = ctx
+ return o2
}
// QueryMeta is used to return meta data about a query
@@ -138,6 +209,9 @@ type Config struct {
// Datacenter to use. If not provided, the default agent datacenter is used.
Datacenter string
+ // Transport is the Transport to use for the http client.
+ Transport *http.Transport
+
// HttpClient is the client to use. Default will be
// used if not provided.
HttpClient *http.Client
@@ -152,6 +226,8 @@ type Config struct {
// Token is used to provide a per-request ACL token
// which overrides the agent's default token.
Token string
+
+ TLSConfig TLSConfig
}
// TLSConfig is used to generate a TLSClientConfig that's useful for talking to
@@ -166,6 +242,10 @@ type TLSConfig struct {
// communication, defaults to the system bundle if not specified.
CAFile string
+ // CAPath is the optional path to a directory of CA certificates to use for
+ // Consul communication, defaults to the system bundle if not specified.
+ CAPath string
+
// CertFile is the optional path to the certificate for Consul
// communication. If this is set then you need to also set KeyFile.
CertFile string
@@ -201,11 +281,9 @@ func DefaultNonPooledConfig() *Config {
// given function to make the transport.
func defaultConfig(transportFn func() *http.Transport) *Config {
config := &Config{
- Address: "127.0.0.1:8500",
- Scheme: "http",
- HttpClient: &http.Client{
- Transport: transportFn(),
- },
+ Address: "127.0.0.1:8500",
+ Scheme: "http",
+ Transport: transportFn(),
}
if addr := os.Getenv(HTTPAddrEnvName); addr != "" {
@@ -243,27 +321,28 @@ func defaultConfig(transportFn func() *http.Transport) *Config {
}
}
- if verify := os.Getenv(HTTPSSLVerifyEnvName); verify != "" {
- doVerify, err := strconv.ParseBool(verify)
+ if v := os.Getenv(HTTPTLSServerName); v != "" {
+ config.TLSConfig.Address = v
+ }
+ if v := os.Getenv(HTTPCAFile); v != "" {
+ config.TLSConfig.CAFile = v
+ }
+ if v := os.Getenv(HTTPCAPath); v != "" {
+ config.TLSConfig.CAPath = v
+ }
+ if v := os.Getenv(HTTPClientCert); v != "" {
+ config.TLSConfig.CertFile = v
+ }
+ if v := os.Getenv(HTTPClientKey); v != "" {
+ config.TLSConfig.KeyFile = v
+ }
+ if v := os.Getenv(HTTPSSLVerifyEnvName); v != "" {
+ doVerify, err := strconv.ParseBool(v)
if err != nil {
log.Printf("[WARN] client: could not parse %s: %s", HTTPSSLVerifyEnvName, err)
}
-
if !doVerify {
- tlsClientConfig, err := SetupTLSConfig(&TLSConfig{
- InsecureSkipVerify: true,
- })
-
- // We don't expect this to fail given that we aren't
- // parsing any of the input, but we panic just in case
- // since this doesn't have an error return.
- if err != nil {
- panic(err)
- }
-
- transport := transportFn()
- transport.TLSClientConfig = tlsClientConfig
- config.HttpClient.Transport = transport
+ config.TLSConfig.InsecureSkipVerify = true
}
}
@@ -298,17 +377,12 @@ func SetupTLSConfig(tlsConfig *TLSConfig) (*tls.Config, error) {
tlsClientConfig.Certificates = []tls.Certificate{tlsCert}
}
- if tlsConfig.CAFile != "" {
- data, err := ioutil.ReadFile(tlsConfig.CAFile)
- if err != nil {
- return nil, fmt.Errorf("failed to read CA file: %v", err)
- }
-
- caPool := x509.NewCertPool()
- if !caPool.AppendCertsFromPEM(data) {
- return nil, fmt.Errorf("failed to parse CA certificate")
- }
- tlsClientConfig.RootCAs = caPool
+ rootConfig := &rootcerts.Config{
+ CAFile: tlsConfig.CAFile,
+ CAPath: tlsConfig.CAPath,
+ }
+ if err := rootcerts.ConfigureTLS(tlsClientConfig, rootConfig); err != nil {
+ return nil, err
}
return tlsClientConfig, nil
@@ -332,17 +406,58 @@ func NewClient(config *Config) (*Client, error) {
config.Scheme = defConfig.Scheme
}
- if config.HttpClient == nil {
- config.HttpClient = defConfig.HttpClient
+ if config.Transport == nil {
+ config.Transport = defConfig.Transport
+ }
+
+ if config.TLSConfig.Address == "" {
+ config.TLSConfig.Address = defConfig.TLSConfig.Address
+ }
+
+ if config.TLSConfig.CAFile == "" {
+ config.TLSConfig.CAFile = defConfig.TLSConfig.CAFile
+ }
+
+ if config.TLSConfig.CAPath == "" {
+ config.TLSConfig.CAPath = defConfig.TLSConfig.CAPath
+ }
+
+ if config.TLSConfig.CertFile == "" {
+ config.TLSConfig.CertFile = defConfig.TLSConfig.CertFile
+ }
+
+ if config.TLSConfig.KeyFile == "" {
+ config.TLSConfig.KeyFile = defConfig.TLSConfig.KeyFile
+ }
+
+ if !config.TLSConfig.InsecureSkipVerify {
+ config.TLSConfig.InsecureSkipVerify = defConfig.TLSConfig.InsecureSkipVerify
}
- if parts := strings.SplitN(config.Address, "unix://", 2); len(parts) == 2 {
- trans := cleanhttp.DefaultTransport()
- trans.Dial = func(_, _ string) (net.Conn, error) {
- return net.Dial("unix", parts[1])
+ if config.HttpClient == nil {
+ var err error
+ config.HttpClient, err = NewHttpClient(config.Transport, config.TLSConfig)
+ if err != nil {
+ return nil, err
}
- config.HttpClient = &http.Client{
- Transport: trans,
+ }
+
+ parts := strings.SplitN(config.Address, "://", 2)
+ if len(parts) == 2 {
+ switch parts[0] {
+ case "http":
+ case "https":
+ config.Scheme = "https"
+ case "unix":
+ trans := cleanhttp.DefaultTransport()
+ trans.DialContext = func(_ context.Context, _, _ string) (net.Conn, error) {
+ return net.Dial("unix", parts[1])
+ }
+ config.HttpClient = &http.Client{
+ Transport: trans,
+ }
+ default:
+ return nil, fmt.Errorf("Unknown protocol scheme: %s", parts[0])
}
config.Address = parts[1]
}
@@ -353,6 +468,26 @@ func NewClient(config *Config) (*Client, error) {
return client, nil
}
+// NewHttpClient returns an http client configured with the given Transport and TLS
+// config.
+func NewHttpClient(transport *http.Transport, tlsConf TLSConfig) (*http.Client, error) {
+ client := &http.Client{
+ Transport: transport,
+ }
+
+ if transport.TLSClientConfig == nil {
+ tlsClientConfig, err := SetupTLSConfig(&tlsConf)
+
+ if err != nil {
+ return nil, err
+ }
+
+ transport.TLSClientConfig = tlsClientConfig
+ }
+
+ return client, nil
+}
+
// request is used to help build up a request
type request struct {
config *Config
@@ -362,6 +497,7 @@ type request struct {
body io.Reader
header http.Header
obj interface{}
+ ctx context.Context
}
// setQueryOptions is used to annotate the request with
@@ -396,6 +532,10 @@ func (r *request) setQueryOptions(q *QueryOptions) {
r.params.Add("node-meta", key+":"+value)
}
}
+ if q.RelayFactor != 0 {
+ r.params.Set("relay-factor", strconv.Itoa(int(q.RelayFactor)))
+ }
+ r.ctx = q.ctx
}
// durToMsec converts a duration to a millisecond specified string. If the
@@ -437,6 +577,10 @@ func (r *request) setWriteOptions(q *WriteOptions) {
if q.Token != "" {
r.header.Set("X-Consul-Token", q.Token)
}
+ if q.RelayFactor != 0 {
+ r.params.Set("relay-factor", strconv.Itoa(int(q.RelayFactor)))
+ }
+ r.ctx = q.ctx
}
// toHTTP converts the request to an HTTP request
@@ -446,11 +590,11 @@ func (r *request) toHTTP() (*http.Request, error) {
// Check if we should encode the body
if r.body == nil && r.obj != nil {
- if b, err := encodeBody(r.obj); err != nil {
+ b, err := encodeBody(r.obj)
+ if err != nil {
return nil, err
- } else {
- r.body = b
}
+ r.body = b
}
// Create the HTTP request
@@ -468,8 +612,11 @@ func (r *request) toHTTP() (*http.Request, error) {
if r.config.HttpAuth != nil {
req.SetBasicAuth(r.config.HttpAuth.Username, r.config.HttpAuth.Password)
}
-
- return req, nil
+ if r.ctx != nil {
+ return req.WithContext(r.ctx), nil
+ } else {
+ return req, nil
+ }
}
// newRequest is used to create a new request
@@ -548,6 +695,8 @@ func (c *Client) write(endpoint string, in, out interface{}, q *WriteOptions) (*
if err := decodeBody(resp, &out); err != nil {
return nil, err
}
+ } else if _, err := ioutil.ReadAll(resp.Body); err != nil {
+ return nil, err
}
return wm, nil
}
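
The Context/WithContext plumbing added here means a blocking query can be cancelled by the caller instead of only timing out server-side. A sketch of one watch iteration (imports context, time, and the api package are assumed; the caller carries lastIndex between calls):

    func watch(client *api.Client, lastIndex uint64) (uint64, error) {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // WaitIndex makes this a blocking query; the context bounds the block.
        q := (&api.QueryOptions{WaitIndex: lastIndex}).WithContext(ctx)
        pairs, meta, err := client.KV().List("app/", q)
        if err != nil {
            return lastIndex, err // a wrapped context.DeadlineExceeded on timeout
        }
        _ = pairs // react to the changed keys here
        return meta.LastIndex, nil
    }
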
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/catalog.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/catalog.go
index 96226f11..babfc9a1 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/catalog.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/catalog.go
@@ -4,14 +4,18 @@ type Node struct {
ID string
Node string
Address string
+ Datacenter string
TaggedAddresses map[string]string
Meta map[string]string
+ CreateIndex uint64
+ ModifyIndex uint64
}
type CatalogService struct {
ID string
Node string
Address string
+ Datacenter string
TaggedAddresses map[string]string
NodeMeta map[string]string
ServiceID string
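
The added fields surface directly in catalog listings, e.g.:

    nodes, _, err := client.Catalog().Nodes(nil)
    if err != nil {
        panic(err)
    }
    for _, n := range nodes {
        fmt.Printf("%s %s dc=%s modify=%d\n", n.Node, n.Address, n.Datacenter, n.ModifyIndex)
    }
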
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/coordinate.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/coordinate.go
index fdff2075..ae8d16ee 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/coordinate.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/coordinate.go
@@ -10,10 +10,11 @@ type CoordinateEntry struct {
Coord *coordinate.Coordinate
}
-// CoordinateDatacenterMap represents a datacenter and its associated WAN
-// nodes and their associates coordinates.
+// CoordinateDatacenterMap has the coordinates for servers in a given datacenter
+// and area. Network coordinates are only compatible within the same area.
type CoordinateDatacenterMap struct {
Datacenter string
+ AreaID string
Coordinates []CoordinateEntry
}
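
With the new AreaID field, WAN coordinates should only be compared within the same area:

    dcs, err := client.Coordinate().Datacenters()
    if err != nil {
        panic(err)
    }
    for _, dc := range dcs {
        fmt.Printf("%s area=%s servers=%d\n", dc.Datacenter, dc.AreaID, len(dc.Coordinates))
    }
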
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/health.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/health.go
index 8abe2393..38c105fd 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/health.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/health.go
@@ -33,6 +33,7 @@ type HealthCheck struct {
Output string
ServiceID string
ServiceName string
+ ServiceTags []string
}
// HealthChecks is a collection of HealthCheck structs.
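
The new ServiceTags field means each check result carries the tags of the service it guards:

    entries, _, err := client.Health().Service("web", "", true, nil)
    if err != nil {
        panic(err)
    }
    for _, e := range entries {
        for _, chk := range e.Checks {
            fmt.Printf("%s status=%s tags=%v\n", chk.Name, chk.Status, chk.ServiceTags)
        }
    }
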
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/kv.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/kv.go
index 44e06bbb..f91bb50f 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/kv.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/kv.go
@@ -49,17 +49,18 @@ type KVPairs []*KVPair
type KVOp string
const (
- KVSet KVOp = "set"
- KVDelete KVOp = "delete"
- KVDeleteCAS KVOp = "delete-cas"
- KVDeleteTree KVOp = "delete-tree"
- KVCAS KVOp = "cas"
- KVLock KVOp = "lock"
- KVUnlock KVOp = "unlock"
- KVGet KVOp = "get"
- KVGetTree KVOp = "get-tree"
- KVCheckSession KVOp = "check-session"
- KVCheckIndex KVOp = "check-index"
+ KVSet KVOp = "set"
+ KVDelete KVOp = "delete"
+ KVDeleteCAS KVOp = "delete-cas"
+ KVDeleteTree KVOp = "delete-tree"
+ KVCAS KVOp = "cas"
+ KVLock KVOp = "lock"
+ KVUnlock KVOp = "unlock"
+ KVGet KVOp = "get"
+ KVGetTree KVOp = "get-tree"
+ KVCheckSession KVOp = "check-session"
+ KVCheckIndex KVOp = "check-index"
+ KVCheckNotExists KVOp = "check-not-exists"
)
// KVTxnOp defines a single operation inside a transaction.
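
The new check-not-exists verb lets a transaction assert a key's absence before writing it, giving an atomic create-if-missing (KVTxnOps and Txn are defined further down in this file):

    ops := api.KVTxnOps{
        &api.KVTxnOp{Verb: api.KVCheckNotExists, Key: "locks/job-42"},
        &api.KVTxnOp{Verb: api.KVSet, Key: "locks/job-42", Value: []byte("owner-a")},
    }
    ok, _, _, err := client.KV().Txn(ops, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println("created:", ok) // false if the key already existed
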
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/lock.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/lock.go
index 9f9845a4..466ef5fd 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/lock.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/lock.go
@@ -143,22 +143,23 @@ func (l *Lock) Lock(stopCh <-chan struct{}) (<-chan struct{}, error) {
// Check if we need to create a session first
l.lockSession = l.opts.Session
if l.lockSession == "" {
- if s, err := l.createSession(); err != nil {
+ s, err := l.createSession()
+ if err != nil {
return nil, fmt.Errorf("failed to create session: %v", err)
- } else {
- l.sessionRenew = make(chan struct{})
- l.lockSession = s
- session := l.c.Session()
- go session.RenewPeriodic(l.opts.SessionTTL, s, nil, l.sessionRenew)
-
- // If we fail to acquire the lock, cleanup the session
- defer func() {
- if !l.isHeld {
- close(l.sessionRenew)
- l.sessionRenew = nil
- }
- }()
}
+
+ l.sessionRenew = make(chan struct{})
+ l.lockSession = s
+ session := l.c.Session()
+ go session.RenewPeriodic(l.opts.SessionTTL, s, nil, l.sessionRenew)
+
+ // If we fail to acquire the lock, cleanup the session
+ defer func() {
+ if !l.isHeld {
+ close(l.sessionRenew)
+ l.sessionRenew = nil
+ }
+ }()
}
// Setup the query options
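
This refactor only flattens the if/else; callers see identical behavior. For context, the usual calling pattern around Lock:

    lock, err := client.LockKey("service/web/leader")
    if err != nil {
        panic(err)
    }
    stopCh := make(chan struct{})
    lostCh, err := lock.Lock(stopCh) // blocks until held, or stopCh closes
    if err != nil {
        panic(err)
    }
    defer lock.Unlock()
    // ... do leader-only work here; lostCh closes if the lock is lost ...
    <-lostCh
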
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator.go
index a8d04a38..079e2248 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator.go
@@ -9,155 +9,3 @@ type Operator struct {
func (c *Client) Operator() *Operator {
return &Operator{c}
}
-
-// RaftServer has information about a server in the Raft configuration.
-type RaftServer struct {
- // ID is the unique ID for the server. These are currently the same
- // as the address, but they will be changed to a real GUID in a future
- // release of Consul.
- ID string
-
- // Node is the node name of the server, as known by Consul, or this
- // will be set to "(unknown)" otherwise.
- Node string
-
- // Address is the IP:port of the server, used for Raft communications.
- Address string
-
- // Leader is true if this server is the current cluster leader.
- Leader bool
-
- // Voter is true if this server has a vote in the cluster. This might
- // be false if the server is staging and still coming online, or if
- // it's a non-voting server, which will be added in a future release of
- // Consul.
- Voter bool
-}
-
-// RaftConfiguration is returned when querying for the current Raft configuration.
-type RaftConfiguration struct {
- // Servers has the list of servers in the Raft configuration.
- Servers []*RaftServer
-
- // Index has the Raft index of this configuration.
- Index uint64
-}
-
-// keyringRequest is used for performing Keyring operations
-type keyringRequest struct {
- Key string
-}
-
-// KeyringResponse is returned when listing the gossip encryption keys
-type KeyringResponse struct {
- // Whether this response is for a WAN ring
- WAN bool
-
- // The datacenter name this request corresponds to
- Datacenter string
-
- // A map of the encryption keys to the number of nodes they're installed on
- Keys map[string]int
-
- // The total number of nodes in this ring
- NumNodes int
-}
-
-// RaftGetConfiguration is used to query the current Raft peer set.
-func (op *Operator) RaftGetConfiguration(q *QueryOptions) (*RaftConfiguration, error) {
- r := op.c.newRequest("GET", "/v1/operator/raft/configuration")
- r.setQueryOptions(q)
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return nil, err
- }
- defer resp.Body.Close()
-
- var out RaftConfiguration
- if err := decodeBody(resp, &out); err != nil {
- return nil, err
- }
- return &out, nil
-}
-
-// RaftRemovePeerByAddress is used to kick a stale peer (one that is in the Raft
-// quorum but no longer known to Serf or the catalog) by address in the form of
-// "IP:port".
-func (op *Operator) RaftRemovePeerByAddress(address string, q *WriteOptions) error {
- r := op.c.newRequest("DELETE", "/v1/operator/raft/peer")
- r.setWriteOptions(q)
-
- // TODO (slackpad) Currently we made address a query parameter. Once
- // IDs are in place this will be DELETE /v1/operator/raft/peer/<id>.
- r.params.Set("address", string(address))
-
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return err
- }
-
- resp.Body.Close()
- return nil
-}
-
-// KeyringInstall is used to install a new gossip encryption key into the cluster
-func (op *Operator) KeyringInstall(key string, q *WriteOptions) error {
- r := op.c.newRequest("POST", "/v1/operator/keyring")
- r.setWriteOptions(q)
- r.obj = keyringRequest{
- Key: key,
- }
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return err
- }
- resp.Body.Close()
- return nil
-}
-
-// KeyringList is used to list the gossip keys installed in the cluster
-func (op *Operator) KeyringList(q *QueryOptions) ([]*KeyringResponse, error) {
- r := op.c.newRequest("GET", "/v1/operator/keyring")
- r.setQueryOptions(q)
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return nil, err
- }
- defer resp.Body.Close()
-
- var out []*KeyringResponse
- if err := decodeBody(resp, &out); err != nil {
- return nil, err
- }
- return out, nil
-}
-
-// KeyringRemove is used to remove a gossip encryption key from the cluster
-func (op *Operator) KeyringRemove(key string, q *WriteOptions) error {
- r := op.c.newRequest("DELETE", "/v1/operator/keyring")
- r.setWriteOptions(q)
- r.obj = keyringRequest{
- Key: key,
- }
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return err
- }
- resp.Body.Close()
- return nil
-}
-
-// KeyringUse is used to change the active gossip encryption key
-func (op *Operator) KeyringUse(key string, q *WriteOptions) error {
- r := op.c.newRequest("PUT", "/v1/operator/keyring")
- r.setWriteOptions(q)
- r.obj = keyringRequest{
- Key: key,
- }
- _, resp, err := requireOK(op.c.doRequest(r))
- if err != nil {
- return err
- }
- resp.Body.Close()
- return nil
-}
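
This removal is a file split rather than an API change: the keyring helpers reappear verbatim in the new operator_keyring.go below, and the Raft helpers presumably move to a sibling file in the same upstream change. Callers keep working as before:

    cfg, err := client.Operator().RaftGetConfiguration(nil)
    if err != nil {
        panic(err)
    }
    for _, s := range cfg.Servers {
        fmt.Printf("%s %s leader=%v voter=%v\n", s.Node, s.Address, s.Leader, s.Voter)
    }
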
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_area.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_area.go
new file mode 100644
index 00000000..7b0e461e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_area.go
@@ -0,0 +1,168 @@
+// The /v1/operator/area endpoints are available only in Consul Enterprise and
+// interact with its network area subsystem. Network areas are used to link
+// together Consul servers in different Consul datacenters. With network areas,
+// Consul datacenters can be linked together in ways other than a fully-connected
+// mesh, as is required for Consul's WAN.
+package api
+
+import (
+ "net"
+ "time"
+)
+
+// Area defines a network area.
+type Area struct {
+ // ID is the identifier for an area (a UUID). This must be left empty
+ // when creating a new area.
+ ID string
+
+ // PeerDatacenter is the peer Consul datacenter that will make up the
+ // other side of this network area. Network areas always involve a pair
+ // of datacenters: the datacenter where the area was created, and the
+ // peer datacenter. This is required.
+ PeerDatacenter string
+
+ // RetryJoin specifies the addresses of Consul servers to join, such as
+ // IPs or hostnames with an optional port number. This is optional.
+ RetryJoin []string
+}
+
+// AreaJoinResponse is returned when a join occurs and gives the result for each
+// address.
+type AreaJoinResponse struct {
+ // The address that was joined.
+ Address string
+
+ // Whether or not the join was a success.
+ Joined bool
+
+ // If we couldn't join, this is the message with information.
+ Error string
+}
+
+// SerfMember is a generic structure for reporting information about members in
+// a Serf cluster. This is only used by the area endpoints right now, but this
+// could be expanded to other endpoints in the future.
+type SerfMember struct {
+ // ID is the node identifier (a UUID).
+ ID string
+
+ // Name is the node name.
+ Name string
+
+ // Addr has the IP address.
+ Addr net.IP
+
+ // Port is the RPC port.
+ Port uint16
+
+ // Datacenter is the DC name.
+ Datacenter string
+
+ // Role is "client", "server", or "unknown".
+ Role string
+
+ // Build has the version of the Consul agent.
+ Build string
+
+ // Protocol is the protocol of the Consul agent.
+ Protocol int
+
+ // Status is the Serf health status "none", "alive", "leaving", "left",
+ // or "failed".
+ Status string
+
+ // RTT is the estimated round trip time from the server handling the
+ // request to this member. This will be negative if no RTT estimate
+ // is available.
+ RTT time.Duration
+}
+
+// AreaCreate will create a new network area. The ID in the given structure must
+// be empty and a generated ID will be returned on success.
+func (op *Operator) AreaCreate(area *Area, q *WriteOptions) (string, *WriteMeta, error) {
+ r := op.c.newRequest("POST", "/v1/operator/area")
+ r.setWriteOptions(q)
+ r.obj = area
+ rtt, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return "", nil, err
+ }
+ defer resp.Body.Close()
+
+ wm := &WriteMeta{}
+ wm.RequestTime = rtt
+
+ var out struct{ ID string }
+ if err := decodeBody(resp, &out); err != nil {
+ return "", nil, err
+ }
+ return out.ID, wm, nil
+}
+
+// AreaGet returns a single network area.
+func (op *Operator) AreaGet(areaID string, q *QueryOptions) ([]*Area, *QueryMeta, error) {
+ var out []*Area
+ qm, err := op.c.query("/v1/operator/area/"+areaID, &out, q)
+ if err != nil {
+ return nil, nil, err
+ }
+ return out, qm, nil
+}
+
+// AreaList returns all the available network areas.
+func (op *Operator) AreaList(q *QueryOptions) ([]*Area, *QueryMeta, error) {
+ var out []*Area
+ qm, err := op.c.query("/v1/operator/area", &out, q)
+ if err != nil {
+ return nil, nil, err
+ }
+ return out, qm, nil
+}
+
+// AreaDelete deletes the given network area.
+func (op *Operator) AreaDelete(areaID string, q *WriteOptions) (*WriteMeta, error) {
+ r := op.c.newRequest("DELETE", "/v1/operator/area/"+areaID)
+ r.setWriteOptions(q)
+ rtt, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ wm := &WriteMeta{}
+ wm.RequestTime = rtt
+ return wm, nil
+}
+
+// AreaJoin attempts to join the given set of join addresses to the given
+// network area. See the Area structure for details about join addresses.
+func (op *Operator) AreaJoin(areaID string, addresses []string, q *WriteOptions) ([]*AreaJoinResponse, *WriteMeta, error) {
+ r := op.c.newRequest("PUT", "/v1/operator/area/"+areaID+"/join")
+ r.setWriteOptions(q)
+ r.obj = addresses
+ rtt, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, nil, err
+ }
+ defer resp.Body.Close()
+
+ wm := &WriteMeta{}
+ wm.RequestTime = rtt
+
+ var out []*AreaJoinResponse
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, nil, err
+ }
+ return out, wm, nil
+}
+
+// AreaMembers lists the Serf information about the members in the given area.
+func (op *Operator) AreaMembers(areaID string, q *QueryOptions) ([]*SerfMember, *QueryMeta, error) {
+ var out []*SerfMember
+ qm, err := op.c.query("/v1/operator/area/"+areaID+"/members", &out, q)
+ if err != nil {
+ return nil, nil, err
+ }
+ return out, qm, nil
+}
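
A sketch tying the new endpoints together (Enterprise-only; the datacenter name and address are made up):

    op := client.Operator()
    id, _, err := op.AreaCreate(&api.Area{PeerDatacenter: "dc2"}, nil)
    if err != nil {
        panic(err)
    }
    resps, _, err := op.AreaJoin(id, []string{"10.0.0.10"}, nil)
    if err != nil {
        panic(err)
    }
    for _, r := range resps {
        fmt.Printf("%s joined=%v %s\n", r.Address, r.Joined, r.Error)
    }
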
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_autopilot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_autopilot.go
new file mode 100644
index 00000000..0fa9d160
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_autopilot.go
@@ -0,0 +1,219 @@
+package api
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "strconv"
+ "strings"
+ "time"
+)
+
+// AutopilotConfiguration is used for querying/setting the Autopilot configuration.
+// Autopilot helps manage operator tasks related to Consul servers like removing
+// failed servers from the Raft quorum.
+type AutopilotConfiguration struct {
+ // CleanupDeadServers controls whether to remove dead servers from the Raft
+ // peer list when a new server joins
+ CleanupDeadServers bool
+
+ // LastContactThreshold is the limit on the amount of time a server can go
+ // without leader contact before being considered unhealthy.
+ LastContactThreshold *ReadableDuration
+
+ // MaxTrailingLogs is the amount of entries in the Raft Log that a server can
+ // be behind before being considered unhealthy.
+ MaxTrailingLogs uint64
+
+ // ServerStabilizationTime is the minimum amount of time a server must be
+ // in a stable, healthy state before it can be added to the cluster. Only
+ // applicable with Raft protocol version 3 or higher.
+ ServerStabilizationTime *ReadableDuration
+
+ // (Enterprise-only) RedundancyZoneTag is the node tag to use for separating
+ // servers into zones for redundancy. If left blank, this feature will be disabled.
+ RedundancyZoneTag string
+
+ // (Enterprise-only) DisableUpgradeMigration will disable Autopilot's upgrade migration
+ // strategy of waiting until enough newer-versioned servers have been added to the
+ // cluster before promoting them to voters.
+ DisableUpgradeMigration bool
+
+ // (Enterprise-only) UpgradeVersionTag is the node tag to use for version info when
+ // performing upgrade migrations. If left blank, the Consul version will be used.
+ UpgradeVersionTag string
+
+ // CreateIndex holds the index corresponding to the creation of this configuration.
+ // This is a read-only field.
+ CreateIndex uint64
+
+ // ModifyIndex will be set to the index of the last update when retrieving the
+ // Autopilot configuration. Resubmitting a configuration with
+ // AutopilotCASConfiguration will perform a check-and-set operation which ensures
+ // there hasn't been a subsequent update since the configuration was retrieved.
+ ModifyIndex uint64
+}
+
+// ServerHealth is the health (from the leader's point of view) of a server.
+type ServerHealth struct {
+ // ID is the raft ID of the server.
+ ID string
+
+ // Name is the node name of the server.
+ Name string
+
+ // Address is the address of the server.
+ Address string
+
+ // The status of the SerfHealth check for the server.
+ SerfStatus string
+
+ // Version is the Consul version of the server.
+ Version string
+
+ // Leader is whether this server is currently the leader.
+ Leader bool
+
+ // LastContact is the time since this node's last contact with the leader.
+ LastContact *ReadableDuration
+
+ // LastTerm is the highest leader term this server has a record of in its Raft log.
+ LastTerm uint64
+
+ // LastIndex is the last log index this server has a record of in its Raft log.
+ LastIndex uint64
+
+ // Healthy is whether or not the server is healthy according to the current
+ // Autopilot config.
+ Healthy bool
+
+ // Voter is whether this is a voting server.
+ Voter bool
+
+ // StableSince is the last time this server's Healthy value changed.
+ StableSince time.Time
+}
+
+// OperatorHealthReply is a representation of the overall health of the cluster
+type OperatorHealthReply struct {
+ // Healthy is true if all the servers in the cluster are healthy.
+ Healthy bool
+
+ // FailureTolerance is the number of healthy servers that could be lost without
+ // an outage occurring.
+ FailureTolerance int
+
+ // Servers holds the health of each server.
+ Servers []ServerHealth
+}
+
+// ReadableDuration is a duration type that is serialized to JSON in human readable format.
+type ReadableDuration time.Duration
+
+func NewReadableDuration(dur time.Duration) *ReadableDuration {
+ d := ReadableDuration(dur)
+ return &d
+}
+
+func (d *ReadableDuration) String() string {
+ return d.Duration().String()
+}
+
+func (d *ReadableDuration) Duration() time.Duration {
+ if d == nil {
+ return time.Duration(0)
+ }
+ return time.Duration(*d)
+}
+
+func (d *ReadableDuration) MarshalJSON() ([]byte, error) {
+ return []byte(fmt.Sprintf(`"%s"`, d.Duration().String())), nil
+}
+
+func (d *ReadableDuration) UnmarshalJSON(raw []byte) error {
+ if d == nil {
+ return fmt.Errorf("cannot unmarshal to nil pointer")
+ }
+
+ str := string(raw)
+ if len(str) < 2 || str[0] != '"' || str[len(str)-1] != '"' {
+ return fmt.Errorf("must be enclosed with quotes: %s", str)
+ }
+ dur, err := time.ParseDuration(str[1 : len(str)-1])
+ if err != nil {
+ return err
+ }
+ *d = ReadableDuration(dur)
+ return nil
+}
+
+// AutopilotGetConfiguration is used to query the current Autopilot configuration.
+func (op *Operator) AutopilotGetConfiguration(q *QueryOptions) (*AutopilotConfiguration, error) {
+ r := op.c.newRequest("GET", "/v1/operator/autopilot/configuration")
+ r.setQueryOptions(q)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ var out AutopilotConfiguration
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, err
+ }
+
+ return &out, nil
+}
+
+// AutopilotSetConfiguration is used to set the current Autopilot configuration.
+func (op *Operator) AutopilotSetConfiguration(conf *AutopilotConfiguration, q *WriteOptions) error {
+ r := op.c.newRequest("PUT", "/v1/operator/autopilot/configuration")
+ r.setWriteOptions(q)
+ r.obj = conf
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+}
+
+// AutopilotCASConfiguration is used to perform a Check-And-Set update on the
+// Autopilot configuration. The ModifyIndex value will be respected. Returns
+// true on success or false on failure.
+func (op *Operator) AutopilotCASConfiguration(conf *AutopilotConfiguration, q *WriteOptions) (bool, error) {
+ r := op.c.newRequest("PUT", "/v1/operator/autopilot/configuration")
+ r.setWriteOptions(q)
+ r.params.Set("cas", strconv.FormatUint(conf.ModifyIndex, 10))
+ r.obj = conf
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return false, err
+ }
+ defer resp.Body.Close()
+
+ var buf bytes.Buffer
+ if _, err := io.Copy(&buf, resp.Body); err != nil {
+ return false, fmt.Errorf("Failed to read response: %v", err)
+ }
+ res := strings.Contains(buf.String(), "true")
+
+ return res, nil
+}
+
+// AutopilotServerHealth is used to query the health of the servers in the cluster, as tracked by Autopilot.
+func (op *Operator) AutopilotServerHealth(q *QueryOptions) (*OperatorHealthReply, error) {
+ r := op.c.newRequest("GET", "/v1/operator/autopilot/health")
+ r.setQueryOptions(q)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ var out OperatorHealthReply
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, err
+ }
+ return &out, nil
+}
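Aside: AutopilotCASConfiguration is designed for a read-modify-write cycle with AutopilotGetConfiguration, using the ModifyIndex captured on read as the check-and-set index. A minimal sketch against this vendored client (the CleanupDeadServers toggle and the default agent address are illustrative assumptions, not part of this diff):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local agent using the client's defaults.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	op := client.Operator()

	// Read the current configuration; this captures ModifyIndex.
	conf, err := op.AutopilotGetConfiguration(nil)
	if err != nil {
		log.Fatal(err)
	}

	// Mutate a field and resubmit with check-and-set semantics. ok is
	// false if another writer updated the configuration in between.
	conf.CleanupDeadServers = !conf.CleanupDeadServers
	ok, err := op.AutopilotCASConfiguration(conf, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("applied:", ok)
}
```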
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_keyring.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_keyring.go
new file mode 100644
index 00000000..4f91c354
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_keyring.go
@@ -0,0 +1,83 @@
+package api
+
+// keyringRequest is used for performing Keyring operations
+type keyringRequest struct {
+ Key string
+}
+
+// KeyringResponse is returned when listing the gossip encryption keys
+type KeyringResponse struct {
+ // Whether this response is for a WAN ring
+ WAN bool
+
+ // The datacenter name this request corresponds to
+ Datacenter string
+
+ // A map of the encryption keys to the number of nodes they're installed on
+ Keys map[string]int
+
+ // The total number of nodes in this ring
+ NumNodes int
+}
+
+// KeyringInstall is used to install a new gossip encryption key into the cluster
+func (op *Operator) KeyringInstall(key string, q *WriteOptions) error {
+ r := op.c.newRequest("POST", "/v1/operator/keyring")
+ r.setWriteOptions(q)
+ r.obj = keyringRequest{
+ Key: key,
+ }
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+}
+
+// KeyringList is used to list the gossip keys installed in the cluster
+func (op *Operator) KeyringList(q *QueryOptions) ([]*KeyringResponse, error) {
+ r := op.c.newRequest("GET", "/v1/operator/keyring")
+ r.setQueryOptions(q)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ var out []*KeyringResponse
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// KeyringRemove is used to remove a gossip encryption key from the cluster
+func (op *Operator) KeyringRemove(key string, q *WriteOptions) error {
+ r := op.c.newRequest("DELETE", "/v1/operator/keyring")
+ r.setWriteOptions(q)
+ r.obj = keyringRequest{
+ Key: key,
+ }
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+}
+
+// KeyringUse is used to change the active gossip encryption key
+func (op *Operator) KeyringUse(key string, q *WriteOptions) error {
+ r := op.c.newRequest("PUT", "/v1/operator/keyring")
+ r.setWriteOptions(q)
+ r.obj = keyringRequest{
+ Key: key,
+ }
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+}
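Aside: together these helpers implement the usual three-step gossip key rotation: install the new key on every member, switch encryption over to it, then retire the old key. A sketch, assuming an already-constructed Operator and placeholder key material:

```go
package main

import "github.com/hashicorp/consul/api"

// rotateGossipKey sketches the standard rotation sequence. oldKey and
// newKey are placeholder base64-encoded 16-byte serf keys.
func rotateGossipKey(op *api.Operator, oldKey, newKey string) error {
	// 1. Distribute the new key to every member of the ring.
	if err := op.KeyringInstall(newKey, nil); err != nil {
		return err
	}
	// 2. Make the new key the one used to encrypt outgoing gossip.
	if err := op.KeyringUse(newKey, nil); err != nil {
		return err
	}
	// 3. Remove the old key once nothing encrypts with it anymore.
	return op.KeyringRemove(oldKey, nil)
}
```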
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_raft.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_raft.go
new file mode 100644
index 00000000..5f3c25b1
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/operator_raft.go
@@ -0,0 +1,86 @@
+package api
+
+// RaftServer has information about a server in the Raft configuration.
+type RaftServer struct {
+ // ID is the unique ID for the server. These are currently the same
+ // as the address, but they will be changed to a real GUID in a future
+ // release of Consul.
+ ID string
+
+ // Node is the node name of the server, as known by Consul, or this
+ // will be set to "(unknown)" otherwise.
+ Node string
+
+ // Address is the IP:port of the server, used for Raft communications.
+ Address string
+
+ // Leader is true if this server is the current cluster leader.
+ Leader bool
+
+ // Voter is true if this server has a vote in the cluster. This might
+ // be false if the server is staging and still coming online, or if
+ // it's a non-voting server, which will be added in a future release of
+ // Consul.
+ Voter bool
+}
+
+// RaftConfiguration is returned when querying for the current Raft configuration.
+type RaftConfiguration struct {
+ // Servers has the list of servers in the Raft configuration.
+ Servers []*RaftServer
+
+ // Index has the Raft index of this configuration.
+ Index uint64
+}
+
+// RaftGetConfiguration is used to query the current Raft peer set.
+func (op *Operator) RaftGetConfiguration(q *QueryOptions) (*RaftConfiguration, error) {
+ r := op.c.newRequest("GET", "/v1/operator/raft/configuration")
+ r.setQueryOptions(q)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ var out RaftConfiguration
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, err
+ }
+ return &out, nil
+}
+
+// RaftRemovePeerByAddress is used to kick a stale peer (one that is in the Raft
+// quorum but no longer known to Serf or the catalog) by address in the form of
+// "IP:port".
+func (op *Operator) RaftRemovePeerByAddress(address string, q *WriteOptions) error {
+ r := op.c.newRequest("DELETE", "/v1/operator/raft/peer")
+ r.setWriteOptions(q)
+
+ r.params.Set("address", string(address))
+
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+
+ resp.Body.Close()
+ return nil
+}
+
+// RaftRemovePeerByID is used to kick a stale peer (one that is in the Raft
+// quorum but no longer known to Serf or the catalog) by ID.
+func (op *Operator) RaftRemovePeerByID(id string, q *WriteOptions) error {
+ r := op.c.newRequest("DELETE", "/v1/operator/raft/peer")
+ r.setWriteOptions(q)
+
+ r.params.Set("id", string(id))
+
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+
+ resp.Body.Close()
+ return nil
+}
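Aside: a quick way to exercise these endpoints is to list the Raft peer set and, only for a peer that is genuinely stale, remove it. A sketch (the example address is illustrative; removing a live peer can break quorum):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	op := client.Operator()

	// List the current Raft configuration.
	raft, err := op.RaftGetConfiguration(nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range raft.Servers {
		fmt.Printf("%s %s leader=%v voter=%v\n", s.Node, s.Address, s.Leader, s.Voter)
	}

	// A stale peer would then be kicked by its "IP:port" address, e.g.:
	//   _ = op.RaftRemovePeerByAddress("10.0.0.9:8300", nil)
}
```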
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/semaphore.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/semaphore.go
index e6645ac1..9ddbdc49 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/semaphore.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/semaphore.go
@@ -155,22 +155,23 @@ func (s *Semaphore) Acquire(stopCh <-chan struct{}) (<-chan struct{}, error) {
// Check if we need to create a session first
s.lockSession = s.opts.Session
if s.lockSession == "" {
- if sess, err := s.createSession(); err != nil {
+ sess, err := s.createSession()
+ if err != nil {
return nil, fmt.Errorf("failed to create session: %v", err)
- } else {
- s.sessionRenew = make(chan struct{})
- s.lockSession = sess
- session := s.c.Session()
- go session.RenewPeriodic(s.opts.SessionTTL, sess, nil, s.sessionRenew)
-
- // If we fail to acquire the lock, cleanup the session
- defer func() {
- if !s.isHeld {
- close(s.sessionRenew)
- s.sessionRenew = nil
- }
- }()
}
+
+ s.sessionRenew = make(chan struct{})
+ s.lockSession = sess
+ session := s.c.Session()
+ go session.RenewPeriodic(s.opts.SessionTTL, sess, nil, s.sessionRenew)
+
+ // If we fail to acquire the lock, clean up the session
+ defer func() {
+ if !s.isHeld {
+ close(s.sessionRenew)
+ s.sessionRenew = nil
+ }
+ }()
}
// Create the contender entry
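Aside: the path refactored above runs whenever a caller acquires a semaphore without supplying an existing session. A minimal usage sketch of the public API that exercises it (the KV prefix and holder limit are illustrative, and SemaphorePrefix is the constructor from the same vendored api package):

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Allow up to 2 concurrent holders under an illustrative KV prefix.
	sem, err := client.SemaphorePrefix("service/worker/sem", 2)
	if err != nil {
		log.Fatal(err)
	}

	// Acquire creates and renews a session internally, as in the hunk above.
	lostCh, err := sem.Acquire(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer sem.Release()

	select {
	case <-lostCh:
		log.Println("semaphore slot lost")
	default:
		log.Println("semaphore slot held; doing work")
	}
}
```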
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/session.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/session.go
index 36e99a38..1613f11a 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/session.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/api/session.go
@@ -145,7 +145,9 @@ func (s *Session) Renew(id string, q *WriteOptions) (*SessionEntry, *WriteMeta,
// RenewPeriodic is used to periodically invoke Session.Renew on a
// session until a doneCh is closed. This is meant to be used in a long running
// goroutine to ensure a session stays valid.
-func (s *Session) RenewPeriodic(initialTTL string, id string, q *WriteOptions, doneCh chan struct{}) error {
+func (s *Session) RenewPeriodic(initialTTL string, id string, q *WriteOptions, doneCh <-chan struct{}) error {
+ ctx := q.Context()
+
ttl, err := time.ParseDuration(initialTTL)
if err != nil {
return err
@@ -179,6 +181,11 @@ func (s *Session) RenewPeriodic(initialTTL string, id string, q *WriteOptions, d
// Attempt a session destroy
s.Destroy(id, q)
return nil
+
+ case <-ctx.Done():
+ // Bail immediately since attempting the destroy would
+ // use the canceled context in q and fail anyway.
+ return ctx.Err()
}
}
}
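Aside: with the new ctx.Done() case, a caller can bound renewal through the context carried by WriteOptions instead of (or in addition to) closing doneCh. A sketch, assuming a WithContext helper on WriteOptions in the same vendored api package:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	session := client.Session()

	// Create a session with an illustrative 10s TTL.
	id, _, err := session.Create(&api.SessionEntry{TTL: "10s"}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Cancel renewal after one minute; RenewPeriodic then returns
	// ctx.Err() rather than destroying the session, per the hunk above.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	q := (&api.WriteOptions{}).WithContext(ctx)

	// A nil doneCh blocks forever, so only the context ends renewal here.
	if err := session.RenewPeriodic("10s", id, q, nil); err != nil {
		log.Println("renewal stopped:", err)
	}
}
```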
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/operator.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/operator.go
deleted file mode 100644
index d564400b..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/operator.go
+++ /dev/null
@@ -1,57 +0,0 @@
-package structs
-
-import (
- "github.com/hashicorp/raft"
-)
-
-// RaftServer has information about a server in the Raft configuration.
-type RaftServer struct {
- // ID is the unique ID for the server. These are currently the same
- // as the address, but they will be changed to a real GUID in a future
- // release of Consul.
- ID raft.ServerID
-
- // Node is the node name of the server, as known by Consul, or this
- // will be set to "(unknown)" otherwise.
- Node string
-
- // Address is the IP:port of the server, used for Raft communications.
- Address raft.ServerAddress
-
- // Leader is true if this server is the current cluster leader.
- Leader bool
-
- // Voter is true if this server has a vote in the cluster. This might
- // be false if the server is staging and still coming online, or if
- // it's a non-voting server, which will be added in a future release of
- // Consul.
- Voter bool
-}
-
-// RaftConfigrationResponse is returned when querying for the current Raft
-// configuration.
-type RaftConfigurationResponse struct {
- // Servers has the list of servers in the Raft configuration.
- Servers []*RaftServer
-
- // Index has the Raft index of this configuration.
- Index uint64
-}
-
-// RaftPeerByAddressRequest is used by the Operator endpoint to apply a Raft
-// operation on a specific Raft peer by address in the form of "IP:port".
-type RaftPeerByAddressRequest struct {
- // Datacenter is the target this request is intended for.
- Datacenter string
-
- // Address is the peer to remove, in the form "IP:port".
- Address raft.ServerAddress
-
- // WriteRequest holds the ACL token to go along with this request.
- WriteRequest
-}
-
-// RequestDatacenter returns the datacenter for a given request.
-func (op *RaftPeerByAddressRequest) RequestDatacenter() string {
- return op.Datacenter
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go
deleted file mode 100644
index af535f01..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/prepared_query.go
+++ /dev/null
@@ -1,257 +0,0 @@
-package structs
-
-// QueryDatacenterOptions sets options about how we fail over if there are no
-// healthy nodes in the local datacenter.
-type QueryDatacenterOptions struct {
- // NearestN is set to the number of remote datacenters to try, based on
- // network coordinates.
- NearestN int
-
- // Datacenters is a fixed list of datacenters to try after NearestN. We
- // never try a datacenter multiple times, so those are subtracted from
- // this list before proceeding.
- Datacenters []string
-}
-
-// QueryDNSOptions controls settings when query results are served over DNS.
-type QueryDNSOptions struct {
- // TTL is the time to live for the served DNS results.
- TTL string
-}
-
-// ServiceQuery is used to query for a set of healthy nodes offering a specific
-// service.
-type ServiceQuery struct {
- // Service is the service to query.
- Service string
-
- // Failover controls what we do if there are no healthy nodes in the
- // local datacenter.
- Failover QueryDatacenterOptions
-
- // If OnlyPassing is true then we will only include nodes with passing
- // health checks (critical AND warning checks will cause a node to be
- // discarded)
- OnlyPassing bool
-
- // Near allows the query to always prefer the node nearest the given
- // node. If the node does not exist, results are returned in their
- // normal randomly-shuffled order. Supplying the magic "_agent" value
- // is supported to sort near the agent which initiated the request.
- Near string
-
- // Tags are a set of required and/or disallowed tags. If a tag is in
- // this list it must be present. If the tag is preceded with "!" then
- // it is disallowed.
- Tags []string
-
- // NodeMeta is a map of required node metadata fields. If a key/value
- // pair is in this map it must be present on the node in order for the
- // service entry to be returned.
- NodeMeta map[string]string
-}
-
-const (
- // QueryTemplateTypeNamePrefixMatch uses the Name field of the query as
- // a prefix to select the template.
- QueryTemplateTypeNamePrefixMatch = "name_prefix_match"
-)
-
-// QueryTemplateOptions controls settings if this query is a template.
-type QueryTemplateOptions struct {
- // Type, if non-empty, means that this query is a template. This is
- // set to one of the QueryTemplateType* constants above.
- Type string
-
- // Regexp is an optional regular expression to use to parse the full
- // name, once the prefix match has selected a template. This can be
- // used to extract parts of the name and choose a service name, set
- // tags, etc.
- Regexp string
-}
-
-// PreparedQuery defines a complete prepared query, and is the structure we
-// maintain in the state store.
-type PreparedQuery struct {
- // ID is this UUID-based ID for the query, always generated by Consul.
- ID string
-
- // Name is an optional friendly name for the query supplied by the
- // user. NOTE - if this feature is used then it will reduce the security
- // of any read ACL associated with this query/service since this name
- // can be used to locate nodes with supplying any ACL.
- Name string
-
- // Session is an optional session to tie this query's lifetime to. If
- // this is omitted then the query will not expire.
- Session string
-
- // Token is the ACL token used when the query was created, and it is
- // used when a query is subsequently executed. This token, or a token
- // with management privileges, must be used to change the query later.
- Token string
-
- // Template is used to configure this query as a template, which will
- // respond to queries based on the Name, and then will be rendered
- // before it is executed.
- Template QueryTemplateOptions
-
- // Service defines a service query (leaving things open for other types
- // later).
- Service ServiceQuery
-
- // DNS has options that control how the results of this query are
- // served over DNS.
- DNS QueryDNSOptions
-
- RaftIndex
-}
-
-// GetACLPrefix returns the prefix to look up the prepared_query ACL policy for
-// this query, and whether the prefix applies to this query. You always need to
-// check the ok value before using the prefix.
-func (pq *PreparedQuery) GetACLPrefix() (string, bool) {
- if pq.Name != "" || pq.Template.Type != "" {
- return pq.Name, true
- }
-
- return "", false
-}
-
-type PreparedQueries []*PreparedQuery
-
-type IndexedPreparedQueries struct {
- Queries PreparedQueries
- QueryMeta
-}
-
-type PreparedQueryOp string
-
-const (
- PreparedQueryCreate PreparedQueryOp = "create"
- PreparedQueryUpdate PreparedQueryOp = "update"
- PreparedQueryDelete PreparedQueryOp = "delete"
-)
-
-// QueryRequest is used to create or change prepared queries.
-type PreparedQueryRequest struct {
- // Datacenter is the target this request is intended for.
- Datacenter string
-
- // Op is the operation to apply.
- Op PreparedQueryOp
-
- // Query is the query itself.
- Query *PreparedQuery
-
- // WriteRequest holds the ACL token to go along with this request.
- WriteRequest
-}
-
-// RequestDatacenter returns the datacenter for a given request.
-func (q *PreparedQueryRequest) RequestDatacenter() string {
- return q.Datacenter
-}
-
-// PreparedQuerySpecificRequest is used to get information about a prepared
-// query.
-type PreparedQuerySpecificRequest struct {
- // Datacenter is the target this request is intended for.
- Datacenter string
-
- // QueryID is the ID of a query.
- QueryID string
-
- // QueryOptions (unfortunately named here) controls the consistency
- // settings for the query lookup itself, as well as the service lookups.
- QueryOptions
-}
-
-// RequestDatacenter returns the datacenter for a given request.
-func (q *PreparedQuerySpecificRequest) RequestDatacenter() string {
- return q.Datacenter
-}
-
-// PreparedQueryExecuteRequest is used to execute a prepared query.
-type PreparedQueryExecuteRequest struct {
- // Datacenter is the target this request is intended for.
- Datacenter string
-
- // QueryIDOrName is the ID of a query _or_ the name of one, either can
- // be provided.
- QueryIDOrName string
-
- // Limit will trim the resulting list down to the given limit.
- Limit int
-
- // Source is used to sort the results relative to a given node using
- // network coordinates.
- Source QuerySource
-
- // Agent is used to carry around a reference to the agent which initiated
- // the execute request. Used to distance-sort relative to the local node.
- Agent QuerySource
-
- // QueryOptions (unfortunately named here) controls the consistency
- // settings for the query lookup itself, as well as the service lookups.
- QueryOptions
-}
-
-// RequestDatacenter returns the datacenter for a given request.
-func (q *PreparedQueryExecuteRequest) RequestDatacenter() string {
- return q.Datacenter
-}
-
-// PreparedQueryExecuteRemoteRequest is used when running a local query in a
-// remote datacenter.
-type PreparedQueryExecuteRemoteRequest struct {
- // Datacenter is the target this request is intended for.
- Datacenter string
-
- // Query is a copy of the query to execute. We have to ship the entire
- // query over since it won't be present in the remote state store.
- Query PreparedQuery
-
- // Limit will trim the resulting list down to the given limit.
- Limit int
-
- // QueryOptions (unfortunately named here) controls the consistency
- // settings for the the service lookups.
- QueryOptions
-}
-
-// RequestDatacenter returns the datacenter for a given request.
-func (q *PreparedQueryExecuteRemoteRequest) RequestDatacenter() string {
- return q.Datacenter
-}
-
-// PreparedQueryExecuteResponse has the results of executing a query.
-type PreparedQueryExecuteResponse struct {
- // Service is the service that was queried.
- Service string
-
- // Nodes has the nodes that were output by the query.
- Nodes CheckServiceNodes
-
- // DNS has the options for serving these results over DNS.
- DNS QueryDNSOptions
-
- // Datacenter is the datacenter that these results came from.
- Datacenter string
-
- // Failovers is a count of how many times we had to query a remote
- // datacenter.
- Failovers int
-
- // QueryMeta has freshness information about the query.
- QueryMeta
-}
-
-// PreparedQueryExplainResponse has the results when explaining a query/
-type PreparedQueryExplainResponse struct {
- // Query has the fully-rendered query.
- Query PreparedQuery
-
- // QueryMeta has freshness information about the query.
- QueryMeta
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go
deleted file mode 100644
index 3d65e317..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/snapshot.go
+++ /dev/null
@@ -1,40 +0,0 @@
-package structs
-
-type SnapshotOp int
-
-const (
- SnapshotSave SnapshotOp = iota
- SnapshotRestore
-)
-
-// SnapshotRequest is used as a header for a snapshot RPC request. This will
-// precede any streaming data that's part of the request and is JSON-encoded on
-// the wire.
-type SnapshotRequest struct {
- // Datacenter is the target datacenter for this request. The request
- // will be forwarded if necessary.
- Datacenter string
-
- // Token is the ACL token to use for the operation. If ACLs are enabled
- // then all operations require a management token.
- Token string
-
- // If set, any follower can service the request. Results may be
- // arbitrarily stale. Only applies to SnapshotSave.
- AllowStale bool
-
- // Op is the operation code for the RPC.
- Op SnapshotOp
-}
-
-// SnapshotResponse is used header for a snapshot RPC response. This will
-// precede any streaming data that's part of the request and is JSON-encoded on
-// the wire.
-type SnapshotResponse struct {
- // Error is the overall error status of the RPC request.
- Error string
-
- // QueryMeta has freshness information about the server that handled the
- // request. It is only filled in for a SnapshotSave.
- QueryMeta
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/structs.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/structs.go
deleted file mode 100644
index 13c67b3d..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/structs.go
+++ /dev/null
@@ -1,1041 +0,0 @@
-package structs
-
-import (
- "bytes"
- "fmt"
- "math/rand"
- "reflect"
- "time"
-
- "github.com/hashicorp/consul/acl"
- "github.com/hashicorp/consul/types"
- "github.com/hashicorp/go-msgpack/codec"
- "github.com/hashicorp/serf/coordinate"
- "regexp"
- "strings"
-)
-
-var (
- ErrNoLeader = fmt.Errorf("No cluster leader")
- ErrNoDCPath = fmt.Errorf("No path to datacenter")
- ErrNoServers = fmt.Errorf("No known Consul servers")
-)
-
-type MessageType uint8
-
-// RaftIndex is used to track the index used while creating
-// or modifying a given struct type.
-type RaftIndex struct {
- CreateIndex uint64
- ModifyIndex uint64
-}
-
-const (
- RegisterRequestType MessageType = iota
- DeregisterRequestType
- KVSRequestType
- SessionRequestType
- ACLRequestType
- TombstoneRequestType
- CoordinateBatchUpdateType
- PreparedQueryRequestType
- TxnRequestType
-)
-
-const (
- // IgnoreUnknownTypeFlag is set along with a MessageType
- // to indicate that the message type can be safely ignored
- // if it is not recognized. This is for future proofing, so
- // that new commands can be added in a way that won't cause
- // old servers to crash when the FSM attempts to process them.
- IgnoreUnknownTypeFlag MessageType = 128
-)
-
-const (
- // HealthAny is special, and is used as a wild card,
- // not as a specific state.
- HealthAny = "any"
- HealthPassing = "passing"
- HealthWarning = "warning"
- HealthCritical = "critical"
- HealthMaint = "maintenance"
-)
-
-const (
- // NodeMaint is the special key set by a node in maintenance mode.
- NodeMaint = "_node_maintenance"
-
- // ServiceMaintPrefix is the prefix for a service in maintenance mode.
- ServiceMaintPrefix = "_service_maintenance:"
-)
-
-const (
- // The meta key prefix reserved for Consul's internal use
- metaKeyReservedPrefix = "consul-"
-
- // The maximum number of metadata key pairs allowed to be registered
- metaMaxKeyPairs = 64
-
- // The maximum allowed length of a metadata key
- metaKeyMaxLength = 128
-
- // The maximum allowed length of a metadata value
- metaValueMaxLength = 512
-)
-
-var (
- // metaKeyFormat checks if a metadata key string is valid
- metaKeyFormat = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`).MatchString
-)
-
-func ValidStatus(s string) bool {
- return s == HealthPassing ||
- s == HealthWarning ||
- s == HealthCritical
-}
-
-const (
- // Client tokens have rules applied
- ACLTypeClient = "client"
-
- // Management tokens have an always allow policy.
- // They are used for token management.
- ACLTypeManagement = "management"
-)
-
-const (
- // MaxLockDelay provides a maximum LockDelay value for
- // a session. Any value above this will not be respected.
- MaxLockDelay = 60 * time.Second
-)
-
-// RPCInfo is used to describe common information about query
-type RPCInfo interface {
- RequestDatacenter() string
- IsRead() bool
- AllowStaleRead() bool
- ACLToken() string
-}
-
-// QueryOptions is used to specify various flags for read queries
-type QueryOptions struct {
- // Token is the ACL token ID. If not provided, the 'anonymous'
- // token is assumed for backwards compatibility.
- Token string
-
- // If set, wait until query exceeds given index. Must be provided
- // with MaxQueryTime.
- MinQueryIndex uint64
-
- // Provided with MinQueryIndex to wait for change.
- MaxQueryTime time.Duration
-
- // If set, any follower can service the request. Results
- // may be arbitrarily stale.
- AllowStale bool
-
- // If set, the leader must verify leadership prior to
- // servicing the request. Prevents a stale read.
- RequireConsistent bool
-}
-
-// QueryOption only applies to reads, so always true
-func (q QueryOptions) IsRead() bool {
- return true
-}
-
-func (q QueryOptions) AllowStaleRead() bool {
- return q.AllowStale
-}
-
-func (q QueryOptions) ACLToken() string {
- return q.Token
-}
-
-type WriteRequest struct {
- // Token is the ACL token ID. If not provided, the 'anonymous'
- // token is assumed for backwards compatibility.
- Token string
-}
-
-// WriteRequest only applies to writes, always false
-func (w WriteRequest) IsRead() bool {
- return false
-}
-
-func (w WriteRequest) AllowStaleRead() bool {
- return false
-}
-
-func (w WriteRequest) ACLToken() string {
- return w.Token
-}
-
-// QueryMeta allows a query response to include potentially
-// useful metadata about a query
-type QueryMeta struct {
- // This is the index associated with the read
- Index uint64
-
- // If AllowStale is used, this is time elapsed since
- // last contact between the follower and leader. This
- // can be used to gauge staleness.
- LastContact time.Duration
-
- // Used to indicate if there is a known leader node
- KnownLeader bool
-}
-
-// RegisterRequest is used for the Catalog.Register endpoint
-// to register a node as providing a service. If no service
-// is provided, the node is registered.
-type RegisterRequest struct {
- Datacenter string
- ID types.NodeID
- Node string
- Address string
- TaggedAddresses map[string]string
- NodeMeta map[string]string
- Service *NodeService
- Check *HealthCheck
- Checks HealthChecks
- WriteRequest
-}
-
-func (r *RegisterRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// ChangesNode returns true if the given register request changes the given
-// node, which can be nil. This only looks for changes to the node record itself,
-// not any of the health checks.
-func (r *RegisterRequest) ChangesNode(node *Node) bool {
- // This means it's creating the node.
- if node == nil {
- return true
- }
-
- // Check if any of the node-level fields are being changed.
- if r.ID != node.ID ||
- r.Node != node.Node ||
- r.Address != node.Address ||
- !reflect.DeepEqual(r.TaggedAddresses, node.TaggedAddresses) ||
- !reflect.DeepEqual(r.NodeMeta, node.Meta) {
- return true
- }
-
- return false
-}
-
-// DeregisterRequest is used for the Catalog.Deregister endpoint
-// to deregister a node as providing a service. If no service is
-// provided the entire node is deregistered.
-type DeregisterRequest struct {
- Datacenter string
- Node string
- ServiceID string
- CheckID types.CheckID
- WriteRequest
-}
-
-func (r *DeregisterRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// QuerySource is used to pass along information about the source node
-// in queries so that we can adjust the response based on its network
-// coordinates.
-type QuerySource struct {
- Datacenter string
- Node string
-}
-
-// DCSpecificRequest is used to query about a specific DC
-type DCSpecificRequest struct {
- Datacenter string
- NodeMetaFilters map[string]string
- Source QuerySource
- QueryOptions
-}
-
-func (r *DCSpecificRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// ServiceSpecificRequest is used to query about a specific service
-type ServiceSpecificRequest struct {
- Datacenter string
- NodeMetaFilters map[string]string
- ServiceName string
- ServiceTag string
- TagFilter bool // Controls tag filtering
- Source QuerySource
- QueryOptions
-}
-
-func (r *ServiceSpecificRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// NodeSpecificRequest is used to request the information about a single node
-type NodeSpecificRequest struct {
- Datacenter string
- Node string
- QueryOptions
-}
-
-func (r *NodeSpecificRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// ChecksInStateRequest is used to query for nodes in a state
-type ChecksInStateRequest struct {
- Datacenter string
- NodeMetaFilters map[string]string
- State string
- Source QuerySource
- QueryOptions
-}
-
-func (r *ChecksInStateRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// Used to return information about a node
-type Node struct {
- ID types.NodeID
- Node string
- Address string
- TaggedAddresses map[string]string
- Meta map[string]string
-
- RaftIndex
-}
-type Nodes []*Node
-
-// ValidateMeta validates a set of key/value pairs from the agent config
-func ValidateMetadata(meta map[string]string) error {
- if len(meta) > metaMaxKeyPairs {
- return fmt.Errorf("Node metadata cannot contain more than %d key/value pairs", metaMaxKeyPairs)
- }
-
- for key, value := range meta {
- if err := validateMetaPair(key, value); err != nil {
- return fmt.Errorf("Couldn't load metadata pair ('%s', '%s'): %s", key, value, err)
- }
- }
-
- return nil
-}
-
-// validateMetaPair checks that the given key/value pair is in a valid format
-func validateMetaPair(key, value string) error {
- if key == "" {
- return fmt.Errorf("Key cannot be blank")
- }
- if !metaKeyFormat(key) {
- return fmt.Errorf("Key contains invalid characters")
- }
- if len(key) > metaKeyMaxLength {
- return fmt.Errorf("Key is too long (limit: %d characters)", metaKeyMaxLength)
- }
- if strings.HasPrefix(key, metaKeyReservedPrefix) {
- return fmt.Errorf("Key prefix '%s' is reserved for internal use", metaKeyReservedPrefix)
- }
- if len(value) > metaValueMaxLength {
- return fmt.Errorf("Value is too long (limit: %d characters)", metaValueMaxLength)
- }
- return nil
-}
-
-// SatisfiesMetaFilters returns true if the metadata map contains the given filters
-func SatisfiesMetaFilters(meta map[string]string, filters map[string]string) bool {
- for key, value := range filters {
- if v, ok := meta[key]; !ok || v != value {
- return false
- }
- }
- return true
-}
-
-// Used to return information about a provided services.
-// Maps service name to available tags
-type Services map[string][]string
-
-// ServiceNode represents a node that is part of a service. ID, Address,
-// TaggedAddresses, and NodeMeta are node-related fields that are always empty
-// in the state store and are filled in on the way out by parseServiceNodes().
-// This is also why PartialClone() skips them, because we know they are blank
-// already so it would be a waste of time to copy them.
-type ServiceNode struct {
- ID types.NodeID
- Node string
- Address string
- TaggedAddresses map[string]string
- NodeMeta map[string]string
- ServiceID string
- ServiceName string
- ServiceTags []string
- ServiceAddress string
- ServicePort int
- ServiceEnableTagOverride bool
-
- RaftIndex
-}
-
-// PartialClone() returns a clone of the given service node, minus the node-
-// related fields that get filled in later, Address and TaggedAddresses.
-func (s *ServiceNode) PartialClone() *ServiceNode {
- tags := make([]string, len(s.ServiceTags))
- copy(tags, s.ServiceTags)
-
- return &ServiceNode{
- // Skip ID, see above.
- Node: s.Node,
- // Skip Address, see above.
- // Skip TaggedAddresses, see above.
- ServiceID: s.ServiceID,
- ServiceName: s.ServiceName,
- ServiceTags: tags,
- ServiceAddress: s.ServiceAddress,
- ServicePort: s.ServicePort,
- ServiceEnableTagOverride: s.ServiceEnableTagOverride,
- RaftIndex: RaftIndex{
- CreateIndex: s.CreateIndex,
- ModifyIndex: s.ModifyIndex,
- },
- }
-}
-
-// ToNodeService converts the given service node to a node service.
-func (s *ServiceNode) ToNodeService() *NodeService {
- return &NodeService{
- ID: s.ServiceID,
- Service: s.ServiceName,
- Tags: s.ServiceTags,
- Address: s.ServiceAddress,
- Port: s.ServicePort,
- EnableTagOverride: s.ServiceEnableTagOverride,
- RaftIndex: RaftIndex{
- CreateIndex: s.CreateIndex,
- ModifyIndex: s.ModifyIndex,
- },
- }
-}
-
-type ServiceNodes []*ServiceNode
-
-// NodeService is a service provided by a node
-type NodeService struct {
- ID string
- Service string
- Tags []string
- Address string
- Port int
- EnableTagOverride bool
-
- RaftIndex
-}
-
-// IsSame checks if one NodeService is the same as another, without looking
-// at the Raft information (that's why we didn't call it IsEqual). This is
-// useful for seeing if an update would be idempotent for all the functional
-// parts of the structure.
-func (s *NodeService) IsSame(other *NodeService) bool {
- if s.ID != other.ID ||
- s.Service != other.Service ||
- !reflect.DeepEqual(s.Tags, other.Tags) ||
- s.Address != other.Address ||
- s.Port != other.Port ||
- s.EnableTagOverride != other.EnableTagOverride {
- return false
- }
-
- return true
-}
-
-// ToServiceNode converts the given node service to a service node.
-func (s *NodeService) ToServiceNode(node string) *ServiceNode {
- return &ServiceNode{
- // Skip ID, see ServiceNode definition.
- Node: node,
- // Skip Address, see ServiceNode definition.
- // Skip TaggedAddresses, see ServiceNode definition.
- ServiceID: s.ID,
- ServiceName: s.Service,
- ServiceTags: s.Tags,
- ServiceAddress: s.Address,
- ServicePort: s.Port,
- ServiceEnableTagOverride: s.EnableTagOverride,
- RaftIndex: RaftIndex{
- CreateIndex: s.CreateIndex,
- ModifyIndex: s.ModifyIndex,
- },
- }
-}
-
-type NodeServices struct {
- Node *Node
- Services map[string]*NodeService
-}
-
-// HealthCheck represents a single check on a given node
-type HealthCheck struct {
- Node string
- CheckID types.CheckID // Unique per-node ID
- Name string // Check name
- Status string // The current check status
- Notes string // Additional notes with the status
- Output string // Holds output of script runs
- ServiceID string // optional associated service
- ServiceName string // optional service name
-
- RaftIndex
-}
-
-// IsSame checks if one HealthCheck is the same as another, without looking
-// at the Raft information (that's why we didn't call it IsEqual). This is
-// useful for seeing if an update would be idempotent for all the functional
-// parts of the structure.
-func (c *HealthCheck) IsSame(other *HealthCheck) bool {
- if c.Node != other.Node ||
- c.CheckID != other.CheckID ||
- c.Name != other.Name ||
- c.Status != other.Status ||
- c.Notes != other.Notes ||
- c.Output != other.Output ||
- c.ServiceID != other.ServiceID ||
- c.ServiceName != other.ServiceName {
- return false
- }
-
- return true
-}
-
-// Clone returns a distinct clone of the HealthCheck.
-func (c *HealthCheck) Clone() *HealthCheck {
- clone := new(HealthCheck)
- *clone = *c
- return clone
-}
-
-// HealthChecks is a collection of HealthCheck structs.
-type HealthChecks []*HealthCheck
-
-// CheckServiceNode is used to provide the node, its service
-// definition, as well as a HealthCheck that is associated.
-type CheckServiceNode struct {
- Node *Node
- Service *NodeService
- Checks HealthChecks
-}
-type CheckServiceNodes []CheckServiceNode
-
-// Shuffle does an in-place random shuffle using the Fisher-Yates algorithm.
-func (nodes CheckServiceNodes) Shuffle() {
- for i := len(nodes) - 1; i > 0; i-- {
- j := rand.Int31n(int32(i + 1))
- nodes[i], nodes[j] = nodes[j], nodes[i]
- }
-}
-
-// Filter removes nodes that are failing health checks (and any non-passing
-// check if that option is selected). Note that this returns the filtered
-// results AND modifies the receiver for performance.
-func (nodes CheckServiceNodes) Filter(onlyPassing bool) CheckServiceNodes {
- n := len(nodes)
-OUTER:
- for i := 0; i < n; i++ {
- node := nodes[i]
- for _, check := range node.Checks {
- if check.Status == HealthCritical ||
- (onlyPassing && check.Status != HealthPassing) {
- nodes[i], nodes[n-1] = nodes[n-1], CheckServiceNode{}
- n--
- i--
- continue OUTER
- }
- }
- }
- return nodes[:n]
-}
-
-// NodeInfo is used to dump all associated information about
-// a node. This is currently used for the UI only, as it is
-// rather expensive to generate.
-type NodeInfo struct {
- ID types.NodeID
- Node string
- Address string
- TaggedAddresses map[string]string
- Meta map[string]string
- Services []*NodeService
- Checks HealthChecks
-}
-
-// NodeDump is used to dump all the nodes with all their
-// associated data. This is currently used for the UI only,
-// as it is rather expensive to generate.
-type NodeDump []*NodeInfo
-
-type IndexedNodes struct {
- Nodes Nodes
- QueryMeta
-}
-
-type IndexedServices struct {
- Services Services
- QueryMeta
-}
-
-type IndexedServiceNodes struct {
- ServiceNodes ServiceNodes
- QueryMeta
-}
-
-type IndexedNodeServices struct {
- NodeServices *NodeServices
- QueryMeta
-}
-
-type IndexedHealthChecks struct {
- HealthChecks HealthChecks
- QueryMeta
-}
-
-type IndexedCheckServiceNodes struct {
- Nodes CheckServiceNodes
- QueryMeta
-}
-
-type IndexedNodeDump struct {
- Dump NodeDump
- QueryMeta
-}
-
-// DirEntry is used to represent a directory entry. This is
-// used for values in our Key-Value store.
-type DirEntry struct {
- LockIndex uint64
- Key string
- Flags uint64
- Value []byte
- Session string `json:",omitempty"`
-
- RaftIndex
-}
-
-// Returns a clone of the given directory entry.
-func (d *DirEntry) Clone() *DirEntry {
- return &DirEntry{
- LockIndex: d.LockIndex,
- Key: d.Key,
- Flags: d.Flags,
- Value: d.Value,
- Session: d.Session,
- RaftIndex: RaftIndex{
- CreateIndex: d.CreateIndex,
- ModifyIndex: d.ModifyIndex,
- },
- }
-}
-
-type DirEntries []*DirEntry
-
-type KVSOp string
-
-const (
- KVSSet KVSOp = "set"
- KVSDelete = "delete"
- KVSDeleteCAS = "delete-cas" // Delete with check-and-set
- KVSDeleteTree = "delete-tree"
- KVSCAS = "cas" // Check-and-set
- KVSLock = "lock" // Lock a key
- KVSUnlock = "unlock" // Unlock a key
-
- // The following operations are only available inside of atomic
- // transactions via the Txn request.
- KVSGet = "get" // Read the key during the transaction.
- KVSGetTree = "get-tree" // Read all keys with the given prefix during the transaction.
- KVSCheckSession = "check-session" // Check the session holds the key.
- KVSCheckIndex = "check-index" // Check the modify index of the key.
-)
-
-// IsWrite returns true if the given operation alters the state store.
-func (op KVSOp) IsWrite() bool {
- switch op {
- case KVSGet, KVSGetTree, KVSCheckSession, KVSCheckIndex:
- return false
-
- default:
- return true
- }
-}
-
-// KVSRequest is used to operate on the Key-Value store
-type KVSRequest struct {
- Datacenter string
- Op KVSOp // Which operation are we performing
- DirEnt DirEntry // Which directory entry
- WriteRequest
-}
-
-func (r *KVSRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// KeyRequest is used to request a key, or key prefix
-type KeyRequest struct {
- Datacenter string
- Key string
- QueryOptions
-}
-
-func (r *KeyRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// KeyListRequest is used to list keys
-type KeyListRequest struct {
- Datacenter string
- Prefix string
- Seperator string
- QueryOptions
-}
-
-func (r *KeyListRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-type IndexedDirEntries struct {
- Entries DirEntries
- QueryMeta
-}
-
-type IndexedKeyList struct {
- Keys []string
- QueryMeta
-}
-
-type SessionBehavior string
-
-const (
- SessionKeysRelease SessionBehavior = "release"
- SessionKeysDelete = "delete"
-)
-
-const (
- SessionTTLMax = 24 * time.Hour
- SessionTTLMultiplier = 2
-)
-
-// Session is used to represent an open session in the KV store.
-// This issued to associate node checks with acquired locks.
-type Session struct {
- ID string
- Name string
- Node string
- Checks []types.CheckID
- LockDelay time.Duration
- Behavior SessionBehavior // What to do when session is invalidated
- TTL string
-
- RaftIndex
-}
-type Sessions []*Session
-
-type SessionOp string
-
-const (
- SessionCreate SessionOp = "create"
- SessionDestroy = "destroy"
-)
-
-// SessionRequest is used to operate on sessions
-type SessionRequest struct {
- Datacenter string
- Op SessionOp // Which operation are we performing
- Session Session // Which session
- WriteRequest
-}
-
-func (r *SessionRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// SessionSpecificRequest is used to request a session by ID
-type SessionSpecificRequest struct {
- Datacenter string
- Session string
- QueryOptions
-}
-
-func (r *SessionSpecificRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-type IndexedSessions struct {
- Sessions Sessions
- QueryMeta
-}
-
-// ACL is used to represent a token and its rules
-type ACL struct {
- ID string
- Name string
- Type string
- Rules string
-
- RaftIndex
-}
-type ACLs []*ACL
-
-type ACLOp string
-
-const (
- ACLSet ACLOp = "set"
- ACLForceSet = "force-set" // Deprecated, left to backwards compatibility
- ACLDelete = "delete"
-)
-
-// IsSame checks if one ACL is the same as another, without looking
-// at the Raft information (that's why we didn't call it IsEqual). This is
-// useful for seeing if an update would be idempotent for all the functional
-// parts of the structure.
-func (a *ACL) IsSame(other *ACL) bool {
- if a.ID != other.ID ||
- a.Name != other.Name ||
- a.Type != other.Type ||
- a.Rules != other.Rules {
- return false
- }
-
- return true
-}
-
-// ACLRequest is used to create, update or delete an ACL
-type ACLRequest struct {
- Datacenter string
- Op ACLOp
- ACL ACL
- WriteRequest
-}
-
-func (r *ACLRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// ACLRequests is a list of ACL change requests.
-type ACLRequests []*ACLRequest
-
-// ACLSpecificRequest is used to request an ACL by ID
-type ACLSpecificRequest struct {
- Datacenter string
- ACL string
- QueryOptions
-}
-
-func (r *ACLSpecificRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// ACLPolicyRequest is used to request an ACL by ID, conditionally
-// filtering on an ID
-type ACLPolicyRequest struct {
- Datacenter string
- ACL string
- ETag string
- QueryOptions
-}
-
-func (r *ACLPolicyRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-type IndexedACLs struct {
- ACLs ACLs
- QueryMeta
-}
-
-type ACLPolicy struct {
- ETag string
- Parent string
- Policy *acl.Policy
- TTL time.Duration
- QueryMeta
-}
-
-// ACLReplicationStatus provides information about the health of the ACL
-// replication system.
-type ACLReplicationStatus struct {
- Enabled bool
- Running bool
- SourceDatacenter string
- ReplicatedIndex uint64
- LastSuccess time.Time
- LastError time.Time
-}
-
-// Coordinate stores a node name with its associated network coordinate.
-type Coordinate struct {
- Node string
- Coord *coordinate.Coordinate
-}
-
-type Coordinates []*Coordinate
-
-// IndexedCoordinate is used to represent a single node's coordinate from the state
-// store.
-type IndexedCoordinate struct {
- Coord *coordinate.Coordinate
- QueryMeta
-}
-
-// IndexedCoordinates is used to represent a list of nodes and their
-// corresponding raw coordinates.
-type IndexedCoordinates struct {
- Coordinates Coordinates
- QueryMeta
-}
-
-// DatacenterMap is used to represent a list of nodes with their raw coordinates,
-// associated with a datacenter.
-type DatacenterMap struct {
- Datacenter string
- Coordinates Coordinates
-}
-
-// CoordinateUpdateRequest is used to update the network coordinate of a given
-// node.
-type CoordinateUpdateRequest struct {
- Datacenter string
- Node string
- Coord *coordinate.Coordinate
- WriteRequest
-}
-
-// RequestDatacenter returns the datacenter for a given update request.
-func (c *CoordinateUpdateRequest) RequestDatacenter() string {
- return c.Datacenter
-}
-
-// EventFireRequest is used to ask a server to fire
-// a Serf event. It is a bit odd, since it doesn't depend on
-// the catalog or leader. Any node can respond, so it's not quite
-// like a standard write request. This is used only internally.
-type EventFireRequest struct {
- Datacenter string
- Name string
- Payload []byte
-
- // Not using WriteRequest so that any server can process
- // the request. It is a bit unusual...
- QueryOptions
-}
-
-func (r *EventFireRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// EventFireResponse is used to respond to a fire request.
-type EventFireResponse struct {
- QueryMeta
-}
-
-type TombstoneOp string
-
-const (
- TombstoneReap TombstoneOp = "reap"
-)
-
-// TombstoneRequest is used to trigger a reaping of the tombstones
-type TombstoneRequest struct {
- Datacenter string
- Op TombstoneOp
- ReapIndex uint64
- WriteRequest
-}
-
-func (r *TombstoneRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// msgpackHandle is a shared handle for encoding/decoding of structs
-var msgpackHandle = &codec.MsgpackHandle{}
-
-// Decode is used to decode a MsgPack encoded object
-func Decode(buf []byte, out interface{}) error {
- return codec.NewDecoder(bytes.NewReader(buf), msgpackHandle).Decode(out)
-}
-
-// Encode is used to encode a MsgPack object with type prefix
-func Encode(t MessageType, msg interface{}) ([]byte, error) {
- var buf bytes.Buffer
- buf.WriteByte(uint8(t))
- err := codec.NewEncoder(&buf, msgpackHandle).Encode(msg)
- return buf.Bytes(), err
-}
-
-// CompoundResponse is an interface for gathering multiple responses. It is
-// used in cross-datacenter RPC calls where more than 1 datacenter is
-// expected to reply.
-type CompoundResponse interface {
- // Add adds a new response to the compound response
- Add(interface{})
-
- // New returns an empty response object which can be passed around by
- // reference, and then passed to Add() later on.
- New() interface{}
-}
-
-type KeyringOp string
-
-const (
- KeyringList KeyringOp = "list"
- KeyringInstall = "install"
- KeyringUse = "use"
- KeyringRemove = "remove"
-)
-
-// KeyringRequest encapsulates a request to modify an encryption keyring.
-// It can be used for install, remove, or use key type operations.
-type KeyringRequest struct {
- Operation KeyringOp
- Key string
- Datacenter string
- Forwarded bool
- RelayFactor uint8
- QueryOptions
-}
-
-func (r *KeyringRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// KeyringResponse is a unified key response and can be used for install,
-// remove, use, as well as listing key queries.
-type KeyringResponse struct {
- WAN bool
- Datacenter string
- Messages map[string]string `json:",omitempty"`
- Keys map[string]int
- NumNodes int
- Error string `json:",omitempty"`
-}
-
-// KeyringResponses holds multiple responses to keyring queries. Each
-// datacenter replies independently, and KeyringResponses is used as a
-// container for the set of all responses.
-type KeyringResponses struct {
- Responses []*KeyringResponse
- QueryMeta
-}
-
-func (r *KeyringResponses) Add(v interface{}) {
- val := v.(*KeyringResponses)
- r.Responses = append(r.Responses, val.Responses...)
-}
-
-func (r *KeyringResponses) New() interface{} {
- return new(KeyringResponses)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/txn.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/txn.go
deleted file mode 100644
index 3f8035b9..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/consul/structs/txn.go
+++ /dev/null
@@ -1,85 +0,0 @@
-package structs
-
-import (
- "fmt"
-)
-
-// TxnKVOp is used to define a single operation on the KVS inside a
-// transaction
-type TxnKVOp struct {
- Verb KVSOp
- DirEnt DirEntry
-}
-
-// TxnKVResult is used to define the result of a single operation on the KVS
-// inside a transaction.
-type TxnKVResult *DirEntry
-
-// TxnOp is used to define a single operation inside a transaction. Only one
-// of the types should be filled out per entry.
-type TxnOp struct {
- KV *TxnKVOp
-}
-
-// TxnOps is a list of operations within a transaction.
-type TxnOps []*TxnOp
-
-// TxnRequest is used to apply multiple operations to the state store in a
-// single transaction
-type TxnRequest struct {
- Datacenter string
- Ops TxnOps
- WriteRequest
-}
-
-func (r *TxnRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// TxnReadRequest is used as a fast path for read-only transactions that don't
-// modify the state store.
-type TxnReadRequest struct {
- Datacenter string
- Ops TxnOps
- QueryOptions
-}
-
-func (r *TxnReadRequest) RequestDatacenter() string {
- return r.Datacenter
-}
-
-// TxnError is used to return information about an error for a specific
-// operation.
-type TxnError struct {
- OpIndex int
- What string
-}
-
-// Error returns the string representation of an atomic error.
-func (e TxnError) Error() string {
- return fmt.Sprintf("op %d: %s", e.OpIndex, e.What)
-}
-
-// TxnErrors is a list of TxnError entries.
-type TxnErrors []*TxnError
-
-// TxnResult is used to define the result of a given operation inside a
-// transaction. Only one of the types should be filled out per entry.
-type TxnResult struct {
- KV TxnKVResult
-}
-
-// TxnResults is a list of TxnResult entries.
-type TxnResults []*TxnResult
-
-// TxnResponse is the structure returned by a TxnRequest.
-type TxnResponse struct {
- Results TxnResults
- Errors TxnErrors
-}
-
-// TxnReadResponse is the structure returned by a TxnReadRequest.
-type TxnReadResponse struct {
- TxnResponse
- QueryMeta
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/README.md
index 21eb01d2..bd84f822 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/README.md
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/README.md
@@ -25,41 +25,54 @@ import (
"github.com/hashicorp/consul/testutil"
)
-func TestMain(t *testing.T) {
+func TestFoo_bar(t *testing.T) {
// Create a test Consul server
- srv1 := testutil.NewTestServer(t)
+ srv1, err := testutil.NewTestServer()
+ if err != nil {
+ t.Fatal(err)
+ }
defer srv1.Stop()
// Create a secondary server, passing in configuration
// to avoid bootstrapping as we are forming a cluster.
- srv2 := testutil.NewTestServerConfig(t, func(c *testutil.TestServerConfig) {
+ srv2, err := testutil.NewTestServerConfig(func(c *testutil.TestServerConfig) {
c.Bootstrap = false
})
+ if err != nil {
+ t.Fatal(err)
+ }
defer srv2.Stop()
// Join the servers together
- srv1.JoinLAN(srv2.LANAddr)
+ srv1.JoinLAN(t, srv2.LANAddr)
// Create a test key/value pair
- srv1.SetKV("foo", []byte("bar"))
+ srv1.SetKV(t, "foo", []byte("bar"))
// Create lots of test key/value pairs
- srv1.PopulateKV(map[string][]byte{
+ srv1.PopulateKV(t, map[string][]byte{
"bar": []byte("123"),
"baz": []byte("456"),
})
// Create a service
- srv1.AddService("redis", structs.HealthPassing, []string{"master"})
+ srv1.AddService(t, "redis", structs.HealthPassing, []string{"master"})
+
+ // Create a service that the code under test can actually connect to
+ srv1.AddAccessibleService("redis", structs.HealthPassing, "127.0.0.1", 6379, []string{"master"})
// Create a service check
- srv1.AddCheck("service:redis", "redis", structs.HealthPassing)
+ srv1.AddCheck(t, "service:redis", "redis", structs.HealthPassing)
// Create a node check
- srv1.AddCheck("mem", "", structs.HealthCritical)
+ srv1.AddCheck(t, "mem", "", structs.HealthCritical)
// The HTTPAddr field contains the address of the Consul
// API on the new test server instance.
println(srv1.HTTPAddr)
+
+ // All functions are also available in a wrapped form so "t" does not need to be passed to every call
+ wrap := srv1.Wrap(t)
+ wrap.SetKV("foo", []byte("bar"))
}
```
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/io.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/io.go
new file mode 100644
index 00000000..7d0ca6ef
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/io.go
@@ -0,0 +1,61 @@
+package testutil
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "strings"
+ "testing"
+)
+
+// tmpdir is the base directory for all temporary directories
+// and files created with TempDir and TempFile. This could be
+// achieved by setting a system environment variable but then
+// the test execution would depend on whether or not the
+// environment variable is set.
+//
+// On macOS the default temp base directory is quite long, which
+// breaks some tests that bind to UNIX sockets because the socket
+// path exceeds the length limit. Using a shorter base directory
+// fixes this and makes the paths more readable.
+//
+// It also provides a single base directory for cleanup.
+var tmpdir = "/tmp/consul-test"
+
+func init() {
+ if err := os.MkdirAll(tmpdir, 0755); err != nil {
+ fmt.Printf("Cannot create %s. Reverting to /tmp\n", tmpdir)
+ tmpdir = "/tmp"
+ }
+}
+
+// TempDir creates a temporary directory within tmpdir
+// with the name 'testname-name'. If the directory cannot
+// be created t.Fatal is called.
+func TempDir(t *testing.T, name string) string {
+ if t != nil && t.Name() != "" {
+ name = t.Name() + "-" + name
+ }
+ name = strings.Replace(name, "/", "_", -1)
+ d, err := ioutil.TempDir(tmpdir, name)
+ if err != nil {
+ t.Fatalf("err: %s", err)
+ }
+ return d
+}
+
+// TempFile creates a temporary file within tmpdir
+// with the name 'testname-name'. If the file cannot
+// be created t.Fatal is called. If a temporary directory
+// was created earlier, consider placing the file inside it
+// so both are removed with a single cleanup.
+func TempFile(t *testing.T, name string) *os.File {
+ if t != nil && t.Name() != "" {
+ name = t.Name() + "-" + name
+ }
+ f, err := ioutil.TempFile(tmpdir, name)
+ if err != nil {
+ t.Fatalf("err: %s", err)
+ }
+ return f
+}
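Aside: a typical caller pairs these helpers with explicit cleanup, since only the shared base directory is managed for you. A sketch of test usage (the test name and file contents are hypothetical):

```go
package mypkg

import (
	"io/ioutil"
	"os"
	"testing"

	"github.com/hashicorp/consul/testutil"
)

func TestConfigLoad(t *testing.T) {
	// Directory and file names are derived from the running test's name.
	dir := testutil.TempDir(t, "config")
	defer os.RemoveAll(dir)

	f := testutil.TempFile(t, "json")
	defer os.Remove(f.Name())

	if err := ioutil.WriteFile(f.Name(), []byte(`{}`), 0644); err != nil {
		t.Fatal(err)
	}
}
```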
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/retry/retry.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/retry/retry.go
new file mode 100644
index 00000000..cfbdde3c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/retry/retry.go
@@ -0,0 +1,197 @@
+// Package retry provides support for repeating operations in tests.
+//
+// A sample retry operation looks like this:
+//
+// func TestX(t *testing.T) {
+// retry.Run(t, func(r *retry.R) {
+// if err := foo(); err != nil {
+// r.Fatal("f: ", err)
+// }
+// })
+// }
+//
+package retry
+
+import (
+ "bytes"
+ "fmt"
+ "runtime"
+ "strings"
+ "sync"
+ "time"
+)
+
+// Failer is an interface compatible with testing.T.
+type Failer interface {
+ // Log is called for the final test output
+ Log(args ...interface{})
+
+ // FailNow is called when the retrying is abandoned.
+ FailNow()
+}
+
+// R provides context for the retryer.
+type R struct {
+ fail bool
+ output []string
+}
+
+func (r *R) FailNow() {
+ r.fail = true
+ runtime.Goexit()
+}
+
+func (r *R) Fatal(args ...interface{}) {
+ r.log(fmt.Sprint(args...))
+ r.FailNow()
+}
+
+func (r *R) Fatalf(format string, args ...interface{}) {
+ r.log(fmt.Sprintf(format, args...))
+ r.FailNow()
+}
+
+func (r *R) Error(args ...interface{}) {
+ r.log(fmt.Sprint(args...))
+ r.fail = true
+}
+
+func (r *R) Check(err error) {
+ if err != nil {
+ r.log(err.Error())
+ r.FailNow()
+ }
+}
+
+func (r *R) log(s string) {
+ r.output = append(r.output, decorate(s))
+}
+
+func decorate(s string) string {
+ _, file, line, ok := runtime.Caller(3)
+ if ok {
+ n := strings.LastIndex(file, "/")
+ if n >= 0 {
+ file = file[n+1:]
+ }
+ } else {
+ file = "???"
+ line = 1
+ }
+ return fmt.Sprintf("%s:%d: %s", file, line, s)
+}
+
+func Run(t Failer, f func(r *R)) {
+ run(TwoSeconds(), t, f)
+}
+
+func RunWith(r Retryer, t Failer, f func(r *R)) {
+ run(r, t, f)
+}
+
+func dedup(a []string) string {
+ if len(a) == 0 {
+ return ""
+ }
+ m := map[string]int{}
+ for _, s := range a {
+ m[s] = m[s] + 1
+ }
+ var b bytes.Buffer
+ for _, s := range a {
+ if _, ok := m[s]; ok {
+ b.WriteString(s)
+ b.WriteRune('\n')
+ delete(m, s)
+ }
+ }
+ return b.String()
+}
+
+func run(r Retryer, t Failer, f func(r *R)) {
+ rr := &R{}
+ fail := func() {
+ out := dedup(rr.output)
+ if out != "" {
+ t.Log(out)
+ }
+ t.FailNow()
+ }
+ for r.NextOr(fail) {
+ var wg sync.WaitGroup
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ f(rr)
+ }()
+ wg.Wait()
+ if rr.fail {
+ rr.fail = false
+ continue
+ }
+ break
+ }
+}
+
+// TwoSeconds repeats an operation for two seconds and waits 25ms in between.
+func TwoSeconds() *Timer {
+ return &Timer{Timeout: 2 * time.Second, Wait: 25 * time.Millisecond}
+}
+
+// ThreeTimes repeats an operation three times and waits 25ms in between.
+func ThreeTimes() *Counter {
+ return &Counter{Count: 3, Wait: 25 * time.Millisecond}
+}
+
+// Retryer provides an interface for repeating operations
+// until they succeed or an exit condition is met.
+type Retryer interface {
+ // NextOr returns true if the operation should be repeated.
+ // Otherwise, it calls fail and returns false.
+ NextOr(fail func()) bool
+}
+
+// Counter repeats an operation a given number of
+// times and waits between subsequent operations.
+type Counter struct {
+ Count int
+ Wait time.Duration
+
+ count int
+}
+
+func (r *Counter) NextOr(fail func()) bool {
+ if r.count == r.Count {
+ fail()
+ return false
+ }
+ if r.count > 0 {
+ time.Sleep(r.Wait)
+ }
+ r.count++
+ return true
+}
+
+// Timer repeats an operation for a given amount
+// of time and waits between subsequent operations.
+type Timer struct {
+ Timeout time.Duration
+ Wait time.Duration
+
+ // stop is the timeout deadline.
+ // Set on the first invocation of NextOr().
+ stop time.Time
+}
+
+func (r *Timer) NextOr(fail func()) bool {
+ if r.stop.IsZero() {
+ r.stop = time.Now().Add(r.Timeout)
+ return true
+ }
+ if time.Now().After(r.stop) {
+ fail()
+ return false
+ }
+ time.Sleep(r.Wait)
+ return true
+}
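For operations that need a different retry budget than `Run`'s default `TwoSeconds()`, a custom `Retryer` can be passed to `RunWith`. A sketch, where `checkHealth` is a hypothetical stand-in for the operation under test; note that `*testing.T` satisfies the `Failer` interface directly:

```go
package example

import (
	"testing"
	"time"

	"github.com/hashicorp/consul/testutil/retry"
)

// checkHealth stands in for the operation being retried.
func checkHealth() error { return nil }

func TestWithCustomRetryer(t *testing.T) {
	// Try up to 10 times with 100ms between attempts, instead of the
	// 2s/25ms budget that retry.Run uses via TwoSeconds().
	c := &retry.Counter{Count: 10, Wait: 100 * time.Millisecond}
	retry.RunWith(c, t, func(r *retry.R) {
		if err := checkHealth(); err != nil {
			r.Fatal(err)
		}
	})
}
```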
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server.go
index 7daa21ed..969d06a5 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server.go
@@ -12,8 +12,7 @@ package testutil
// otherwise cause an import cycle.
import (
- "bytes"
- "encoding/base64"
+ "context"
"encoding/json"
"fmt"
"io"
@@ -22,11 +21,16 @@ import (
"net/http"
"os"
"os/exec"
+ "path/filepath"
"strconv"
"strings"
+ "testing"
+ "time"
- "github.com/hashicorp/consul/consul/structs"
+ "github.com/hashicorp/consul/testutil/retry"
"github.com/hashicorp/go-cleanhttp"
+ "github.com/hashicorp/go-uuid"
+ "github.com/pkg/errors"
)
// TestPerformanceConfig configures the performance parameters.
@@ -39,10 +43,13 @@ type TestPerformanceConfig struct {
type TestPortConfig struct {
DNS int `json:"dns,omitempty"`
HTTP int `json:"http,omitempty"`
- RPC int `json:"rpc,omitempty"`
+ HTTPS int `json:"https,omitempty"`
SerfLan int `json:"serf_lan,omitempty"`
SerfWan int `json:"serf_wan,omitempty"`
Server int `json:"server,omitempty"`
+
+ // Deprecated
+ RPC int `json:"rpc,omitempty"`
}
// TestAddressConfig contains the bind addresses for various
@@ -53,24 +60,36 @@ type TestAddressConfig struct {
// TestServerConfig is the main server configuration struct.
type TestServerConfig struct {
- NodeName string `json:"node_name"`
- NodeMeta map[string]string `json:"node_meta,omitempty"`
- Performance *TestPerformanceConfig `json:"performance,omitempty"`
- Bootstrap bool `json:"bootstrap,omitempty"`
- Server bool `json:"server,omitempty"`
- DataDir string `json:"data_dir,omitempty"`
- Datacenter string `json:"datacenter,omitempty"`
- DisableCheckpoint bool `json:"disable_update_check"`
- LogLevel string `json:"log_level,omitempty"`
- Bind string `json:"bind_addr,omitempty"`
- Addresses *TestAddressConfig `json:"addresses,omitempty"`
- Ports *TestPortConfig `json:"ports,omitempty"`
- ACLMasterToken string `json:"acl_master_token,omitempty"`
- ACLDatacenter string `json:"acl_datacenter,omitempty"`
- ACLDefaultPolicy string `json:"acl_default_policy,omitempty"`
- Encrypt string `json:"encrypt,omitempty"`
- Stdout, Stderr io.Writer `json:"-"`
- Args []string `json:"-"`
+ NodeName string `json:"node_name"`
+ NodeID string `json:"node_id"`
+ NodeMeta map[string]string `json:"node_meta,omitempty"`
+ Performance *TestPerformanceConfig `json:"performance,omitempty"`
+ Bootstrap bool `json:"bootstrap,omitempty"`
+ Server bool `json:"server,omitempty"`
+ DataDir string `json:"data_dir,omitempty"`
+ Datacenter string `json:"datacenter,omitempty"`
+ DisableCheckpoint bool `json:"disable_update_check"`
+ LogLevel string `json:"log_level,omitempty"`
+ Bind string `json:"bind_addr,omitempty"`
+ Addresses *TestAddressConfig `json:"addresses,omitempty"`
+ Ports *TestPortConfig `json:"ports,omitempty"`
+ RaftProtocol int `json:"raft_protocol,omitempty"`
+ ACLMasterToken string `json:"acl_master_token,omitempty"`
+ ACLDatacenter string `json:"acl_datacenter,omitempty"`
+ ACLDefaultPolicy string `json:"acl_default_policy,omitempty"`
+ ACLEnforceVersion8 bool `json:"acl_enforce_version_8"`
+ Encrypt string `json:"encrypt,omitempty"`
+ CAFile string `json:"ca_file,omitempty"`
+ CertFile string `json:"cert_file,omitempty"`
+ KeyFile string `json:"key_file,omitempty"`
+ VerifyIncoming bool `json:"verify_incoming,omitempty"`
+ VerifyIncomingRPC bool `json:"verify_incoming_rpc,omitempty"`
+ VerifyIncomingHTTPS bool `json:"verify_incoming_https,omitempty"`
+ VerifyOutgoing bool `json:"verify_outgoing,omitempty"`
+ EnableScriptChecks bool `json:"enable_script_checks,omitempty"`
+ ReadyTimeout time.Duration `json:"-"`
+ Stdout, Stderr io.Writer `json:"-"`
+ Args []string `json:"-"`
}
// ServerConfigCallback is a function interface which can be
@@ -80,8 +99,14 @@ type ServerConfigCallback func(c *TestServerConfig)
// defaultServerConfig returns a new TestServerConfig struct
// with all of the listen ports incremented by one.
func defaultServerConfig() *TestServerConfig {
+ nodeID, err := uuid.GenerateUUID()
+ if err != nil {
+ panic(err)
+ }
+
return &TestServerConfig{
NodeName: fmt.Sprintf("node%d", randomPort()),
+ NodeID: nodeID,
DisableCheckpoint: true,
Performance: &TestPerformanceConfig{
RaftMultiplier: 1,
@@ -94,11 +119,13 @@ func defaultServerConfig() *TestServerConfig {
Ports: &TestPortConfig{
DNS: randomPort(),
HTTP: randomPort(),
- RPC: randomPort(),
+ HTTPS: randomPort(),
SerfLan: randomPort(),
SerfWan: randomPort(),
Server: randomPort(),
+ RPC: randomPort(),
},
+ ReadyTimeout: 10 * time.Second,
}
}
@@ -129,15 +156,6 @@ type TestCheck struct {
TTL string `json:",omitempty"`
}
-// TestingT is an interface wrapper around TestingT
-type TestingT interface {
- Logf(format string, args ...interface{})
- Errorf(format string, args ...interface{})
- Fatalf(format string, args ...interface{})
- Fatal(args ...interface{})
- Skip(args ...interface{})
-}
-
// TestKVResponse is what we use to decode KV data.
type TestKVResponse struct {
Value string
@@ -147,382 +165,229 @@ type TestKVResponse struct {
type TestServer struct {
cmd *exec.Cmd
Config *TestServerConfig
- t TestingT
- HTTPAddr string
- LANAddr string
- WANAddr string
+ HTTPAddr string
+ HTTPSAddr string
+ LANAddr string
+ WANAddr string
+
+ HTTPClient *http.Client
- HttpClient *http.Client
+ tmpdir string
}
// NewTestServer is an easy helper method to create a new Consul
// test server with the most basic configuration.
-func NewTestServer(t TestingT) *TestServer {
- return NewTestServerConfig(t, nil)
+func NewTestServer() (*TestServer, error) {
+ return NewTestServerConfigT(nil, nil)
}
-// NewTestServerConfig creates a new TestServer, and makes a call to
-// an optional callback function to modify the configuration.
-func NewTestServerConfig(t TestingT, cb ServerConfigCallback) *TestServer {
- if path, err := exec.LookPath("consul"); err != nil || path == "" {
- t.Fatal("consul not found on $PATH - download and install " +
- "consul or skip this test")
- }
+func NewTestServerConfig(cb ServerConfigCallback) (*TestServer, error) {
+ return NewTestServerConfigT(nil, cb)
+}
- dataDir, err := ioutil.TempDir("", "consul")
- if err != nil {
- t.Fatalf("err: %s", err)
- }
+// NewTestServerConfigT creates a new TestServer, and makes a call to an optional
+// callback function to modify the configuration. If there is an error
+// configuring or starting the server, the server will NOT be running when the
+// function returns (thus you do not need to stop it).
+func NewTestServerConfigT(t *testing.T, cb ServerConfigCallback) (*TestServer, error) {
+ var server *TestServer
+ retry.Run(t, func(r *retry.R) {
+ var err error
+ server, err = newTestServerConfigT(t, cb)
+ if err != nil {
+ r.Fatalf("failed starting test server: %v", err)
+ }
+ })
+ return server, nil
+}
- configFile, err := ioutil.TempFile(dataDir, "config")
- if err != nil {
- defer os.RemoveAll(dataDir)
- t.Fatalf("err: %s", err)
+// newTestServerConfigT is the internal helper for NewTestServerConfigT.
+func newTestServerConfigT(t *testing.T, cb ServerConfigCallback) (*TestServer, error) {
+ path, err := exec.LookPath("consul")
+ if err != nil || path == "" {
+ return nil, fmt.Errorf("consul not found on $PATH - download and install " +
+ "consul or skip this test")
}
- consulConfig := defaultServerConfig()
- consulConfig.DataDir = dataDir
-
+ tmpdir := TempDir(t, "consul")
+ cfg := defaultServerConfig()
+ cfg.DataDir = filepath.Join(tmpdir, "data")
if cb != nil {
- cb(consulConfig)
+ cb(cfg)
}
- configContent, err := json.Marshal(consulConfig)
+ b, err := json.Marshal(cfg)
if err != nil {
- t.Fatalf("err: %s", err)
+ return nil, errors.Wrap(err, "failed marshaling json")
}
- if _, err := configFile.Write(configContent); err != nil {
- t.Fatalf("err: %s", err)
+ configFile := filepath.Join(tmpdir, "config.json")
+ if err := ioutil.WriteFile(configFile, b, 0644); err != nil {
+ defer os.RemoveAll(tmpdir)
+ return nil, errors.Wrap(err, "failed writing config content")
}
- configFile.Close()
stdout := io.Writer(os.Stdout)
- if consulConfig.Stdout != nil {
- stdout = consulConfig.Stdout
+ if cfg.Stdout != nil {
+ stdout = cfg.Stdout
}
-
stderr := io.Writer(os.Stderr)
- if consulConfig.Stderr != nil {
- stderr = consulConfig.Stderr
+ if cfg.Stderr != nil {
+ stderr = cfg.Stderr
}
// Start the server
- args := []string{"agent", "-config-file", configFile.Name()}
- args = append(args, consulConfig.Args...)
+ args := []string{"agent", "-config-file", configFile}
+ args = append(args, cfg.Args...)
cmd := exec.Command("consul", args...)
cmd.Stdout = stdout
cmd.Stderr = stderr
if err := cmd.Start(); err != nil {
- t.Fatalf("err: %s", err)
+ return nil, errors.Wrap(err, "failed starting command")
}
- var httpAddr string
- var client *http.Client
- if strings.HasPrefix(consulConfig.Addresses.HTTP, "unix://") {
- httpAddr = consulConfig.Addresses.HTTP
- trans := cleanhttp.DefaultTransport()
- trans.Dial = func(_, _ string) (net.Conn, error) {
- return net.Dial("unix", httpAddr[7:])
- }
- client = &http.Client{
- Transport: trans,
+ httpAddr := fmt.Sprintf("127.0.0.1:%d", cfg.Ports.HTTP)
+ client := cleanhttp.DefaultClient()
+ if strings.HasPrefix(cfg.Addresses.HTTP, "unix://") {
+ httpAddr = cfg.Addresses.HTTP
+ tr := cleanhttp.DefaultTransport()
+ tr.DialContext = func(_ context.Context, _, _ string) (net.Conn, error) {
+ return net.Dial("unix", httpAddr[len("unix://"):])
}
- } else {
- httpAddr = fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.HTTP)
- client = cleanhttp.DefaultClient()
+ client = &http.Client{Transport: tr}
}
server := &TestServer{
- Config: consulConfig,
+ Config: cfg,
cmd: cmd,
- t: t,
- HTTPAddr: httpAddr,
- LANAddr: fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.SerfLan),
- WANAddr: fmt.Sprintf("127.0.0.1:%d", consulConfig.Ports.SerfWan),
+ HTTPAddr: httpAddr,
+ HTTPSAddr: fmt.Sprintf("127.0.0.1:%d", cfg.Ports.HTTPS),
+ LANAddr: fmt.Sprintf("127.0.0.1:%d", cfg.Ports.SerfLan),
+ WANAddr: fmt.Sprintf("127.0.0.1:%d", cfg.Ports.SerfWan),
- HttpClient: client,
+ HTTPClient: client,
+
+ tmpdir: tmpdir,
}
// Wait for the server to be ready
- if consulConfig.Bootstrap {
- server.waitForLeader()
+ if cfg.Bootstrap {
+ err = server.waitForLeader()
} else {
- server.waitForAPI()
+ err = server.waitForAPI()
}
-
- return server
+ if err != nil {
+ defer server.Stop()
+ return nil, errors.Wrap(err, "failed waiting for server to start")
+ }
+ return server, nil
}
// Stop stops the test Consul server, and removes the Consul data
// directory once we are done.
-func (s *TestServer) Stop() {
- defer os.RemoveAll(s.Config.DataDir)
+func (s *TestServer) Stop() error {
+ defer os.RemoveAll(s.tmpdir)
+
+ // There was no process
+ if s.cmd == nil {
+ return nil
+ }
- if err := s.cmd.Process.Kill(); err != nil {
- s.t.Errorf("err: %s", err)
+ if s.cmd.Process != nil {
+ if err := s.cmd.Process.Signal(os.Interrupt); err != nil {
+ return errors.Wrap(err, "failed to kill consul server")
+ }
}
// wait for the process to exit to be sure that the data dir can be
// deleted on all platforms.
- s.cmd.Wait()
+ return s.cmd.Wait()
+}
+
+type failer struct {
+ failed bool
}
+func (f *failer) Log(args ...interface{}) { fmt.Println(args...) }
+func (f *failer) FailNow() { f.failed = true }
+
// waitForAPI waits for only the agent HTTP endpoint to start
// responding. This is an indication that the agent has started,
// but will likely return before a leader is elected.
-func (s *TestServer) waitForAPI() {
- WaitForResult(func() (bool, error) {
- resp, err := s.HttpClient.Get(s.url("/v1/agent/self"))
+func (s *TestServer) waitForAPI() error {
+ f := &failer{}
+ retry.Run(f, func(r *retry.R) {
+ resp, err := s.HTTPClient.Get(s.url("/v1/agent/self"))
if err != nil {
- return false, err
+ r.Fatal(err)
}
defer resp.Body.Close()
if err := s.requireOK(resp); err != nil {
- return false, err
+ r.Fatal("failed OK respose", err)
}
- return true, nil
- }, func(err error) {
- defer s.Stop()
- s.t.Fatalf("err: %s", err)
})
+ if f.failed {
+ return errors.New("failed waiting for API")
+ }
+ return nil
}
// waitForLeader waits for the Consul server's HTTP API to become
// available, and then waits for a known leader and an index of
// 1 or more to be observed to confirm leader election is done.
// It then waits to ensure the anti-entropy sync has completed.
-func (s *TestServer) waitForLeader() {
+func (s *TestServer) waitForLeader() error {
+ f := &failer{}
+ timer := &retry.Timer{
+ Timeout: s.Config.ReadyTimeout,
+ Wait: 250 * time.Millisecond,
+ }
var index int64
- WaitForResult(func() (bool, error) {
+ retry.RunWith(timer, f, func(r *retry.R) {
// Query the API and check the status code.
- url := s.url(fmt.Sprintf("/v1/catalog/nodes?index=%d&wait=2s", index))
- resp, err := s.HttpClient.Get(url)
+ url := s.url(fmt.Sprintf("/v1/catalog/nodes?index=%d", index))
+ resp, err := s.HTTPClient.Get(url)
if err != nil {
- return false, err
+ r.Fatal("failed http get", err)
}
defer resp.Body.Close()
if err := s.requireOK(resp); err != nil {
- return false, err
+ r.Fatal("failed OK response", err)
}
// Ensure we have a leader and a node registration.
if leader := resp.Header.Get("X-Consul-KnownLeader"); leader != "true" {
- return false, fmt.Errorf("Consul leader status: %#v", leader)
+ r.Fatalf("Consul leader status: %#v", leader)
}
index, err = strconv.ParseInt(resp.Header.Get("X-Consul-Index"), 10, 64)
if err != nil {
- return false, fmt.Errorf("Consul index was bad: %v", err)
+ r.Fatal("bad consul index", err)
}
if index == 0 {
- return false, fmt.Errorf("Consul index is 0")
+ r.Fatal("consul index is 0")
}
// Watch for the anti-entropy sync to finish.
- var parsed []map[string]interface{}
+ var v []map[string]interface{}
dec := json.NewDecoder(resp.Body)
- if err := dec.Decode(&parsed); err != nil {
- return false, err
+ if err := dec.Decode(&v); err != nil {
+ r.Fatal(err)
}
- if len(parsed) < 1 {
- return false, fmt.Errorf("No nodes")
+ if len(v) < 1 {
+ r.Fatal("No nodes")
}
- taggedAddresses, ok := parsed[0]["TaggedAddresses"].(map[string]interface{})
+ taggedAddresses, ok := v[0]["TaggedAddresses"].(map[string]interface{})
if !ok {
- return false, fmt.Errorf("Missing tagged addresses")
+ r.Fatal("Missing tagged addresses")
}
if _, ok := taggedAddresses["lan"]; !ok {
- return false, fmt.Errorf("No lan tagged addresses")
+ r.Fatal("No lan tagged addresses")
}
- return true, nil
- }, func(err error) {
- defer s.Stop()
- s.t.Fatalf("err: %s", err)
})
-}
-
-// url is a helper function which takes a relative URL and
-// makes it into a proper URL against the local Consul server.
-func (s *TestServer) url(path string) string {
- return fmt.Sprintf("http://127.0.0.1:%d%s", s.Config.Ports.HTTP, path)
-}
-
-// requireOK checks the HTTP response code and ensures it is acceptable.
-func (s *TestServer) requireOK(resp *http.Response) error {
- if resp.StatusCode != 200 {
- return fmt.Errorf("Bad status code: %d", resp.StatusCode)
+ if f.failed {
+ return errors.New("failed waiting for leader")
}
return nil
}
-
-// put performs a new HTTP PUT request.
-func (s *TestServer) put(path string, body io.Reader) *http.Response {
- req, err := http.NewRequest("PUT", s.url(path), body)
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
- resp, err := s.HttpClient.Do(req)
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
- if err := s.requireOK(resp); err != nil {
- defer resp.Body.Close()
- s.t.Fatal(err)
- }
- return resp
-}
-
-// get performs a new HTTP GET request.
-func (s *TestServer) get(path string) *http.Response {
- resp, err := s.HttpClient.Get(s.url(path))
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
- if err := s.requireOK(resp); err != nil {
- defer resp.Body.Close()
- s.t.Fatal(err)
- }
- return resp
-}
-
-// encodePayload returns a new io.Reader wrapping the encoded contents
-// of the payload, suitable for passing directly to a new request.
-func (s *TestServer) encodePayload(payload interface{}) io.Reader {
- var encoded bytes.Buffer
- enc := json.NewEncoder(&encoded)
- if err := enc.Encode(payload); err != nil {
- s.t.Fatalf("err: %s", err)
- }
- return &encoded
-}
-
-// JoinLAN is used to join nodes within the same datacenter.
-func (s *TestServer) JoinLAN(addr string) {
- resp := s.get("/v1/agent/join/" + addr)
- resp.Body.Close()
-}
-
-// JoinWAN is used to join remote datacenters together.
-func (s *TestServer) JoinWAN(addr string) {
- resp := s.get("/v1/agent/join/" + addr + "?wan=1")
- resp.Body.Close()
-}
-
-// SetKV sets an individual key in the K/V store.
-func (s *TestServer) SetKV(key string, val []byte) {
- resp := s.put("/v1/kv/"+key, bytes.NewBuffer(val))
- resp.Body.Close()
-}
-
-// GetKV retrieves a single key and returns its value
-func (s *TestServer) GetKV(key string) []byte {
- resp := s.get("/v1/kv/" + key)
- defer resp.Body.Close()
-
- raw, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
-
- var result []*TestKVResponse
- if err := json.Unmarshal(raw, &result); err != nil {
- s.t.Fatalf("err: %s", err)
- }
- if len(result) < 1 {
- s.t.Fatalf("key does not exist: %s", key)
- }
-
- v, err := base64.StdEncoding.DecodeString(result[0].Value)
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
-
- return v
-}
-
-// PopulateKV fills the Consul KV with data from a generic map.
-func (s *TestServer) PopulateKV(data map[string][]byte) {
- for k, v := range data {
- s.SetKV(k, v)
- }
-}
-
-// ListKV returns a list of keys present in the KV store. This will list all
-// keys under the given prefix recursively and return them as a slice.
-func (s *TestServer) ListKV(prefix string) []string {
- resp := s.get("/v1/kv/" + prefix + "?keys")
- defer resp.Body.Close()
-
- raw, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- s.t.Fatalf("err: %s", err)
- }
-
- var result []string
- if err := json.Unmarshal(raw, &result); err != nil {
- s.t.Fatalf("err: %s", err)
- }
- return result
-}
-
-// AddService adds a new service to the Consul instance. It also
-// automatically adds a health check with the given status, which
-// can be one of "passing", "warning", or "critical".
-func (s *TestServer) AddService(name, status string, tags []string) {
- svc := &TestService{
- Name: name,
- Tags: tags,
- }
- payload := s.encodePayload(svc)
- s.put("/v1/agent/service/register", payload)
-
- chkName := "service:" + name
- chk := &TestCheck{
- Name: chkName,
- ServiceID: name,
- TTL: "10m",
- }
- payload = s.encodePayload(chk)
- s.put("/v1/agent/check/register", payload)
-
- switch status {
- case structs.HealthPassing:
- s.put("/v1/agent/check/pass/"+chkName, nil)
- case structs.HealthWarning:
- s.put("/v1/agent/check/warn/"+chkName, nil)
- case structs.HealthCritical:
- s.put("/v1/agent/check/fail/"+chkName, nil)
- default:
- s.t.Fatalf("Unrecognized status: %s", status)
- }
-}
-
-// AddCheck adds a check to the Consul instance. If the serviceID is
-// left empty (""), then the check will be associated with the node.
-// The check status may be "passing", "warning", or "critical".
-func (s *TestServer) AddCheck(name, serviceID, status string) {
- chk := &TestCheck{
- ID: name,
- Name: name,
- TTL: "10m",
- }
- if serviceID != "" {
- chk.ServiceID = serviceID
- }
-
- payload := s.encodePayload(chk)
- s.put("/v1/agent/check/register", payload)
-
- switch status {
- case structs.HealthPassing:
- s.put("/v1/agent/check/pass/"+name, nil)
- case structs.HealthWarning:
- s.put("/v1/agent/check/warn/"+name, nil)
- case structs.HealthCritical:
- s.put("/v1/agent/check/fail/"+name, nil)
- default:
- s.t.Fatalf("Unrecognized status: %s", status)
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_methods.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_methods.go
new file mode 100644
index 00000000..8f4b067a
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_methods.go
@@ -0,0 +1,256 @@
+package testutil
+
+import (
+ "bytes"
+ "encoding/base64"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "testing"
+
+ "github.com/pkg/errors"
+)
+
+// copied from testutil to break circular dependency
+const (
+ HealthAny = "any"
+ HealthPassing = "passing"
+ HealthWarning = "warning"
+ HealthCritical = "critical"
+ HealthMaint = "maintenance"
+)
+
+// JoinLAN is used to join local datacenters together.
+func (s *TestServer) JoinLAN(t *testing.T, addr string) {
+ resp := s.get(t, "/v1/agent/join/"+addr)
+ defer resp.Body.Close()
+}
+
+// JoinWAN is used to join remote datacenters together.
+func (s *TestServer) JoinWAN(t *testing.T, addr string) {
+ resp := s.get(t, "/v1/agent/join/"+addr+"?wan=1")
+ resp.Body.Close()
+}
+
+// SetKV sets an individual key in the K/V store.
+func (s *TestServer) SetKV(t *testing.T, key string, val []byte) {
+ resp := s.put(t, "/v1/kv/"+key, bytes.NewBuffer(val))
+ resp.Body.Close()
+}
+
+// SetKVString sets an individual key in the K/V store, but accepts a string
+// instead of []byte.
+func (s *TestServer) SetKVString(t *testing.T, key string, val string) {
+ resp := s.put(t, "/v1/kv/"+key, bytes.NewBufferString(val))
+ resp.Body.Close()
+}
+
+// GetKV retrieves a single key and returns its value
+func (s *TestServer) GetKV(t *testing.T, key string) []byte {
+ resp := s.get(t, "/v1/kv/"+key)
+ defer resp.Body.Close()
+
+ raw, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ t.Fatalf("failed to read body: %s", err)
+ }
+
+ var result []*TestKVResponse
+ if err := json.Unmarshal(raw, &result); err != nil {
+ t.Fatalf("failed to unmarshal: %s", err)
+ }
+ if len(result) < 1 {
+ t.Fatalf("key does not exist: %s", key)
+ }
+
+ v, err := base64.StdEncoding.DecodeString(result[0].Value)
+ if err != nil {
+ t.Fatalf("failed to base64 decode: %s", err)
+ }
+
+ return v
+}
+
+// GetKVString retrieves a value from the store, but returns as a string instead
+// of []byte.
+func (s *TestServer) GetKVString(t *testing.T, key string) string {
+ return string(s.GetKV(t, key))
+}
+
+// PopulateKV fills the Consul KV with data from a generic map.
+func (s *TestServer) PopulateKV(t *testing.T, data map[string][]byte) {
+ for k, v := range data {
+ s.SetKV(t, k, v)
+ }
+}
+
+// ListKV returns a list of keys present in the KV store. This will list all
+// keys under the given prefix recursively and return them as a slice.
+func (s *TestServer) ListKV(t *testing.T, prefix string) []string {
+ resp := s.get(t, "/v1/kv/"+prefix+"?keys")
+ defer resp.Body.Close()
+
+ raw, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ t.Fatalf("failed to read body: %s", err)
+ }
+
+ var result []string
+ if err := json.Unmarshal(raw, &result); err != nil {
+ t.Fatalf("failed to unmarshal: %s", err)
+ }
+ return result
+}
+
+// AddService adds a new service to the Consul instance. It also
+// automatically adds a health check with the given status, which
+// can be one of "passing", "warning", or "critical".
+func (s *TestServer) AddService(t *testing.T, name, status string, tags []string) {
+ s.AddAddressableService(t, name, status, "", 0, tags) // an empty address and port 0 mark the service as non-addressable
+}
+
+// AddAddressableService adds a new service to the Consul instance by
+// passing "address" and "port". It is helpful when you need to prepare a fakeService
+// that maybe accessed with in target source code.
+// It also automatically adds a health check with the given status, which
+// can be one of "passing", "warning", or "critical", just like `AddService` does.
+func (s *TestServer) AddAddressableService(t *testing.T, name, status, address string, port int, tags []string) {
+ svc := &TestService{
+ Name: name,
+ Tags: tags,
+ Address: address,
+ Port: port,
+ }
+ payload, err := s.encodePayload(svc)
+ if err != nil {
+ t.Fatal(err)
+ }
+ s.put(t, "/v1/agent/service/register", payload)
+
+ chkName := "service:" + name
+ chk := &TestCheck{
+ Name: chkName,
+ ServiceID: name,
+ TTL: "10m",
+ }
+ payload, err = s.encodePayload(chk)
+ if err != nil {
+ t.Fatal(err)
+ }
+ s.put(t, "/v1/agent/check/register", payload)
+
+ switch status {
+ case HealthPassing:
+ s.put(t, "/v1/agent/check/pass/"+chkName, nil)
+ case HealthWarning:
+ s.put(t, "/v1/agent/check/warn/"+chkName, nil)
+ case HealthCritical:
+ s.put(t, "/v1/agent/check/fail/"+chkName, nil)
+ default:
+ t.Fatalf("Unrecognized status: %s", status)
+ }
+}
+
+// AddCheck adds a check to the Consul instance. If the serviceID is
+// left empty (""), then the check will be associated with the node.
+// The check status may be "passing", "warning", or "critical".
+func (s *TestServer) AddCheck(t *testing.T, name, serviceID, status string) {
+ chk := &TestCheck{
+ ID: name,
+ Name: name,
+ TTL: "10m",
+ }
+ if serviceID != "" {
+ chk.ServiceID = serviceID
+ }
+
+ payload, err := s.encodePayload(chk)
+ if err != nil {
+ t.Fatal(err)
+ }
+ s.put(t, "/v1/agent/check/register", payload)
+
+ switch status {
+ case HealthPassing:
+ s.put(t, "/v1/agent/check/pass/"+name, nil)
+ case HealthWarning:
+ s.put(t, "/v1/agent/check/warn/"+name, nil)
+ case HealthCritical:
+ s.put(t, "/v1/agent/check/fail/"+name, nil)
+ default:
+ t.Fatalf("Unrecognized status: %s", status)
+ }
+}
+
+// put performs a new HTTP PUT request.
+func (s *TestServer) put(t *testing.T, path string, body io.Reader) *http.Response {
+ req, err := http.NewRequest("PUT", s.url(path), body)
+ if err != nil {
+ t.Fatalf("failed to create PUT request: %s", err)
+ }
+ resp, err := s.HTTPClient.Do(req)
+ if err != nil {
+ t.Fatalf("failed to make PUT request: %s", err)
+ }
+ if err := s.requireOK(resp); err != nil {
+ defer resp.Body.Close()
+ t.Fatalf("not OK PUT: %s", err)
+ }
+ return resp
+}
+
+// get performs a new HTTP GET request.
+func (s *TestServer) get(t *testing.T, path string) *http.Response {
+ resp, err := s.HTTPClient.Get(s.url(path))
+ if err != nil {
+ t.Fatalf("failed to create GET request: %s", err)
+ }
+ if err := s.requireOK(resp); err != nil {
+ defer resp.Body.Close()
+ t.Fatalf("not OK GET: %s", err)
+ }
+ return resp
+}
+
+// encodePayload returns a new io.Reader wrapping the encoded contents
+// of the payload, suitable for passing directly to a new request.
+func (s *TestServer) encodePayload(payload interface{}) (io.Reader, error) {
+ var encoded bytes.Buffer
+ enc := json.NewEncoder(&encoded)
+ if err := enc.Encode(payload); err != nil {
+ return nil, errors.Wrap(err, "failed to encode payload")
+ }
+ return &encoded, nil
+}
+
+// url is a helper function which takes a relative URL and
+// makes it into a proper URL against the local Consul server.
+func (s *TestServer) url(path string) string {
+ if s == nil {
+ log.Fatal("s is nil")
+ }
+ if s.Config == nil {
+ log.Fatal("s.Config is nil")
+ }
+ if s.Config.Ports == nil {
+ log.Fatal("s.Config.Ports is nil")
+ }
+ if s.Config.Ports.HTTP == 0 {
+ log.Fatal("s.Config.Ports.HTTP is 0")
+ }
+ if path == "" {
+ log.Fatal("path is empty")
+ }
+ return fmt.Sprintf("http://127.0.0.1:%d%s", s.Config.Ports.HTTP, path)
+}
+
+// requireOK checks the HTTP response code and ensures it is acceptable.
+func (s *TestServer) requireOK(resp *http.Response) error {
+ if resp.StatusCode != 200 {
+ return fmt.Errorf("Bad status code: %d", resp.StatusCode)
+ }
+ return nil
+}
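A sketch of the string-based KV round trip the new `SetKVString`/`GetKVString` helpers enable (the key and value below are arbitrary examples):

```go
package testutil

import "testing"

func TestKVRoundTrip(t *testing.T) {
	srv, err := NewTestServer()
	if err != nil {
		t.Fatal(err)
	}
	defer srv.Stop()

	// SetKVString/GetKVString avoid the []byte conversions that
	// SetKV/GetKV otherwise require.
	srv.SetKVString(t, "config/region", "us-east-1")
	if got := srv.GetKVString(t, "config/region"); got != "us-east-1" {
		t.Fatalf("unexpected value: %q", got)
	}
}
```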
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_wrapper.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_wrapper.go
new file mode 100644
index 00000000..17615da8
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/server_wrapper.go
@@ -0,0 +1,65 @@
+package testutil
+
+import "testing"
+
+type WrappedServer struct {
+ s *TestServer
+ t *testing.T
+}
+
+// Wrap wraps the test server with a `*testing.T` for convenience.
+//
+// For example, the following code snippets are equivalent.
+//
+// server.JoinLAN(t, "1.2.3.4")
+// server.Wrap(t).JoinLAN("1.2.3.4")
+//
+// This is useful when you are calling multiple functions and want to store the
+// wrapped value in another variable to avoid passing "t" on every call.
+func (s *TestServer) Wrap(t *testing.T) *WrappedServer {
+ return &WrappedServer{s, t}
+}
+
+func (w *WrappedServer) JoinLAN(addr string) {
+ w.s.JoinLAN(w.t, addr)
+}
+
+func (w *WrappedServer) JoinWAN(addr string) {
+ w.s.JoinWAN(w.t, addr)
+}
+
+func (w *WrappedServer) SetKV(key string, val []byte) {
+ w.s.SetKV(w.t, key, val)
+}
+
+func (w *WrappedServer) SetKVString(key string, val string) {
+ w.s.SetKVString(w.t, key, val)
+}
+
+func (w *WrappedServer) GetKV(key string) []byte {
+ return w.s.GetKV(w.t, key)
+}
+
+func (w *WrappedServer) GetKVString(key string) string {
+ return w.s.GetKVString(w.t, key)
+}
+
+func (w *WrappedServer) PopulateKV(data map[string][]byte) {
+ w.s.PopulateKV(w.t, data)
+}
+
+func (w *WrappedServer) ListKV(prefix string) []string {
+ return w.s.ListKV(w.t, prefix)
+}
+
+func (w *WrappedServer) AddService(name, status string, tags []string) {
+ w.s.AddService(w.t, name, status, tags)
+}
+
+func (w *WrappedServer) AddAddressableService(name, status, address string, port int, tags []string) {
+ w.s.AddAddressableService(w.t, name, status, address, port, tags)
+}
+
+func (w *WrappedServer) AddCheck(name, serviceID, status string) {
+ w.s.AddCheck(w.t, name, serviceID, status)
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/wait.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/wait.go
deleted file mode 100644
index bd240796..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/testutil/wait.go
+++ /dev/null
@@ -1,62 +0,0 @@
-package testutil
-
-import (
- "fmt"
- "testing"
- "time"
-
- "github.com/hashicorp/consul/consul/structs"
-)
-
-type testFn func() (bool, error)
-type errorFn func(error)
-
-const (
- baseWait = 1 * time.Millisecond
- maxWait = 100 * time.Millisecond
-)
-
-func WaitForResult(try testFn, fail errorFn) {
- var err error
- wait := baseWait
- for retries := 100; retries > 0; retries-- {
- var success bool
- success, err = try()
- if success {
- time.Sleep(25 * time.Millisecond)
- return
- }
-
- time.Sleep(wait)
- wait *= 2
- if wait > maxWait {
- wait = maxWait
- }
- }
- fail(err)
-}
-
-type rpcFn func(string, interface{}, interface{}) error
-
-func WaitForLeader(t *testing.T, rpc rpcFn, dc string) structs.IndexedNodes {
- var out structs.IndexedNodes
- WaitForResult(func() (bool, error) {
- // Ensure we have a leader and a node registration.
- args := &structs.DCSpecificRequest{
- Datacenter: dc,
- }
- if err := rpc("Catalog.ListNodes", args, &out); err != nil {
- return false, fmt.Errorf("Catalog.ListNodes failed: %v", err)
- }
- if !out.QueryMeta.KnownLeader {
- return false, fmt.Errorf("No leader")
- }
- if out.Index == 0 {
- return false, fmt.Errorf("Consul index is 0")
- }
- return true, nil
- }, func(err error) {
- t.Fatalf("failed to find leader: %v", err)
- })
- return out
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/README.md
deleted file mode 100644
index da662f4a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Consul `types` Package
-
-The Go language has a strong type system built into the language. The
-`types` package corrals named types into a single package that is terminal in
-`go`'s import graph. The `types` package should not have any downstream
-dependencies. Each subsystem that defines its own set of types exists in its
-own file, but all types are defined in the same package.
-
-# Why
-
-> Everything should be made as simple as possible, but not simpler.
-
-`string` is a useful container and underlying type for identifiers, however
-the `string` type is effectively opaque to the compiler in terms of how a
-given string is intended to be used. For instance, there is nothing
-preventing the following from happening:
-
-```go
-// `map` of Widgets, looked up by ID
-var widgetLookup map[string]*Widget
-// ...
-var widgetID string = "widgetID"
-w, found := widgetLookup[widgetID]
-
-// Bad!
-var widgetName string = "name of widget"
-w, found := widgetLookup[widgetName]
-```
-
-but this class of problem is entirely preventable:
-
-```go
-type WidgetID string
-var widgetLookup map[WidgetID]*Widget
-var widgetName
-```
-
-TL;DR: intentions and idioms aren't statically checked by compilers. The
-`types` package uses Go's strong type system to prevent this class of bug.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/checks.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/checks.go
deleted file mode 100644
index 25a136b4..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/checks.go
+++ /dev/null
@@ -1,5 +0,0 @@
-package types
-
-// CheckID is a strongly typed string used to uniquely represent a Consul
-// Check on an Agent (a CheckID is not globally unique).
-type CheckID string
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/node_id.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/node_id.go
deleted file mode 100644
index c0588ed4..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/consul/types/node_id.go
+++ /dev/null
@@ -1,4 +0,0 @@
-package types
-
-// NodeID is a unique identifier for a node across space and time.
-type NodeID string
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/LICENSE
new file mode 100644
index 00000000..abaf1e45
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2017 HashiCorp
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/README.md
new file mode 100644
index 00000000..614342b2
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/README.md
@@ -0,0 +1,123 @@
+# go-hclog
+
+[![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs]
+
+[godocs]: https://godoc.org/github.com/hashicorp/go-hclog
+
+`go-hclog` is a package for Go that provides a simple key/value logging
+interface for use in development and production environments.
+
+Unlike the standard library `log` package, it provides logging levels so that
+output can be filtered down to the desired amount.
+
+It does not provide `Printf` style logging, only key/value logging that is
+exposed as arguments to the logging functions for simplicity.
+
+It provides a human readable output mode for use in development as well as
+JSON output mode for production.
+
+## Stability Note
+
+While this library is fully open source and HashiCorp will be maintaining it
+(since we are and will be making extensive use of it), the API and output
+format is subject to minor changes as we fully bake and vet it in our projects.
+This notice will be removed once it's fully integrated into our major projects
+and no further changes are anticipated.
+
+## Installation and Docs
+
+Install using `go get github.com/hashicorp/go-hclog`.
+
+Full documentation is available at
+http://godoc.org/github.com/hashicorp/go-hclog
+
+## Usage
+
+### Use the global logger
+
+```go
+hclog.Default().Info("hello world")
+```
+
+```text
+2017-07-05T16:15:55.167-0700 [INFO ] hello world
+```
+
+(Note timestamps are removed in future examples for brevity.)
+
+### Create a new logger
+
+```go
+appLogger := hclog.New(&hclog.LoggerOptions{
+ Name: "my-app",
+ Level: hclog.LevelFromString("DEBUG"),
+})
+```
+
+### Emit an Info level message with 2 key/value pairs
+
+```go
+input := "5.5"
+_, err := strconv.ParseInt(input, 10, 32)
+if err != nil {
+ appLogger.Info("Invalid input for ParseInt", "input", input, "error", err)
+}
+```
+
+```text
+... [INFO ] my-app: Invalid input for ParseInt: input=5.5 error="strconv.ParseInt: parsing "5.5": invalid syntax"
+```
+
+### Create a new Logger for a major subsystem
+
+```go
+subsystemLogger := appLogger.Named("transport")
+subsystemLogger.Info("we are transporting something")
+```
+
+```text
+... [INFO ] my-app.transport: we are transporting something
+```
+
+Notice that logs emitted by `subsystemLogger` contain `my-app.transport`,
+reflecting both the application and subsystem names.
+
+### Create a new Logger with fixed key/value pairs
+
+Using `With()` will include a specific key-value pair in all messages emitted
+by that logger.
+
+```go
+requestID := "5fb446b6-6eba-821d-df1b-cd7501b6a363"
+requestLogger := subsystemLogger.With("request", requestID)
+requestLogger.Info("we are transporting a request")
+```
+
+```text
+... [INFO ] my-app.transport: we are transporting a request: request=5fb446b6-6eba-821d-df1b-cd7501b6a363
+```
+
+This allows sub Loggers to be context specific without having to thread that
+into all the callers.
+
+### Use this with code that uses the standard library logger
+
+If you want to use the standard library's `log.Logger` interface you can wrap
+`hclog.Logger` by calling the `StandardLogger()` method. This allows you to use
+it with the familiar `Println()`, `Printf()`, etc. For example:
+
+```go
+stdLogger := appLogger.StandardLogger(&hclog.StandardLoggerOptions{
+ InferLevels: true,
+})
+// Printf() is provided by stdlib log.Logger interface, not hclog.Logger
+stdLogger.Printf("[DEBUG] %+v", stdLogger)
+```
+
+```text
+... [DEBUG] my-app: &{mu:{state:0 sema:0} prefix: flag:0 out:0xc42000a0a0 buf:[]}
+```
+
+Notice that if `appLogger` is initialized with the `INFO` log level _and_ you
+specify `InferLevels: true`, you will not see any output here. You must change
+`appLogger` to `DEBUG` to see output. See the docs for more information.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/global.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/global.go
new file mode 100644
index 00000000..55ce4396
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/global.go
@@ -0,0 +1,34 @@
+package hclog
+
+import (
+ "sync"
+)
+
+var (
+ protect sync.Once
+ def Logger
+
+ // The options used to create the Default logger. These are
+ // read only when the Default logger is created, so set them
+ // as soon as the process starts.
+ DefaultOptions = &LoggerOptions{
+ Level: DefaultLevel,
+ Output: DefaultOutput,
+ }
+)
+
+// Return a logger that is held globally. This can be a good starting
+// place, and then you can use .With() and .Named() to create sub-loggers
+// to be used in more specific contexts.
+func Default() Logger {
+ protect.Do(func() {
+ def = New(DefaultOptions)
+ })
+
+ return def
+}
+
+// A short alias for Default()
+func L() Logger {
+ return Default()
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/int.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/int.go
new file mode 100644
index 00000000..9f90c287
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/int.go
@@ -0,0 +1,385 @@
+package hclog
+
+import (
+ "bufio"
+ "encoding/json"
+ "fmt"
+ "log"
+ "os"
+ "runtime"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+)
+
+var (
+ _levelToBracket = map[Level]string{
+ Debug: "[DEBUG]",
+ Trace: "[TRACE]",
+ Info: "[INFO ]",
+ Warn: "[WARN ]",
+ Error: "[ERROR]",
+ }
+)
+
+// Given the options (nil for defaults), create a new Logger
+func New(opts *LoggerOptions) Logger {
+ if opts == nil {
+ opts = &LoggerOptions{}
+ }
+
+ output := opts.Output
+ if output == nil {
+ output = os.Stderr
+ }
+
+ level := opts.Level
+ if level == NoLevel {
+ level = DefaultLevel
+ }
+
+ return &intLogger{
+ m: new(sync.Mutex),
+ json: opts.JSONFormat,
+ caller: opts.IncludeLocation,
+ name: opts.Name,
+ w: bufio.NewWriter(output),
+ level: level,
+ }
+}
+
+// The internal logger implementation. Internal in that it is defined entirely
+// by this package.
+type intLogger struct {
+ json bool
+ caller bool
+ name string
+
+ // this is a pointer so that it's shared by any derived loggers, since
+ // those derived loggers share the bufio.Writer as well.
+ m *sync.Mutex
+ w *bufio.Writer
+ level Level
+
+ implied []interface{}
+}
+
+// Make sure that intLogger is a Logger
+var _ Logger = &intLogger{}
+
+// The time format to use for logging. This is a version of RFC3339 that
+// contains millisecond precision
+const TimeFormat = "2006-01-02T15:04:05.000Z0700"
+
+// Log a message and a set of key/value pairs if the given level is at
+// or more severe than the threshold configured in the Logger.
+func (z *intLogger) Log(level Level, msg string, args ...interface{}) {
+ if level < z.level {
+ return
+ }
+
+ t := time.Now()
+
+ z.m.Lock()
+ defer z.m.Unlock()
+
+ if z.json {
+ z.logJson(t, level, msg, args...)
+ } else {
+ z.log(t, level, msg, args...)
+ }
+
+ z.w.Flush()
+}
+
+// Cleanup a path by returning the last 2 segments of the path only.
+func trimCallerPath(path string) string {
+ // lovely borrowed from zap
+ // nb. To make sure we trim the path correctly on Windows too, we
+ // counter-intuitively need to use '/' and *not* os.PathSeparator here,
+ // because the path given originates from Go stdlib, specifically
+ // runtime.Caller() which (as of Mar/17) returns forward slashes even on
+ // Windows.
+ //
+ // See https://github.com/golang/go/issues/3335
+ // and https://github.com/golang/go/issues/18151
+ //
+ // for discussion on the issue on Go side.
+ //
+
+ // Find the last separator.
+ //
+ idx := strings.LastIndexByte(path, '/')
+ if idx == -1 {
+ return path
+ }
+
+ // Find the penultimate separator.
+ idx = strings.LastIndexByte(path[:idx], '/')
+ if idx == -1 {
+ return path
+ }
+
+ return path[idx+1:]
+}
+
+// Non-JSON logging format function
+func (z *intLogger) log(t time.Time, level Level, msg string, args ...interface{}) {
+ z.w.WriteString(t.Format(TimeFormat))
+ z.w.WriteByte(' ')
+
+ s, ok := _levelToBracket[level]
+ if ok {
+ z.w.WriteString(s)
+ } else {
+ z.w.WriteString("[UNKN ]")
+ }
+
+ if z.caller {
+ if _, file, line, ok := runtime.Caller(3); ok {
+ z.w.WriteByte(' ')
+ z.w.WriteString(trimCallerPath(file))
+ z.w.WriteByte(':')
+ z.w.WriteString(strconv.Itoa(line))
+ z.w.WriteByte(':')
+ }
+ }
+
+ z.w.WriteByte(' ')
+
+ if z.name != "" {
+ z.w.WriteString(z.name)
+ z.w.WriteString(": ")
+ }
+
+ z.w.WriteString(msg)
+
+ args = append(z.implied, args...)
+
+ var stacktrace CapturedStacktrace
+
+ if len(args) > 0 {
+ if len(args)%2 != 0 {
+ cs, ok := args[len(args)-1].(CapturedStacktrace)
+ if ok {
+ args = args[:len(args)-1]
+ stacktrace = cs
+ } else {
+ args = append(args, "<unknown>")
+ }
+ }
+
+ z.w.WriteByte(':')
+
+ FOR:
+ for i := 0; i < len(args); i = i + 2 {
+ var val string
+
+ switch st := args[i+1].(type) {
+ case string:
+ val = st
+ case int:
+ val = strconv.FormatInt(int64(st), 10)
+ case int64:
+ val = strconv.FormatInt(int64(st), 10)
+ case int32:
+ val = strconv.FormatInt(int64(st), 10)
+ case int16:
+ val = strconv.FormatInt(int64(st), 10)
+ case int8:
+ val = strconv.FormatInt(int64(st), 10)
+ case uint:
+ val = strconv.FormatUint(uint64(st), 10)
+ case uint64:
+ val = strconv.FormatUint(uint64(st), 10)
+ case uint32:
+ val = strconv.FormatUint(uint64(st), 10)
+ case uint16:
+ val = strconv.FormatUint(uint64(st), 10)
+ case uint8:
+ val = strconv.FormatUint(uint64(st), 10)
+ case CapturedStacktrace:
+ stacktrace = st
+ continue FOR
+ default:
+ val = fmt.Sprintf("%v", st)
+ }
+
+ z.w.WriteByte(' ')
+ z.w.WriteString(args[i].(string))
+ z.w.WriteByte('=')
+
+ if strings.ContainsAny(val, " \t\n\r") {
+ z.w.WriteByte('"')
+ z.w.WriteString(val)
+ z.w.WriteByte('"')
+ } else {
+ z.w.WriteString(val)
+ }
+ }
+ }
+
+ z.w.WriteString("\n")
+
+ if stacktrace != "" {
+ z.w.WriteString(string(stacktrace))
+ }
+}
+
+// JSON logging function
+func (z *intLogger) logJson(t time.Time, level Level, msg string, args ...interface{}) {
+ vals := map[string]interface{}{
+ "@message": msg,
+ "@timestamp": t.Format("2006-01-02T15:04:05.000000Z07:00"),
+ }
+
+ var levelStr string
+ switch level {
+ case Error:
+ levelStr = "error"
+ case Warn:
+ levelStr = "warn"
+ case Info:
+ levelStr = "info"
+ case Debug:
+ levelStr = "debug"
+ case Trace:
+ levelStr = "trace"
+ default:
+ levelStr = "all"
+ }
+
+ vals["@level"] = levelStr
+
+ if z.name != "" {
+ vals["@module"] = z.name
+ }
+
+ if z.caller {
+ if _, file, line, ok := runtime.Caller(3); ok {
+ vals["@caller"] = fmt.Sprintf("%s:%d", file, line)
+ }
+ }
+
+ if len(args) > 0 {
+ if len(args)%2 != 0 {
+ cs, ok := args[len(args)-1].(CapturedStacktrace)
+ if ok {
+ args = args[:len(args)-1]
+ vals["stacktrace"] = cs
+ } else {
+ args = append(args, "<unknown>")
+ }
+ }
+
+ for i := 0; i < len(args); i = i + 2 {
+ if _, ok := args[i].(string); !ok {
+ // As this is the logging function, there is not much we can do
+ // here without injecting into the logs...
+ continue
+ }
+ vals[args[i].(string)] = args[i+1]
+ }
+ }
+
+ err := json.NewEncoder(z.w).Encode(vals)
+ if err != nil {
+ panic(err)
+ }
+}
+
+// Emit the message and args at DEBUG level
+func (z *intLogger) Debug(msg string, args ...interface{}) {
+ z.Log(Debug, msg, args...)
+}
+
+// Emit the message and args at TRACE level
+func (z *intLogger) Trace(msg string, args ...interface{}) {
+ z.Log(Trace, msg, args...)
+}
+
+// Emit the message and args at INFO level
+func (z *intLogger) Info(msg string, args ...interface{}) {
+ z.Log(Info, msg, args...)
+}
+
+// Emit the message and args at WARN level
+func (z *intLogger) Warn(msg string, args ...interface{}) {
+ z.Log(Warn, msg, args...)
+}
+
+// Emit the message and args at ERROR level
+func (z *intLogger) Error(msg string, args ...interface{}) {
+ z.Log(Error, msg, args...)
+}
+
+// Indicate that the logger would emit TRACE level logs
+func (z *intLogger) IsTrace() bool {
+ return z.level == Trace
+}
+
+// Indicate that the logger would emit DEBUG level logs
+func (z *intLogger) IsDebug() bool {
+ return z.level <= Debug
+}
+
+// Indicate that the logger would emit INFO level logs
+func (z *intLogger) IsInfo() bool {
+ return z.level <= Info
+}
+
+// Indicate that the logger would emit WARN level logs
+func (z *intLogger) IsWarn() bool {
+ return z.level <= Warn
+}
+
+// Indicate that the logger would emit ERROR level logs
+func (z *intLogger) IsError() bool {
+ return z.level <= Error
+}
+
+// Return a sub-Logger for which every emitted log message will contain
+// the given key/value pairs. This is used to create a context specific
+// Logger.
+func (z *intLogger) With(args ...interface{}) Logger {
+ var nz intLogger = *z
+
+ nz.implied = append(nz.implied, args...)
+
+ return &nz
+}
+
+// Create a new sub-Logger with a name descending from the current name.
+// This is used to create a subsystem specific Logger.
+func (z *intLogger) Named(name string) Logger {
+ var nz intLogger = *z
+
+ if nz.name != "" {
+ nz.name = nz.name + "." + name
+ }
+
+ return &nz
+}
+
+// Create a new sub-Logger with an explicit name. This ignores the current
+// name. This is used to create a standalone logger that doesn't fall
+// within the normal hierarchy.
+func (z *intLogger) ResetNamed(name string) Logger {
+ var nz intLogger = *z
+
+ nz.name = name
+
+ return &nz
+}
+
+// Create a *log.Logger that will send its data through this Logger. This
+// allows packages that expect to be using the standard library log to actually
+// use this logger.
+func (z *intLogger) StandardLogger(opts *StandardLoggerOptions) *log.Logger {
+ if opts == nil {
+ opts = &StandardLoggerOptions{}
+ }
+
+ return log.New(&stdlogAdapter{z, opts.InferLevels}, "", 0)
+}
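The odd-length argument handling above exists so a `CapturedStacktrace` can be passed as a trailing value; a minimal sketch of that usage (the logger name and message are arbitrary):

```go
package main

import "github.com/hashicorp/go-hclog"

func main() {
	logger := hclog.New(&hclog.LoggerOptions{Name: "my-app"})

	// A CapturedStacktrace passed as the final argument is detected
	// by the logger and appended after the log line.
	logger.Error("unexpected state", hclog.Stacktrace())
}
```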
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/log.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/log.go
new file mode 100644
index 00000000..6bb16ba7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/log.go
@@ -0,0 +1,138 @@
+package hclog
+
+import (
+ "io"
+ "log"
+ "os"
+ "strings"
+)
+
+var (
+ DefaultOutput = os.Stderr
+ DefaultLevel = Info
+)
+
+type Level int
+
+const (
+ // This is a special level used to indicate that no level has been
+ // set and allow for a default to be used.
+ NoLevel Level = 0
+
+ // The most verbose level. Intended to be used for the tracing of actions
+ // in code, such as function enters/exits, etc.
+ Trace Level = 1
+
+ // For programmer low-level analysis.
+ Debug Level = 2
+
+ // For information about steady state operations.
+ Info Level = 3
+
+ // For information about rare but handled events.
+ Warn Level = 4
+
+ // For information about unrecoverable events.
+ Error Level = 5
+)
+
+// LevelFromString returns a Level type for the named log level, or "NoLevel" if
+// the level string is invalid. This facilitates setting the log level via
+// config or environment variable by name in a predictable way.
+func LevelFromString(levelStr string) Level {
+ // We don't care about case. Accept "INFO" or "info"
+ levelStr = strings.ToLower(strings.TrimSpace(levelStr))
+ switch levelStr {
+ case "trace":
+ return Trace
+ case "debug":
+ return Debug
+ case "info":
+ return Info
+ case "warn":
+ return Warn
+ case "error":
+ return Error
+ default:
+ return NoLevel
+ }
+}
+
+// The main Logger interface. All code should program against this interface only.
+type Logger interface {
+ // Args are alternating key, val pairs
+ // keys must be strings
+ // vals can be any type, but display is implementation specific
+ // Emit a message and key/value pairs at the TRACE level
+ Trace(msg string, args ...interface{})
+
+ // Emit a message and key/value pairs at the DEBUG level
+ Debug(msg string, args ...interface{})
+
+ // Emit a message and key/value pairs at the INFO level
+ Info(msg string, args ...interface{})
+
+ // Emit a message and key/value pairs at the WARN level
+ Warn(msg string, args ...interface{})
+
+ // Emit a message and key/value pairs at the ERROR level
+ Error(msg string, args ...interface{})
+
+ // Indicate if TRACE logs would be emitted. This and the other Is* guards
+ // are used to elide expensive logging code based on the current level.
+ IsTrace() bool
+
+ // Indicate if DEBUG logs would be emitted.
+ IsDebug() bool
+
+ // Indicate if INFO logs would be emitted.
+ IsInfo() bool
+
+ // Indicate if WARN logs would be emitted.
+ IsWarn() bool
+
+ // Indicate if ERROR logs would be emitted.
+ IsError() bool
+
+ // Creates a sublogger that will always have the given key/value pairs
+ With(args ...interface{}) Logger
+
+ // Create a logger that will prepend the name string on the front of all messages.
+ // If the logger already has a name, the new value will be appended to the current
+ // name. That way, a major subsystem can use this to decorate all its own logs
+ // without losing context.
+ Named(name string) Logger
+
+ // Create a logger that will prepend the name string on the front of all messages.
+ // This sets the name of the logger to the value directly, unlike Named which
+ // honors the current name as well.
+ ResetNamed(name string) Logger
+
+	// Return a stdlib *log.Logger that forwards messages to this Logger
+ StandardLogger(opts *StandardLoggerOptions) *log.Logger
+}
+
+type StandardLoggerOptions struct {
+	// Indicate that some minimal parsing should be done on strings to try
+	// to detect their level and re-emit them at that level.
+	// This supports prefixes like [ERROR], [ERR], [TRACE], [WARN], [INFO],
+	// and [DEBUG]; the prefix is stripped before the message is re-emitted.
+ InferLevels bool
+}
+
+type LoggerOptions struct {
+ // Name of the subsystem to prefix logs with
+ Name string
+
+	// The threshold for the logger. Anything less severe is suppressed
+ Level Level
+
+	// Where to write the logs to. Defaults to os.Stderr (DefaultOutput) if nil
+ Output io.Writer
+
+ // Control if the output should be in JSON.
+ JSONFormat bool
+
+	// Include file and line information in each log line
+ IncludeLocation bool
+}
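
Taken together, this file defines the full surface of the vendored logger: leveled emit methods, Is* guards for eliding expensive argument construction, and Named/With for contextual sub-loggers. A minimal usage sketch, assuming the package's New constructor, which lives in a file outside this hunk:

    package main

    import (
        "os"

        hclog "github.com/hashicorp/go-hclog"
    )

    func main() {
        logger := hclog.New(&hclog.LoggerOptions{ // New is assumed; not part of this hunk
            Name:   "terraform",
            Level:  hclog.LevelFromString(os.Getenv("TF_LOG")), // NoLevel on unset/invalid input
            Output: os.Stderr,
        })

        // Guard expensive argument construction behind the Is* checks.
        if logger.IsDebug() {
            logger.Debug("starting", "pid", os.Getpid())
        }

        // Sub-loggers carry a name prefix and fixed key/value pairs.
        logger.Named("backend").With("backend", "s3").Info("initialized")
    }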
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stacktrace.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stacktrace.go
new file mode 100644
index 00000000..8af1a3be
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stacktrace.go
@@ -0,0 +1,108 @@
+// Copyright (c) 2016 Uber Technologies, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+
+package hclog
+
+import (
+ "bytes"
+ "runtime"
+ "strconv"
+ "strings"
+ "sync"
+)
+
+var (
+ _stacktraceIgnorePrefixes = []string{
+ "runtime.goexit",
+ "runtime.main",
+ }
+ _stacktracePool = sync.Pool{
+ New: func() interface{} {
+ return newProgramCounters(64)
+ },
+ }
+)
+
+// A stacktrace gathered by a previous call to Stacktrace. If passed
+// to a logging function, the stacktrace will be appended to the entry.
+type CapturedStacktrace string
+
+// Gather a stacktrace of the current goroutine and return it to be passed
+// to a logging function.
+func Stacktrace() CapturedStacktrace {
+ return CapturedStacktrace(takeStacktrace())
+}
+
+func takeStacktrace() string {
+ programCounters := _stacktracePool.Get().(*programCounters)
+ defer _stacktracePool.Put(programCounters)
+
+ var buffer bytes.Buffer
+
+ for {
+		// Skip the call to runtime.Callers and takeStacktrace so that the
+ // program counters start at the caller of takeStacktrace.
+ n := runtime.Callers(2, programCounters.pcs)
+ if n < cap(programCounters.pcs) {
+ programCounters.pcs = programCounters.pcs[:n]
+ break
+ }
+ // Don't put the too-short counter slice back into the pool; this lets
+ // the pool adjust if we consistently take deep stacktraces.
+ programCounters = newProgramCounters(len(programCounters.pcs) * 2)
+ }
+
+ i := 0
+ frames := runtime.CallersFrames(programCounters.pcs)
+ for frame, more := frames.Next(); more; frame, more = frames.Next() {
+ if shouldIgnoreStacktraceFunction(frame.Function) {
+ continue
+ }
+ if i != 0 {
+ buffer.WriteByte('\n')
+ }
+ i++
+ buffer.WriteString(frame.Function)
+ buffer.WriteByte('\n')
+ buffer.WriteByte('\t')
+ buffer.WriteString(frame.File)
+ buffer.WriteByte(':')
+ buffer.WriteString(strconv.Itoa(int(frame.Line)))
+ }
+
+ return buffer.String()
+}
+
+func shouldIgnoreStacktraceFunction(function string) bool {
+ for _, prefix := range _stacktraceIgnorePrefixes {
+ if strings.HasPrefix(function, prefix) {
+ return true
+ }
+ }
+ return false
+}
+
+type programCounters struct {
+ pcs []uintptr
+}
+
+func newProgramCounters(size int) *programCounters {
+ return &programCounters{make([]uintptr, size)}
+}
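
The growth loop above doubles the pooled program-counter buffer until runtime.Callers reports fewer frames than the slice can hold, so deep stacks are captured without re-allocating on the steady-state path. A short sketch of consuming the result, again assuming the New constructor from elsewhere in the package; per the CapturedStacktrace comment above, passing the value to a logging function appends the trace:

    package main

    import (
        "fmt"

        hclog "github.com/hashicorp/go-hclog"
    )

    func main() {
        // Stacktrace captures the current goroutine's stack as a
        // CapturedStacktrace (one "function\n\tfile:line" pair per frame).
        st := hclog.Stacktrace()
        fmt.Println(string(st))

        // Passing it as an arg value attaches it to the log entry
        // (the special handling lives in the emit path, outside this hunk).
        logger := hclog.New(&hclog.LoggerOptions{Name: "demo"})
        logger.Error("unexpected state", "stacktrace", st)
    }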
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stdlog.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stdlog.go
new file mode 100644
index 00000000..2bb927fc
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-hclog/stdlog.go
@@ -0,0 +1,62 @@
+package hclog
+
+import (
+ "bytes"
+ "strings"
+)
+
+// Provides an io.Writer to shim the data out of *log.Logger
+// and back into our Logger. This is basically the only way to
+// build upon *log.Logger.
+type stdlogAdapter struct {
+ hl Logger
+ inferLevels bool
+}
+
+// Take the data, infer the level if configured, and send it through
+// the underlying Logger.
+func (s *stdlogAdapter) Write(data []byte) (int, error) {
+ str := string(bytes.TrimRight(data, " \t\n"))
+
+ if s.inferLevels {
+ level, str := s.pickLevel(str)
+ switch level {
+ case Trace:
+ s.hl.Trace(str)
+ case Debug:
+ s.hl.Debug(str)
+ case Info:
+ s.hl.Info(str)
+ case Warn:
+ s.hl.Warn(str)
+ case Error:
+ s.hl.Error(str)
+ default:
+ s.hl.Info(str)
+ }
+ } else {
+ s.hl.Info(str)
+ }
+
+ return len(data), nil
+}
+
+// Detect, based on conventions, what log level this is
+func (s *stdlogAdapter) pickLevel(str string) (Level, string) {
+ switch {
+ case strings.HasPrefix(str, "[DEBUG]"):
+ return Debug, strings.TrimSpace(str[7:])
+ case strings.HasPrefix(str, "[TRACE]"):
+ return Trace, strings.TrimSpace(str[7:])
+ case strings.HasPrefix(str, "[INFO]"):
+ return Info, strings.TrimSpace(str[6:])
+ case strings.HasPrefix(str, "[WARN]"):
+ return Warn, strings.TrimSpace(str[7:])
+ case strings.HasPrefix(str, "[ERROR]"):
+ return Error, strings.TrimSpace(str[7:])
+ case strings.HasPrefix(str, "[ERR]"):
+ return Error, strings.TrimSpace(str[5:])
+ default:
+ return Info, str
+ }
+}
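
Note that pickLevel slices each prefix off by its exact byte length ([INFO] is 6 bytes, [ERR] 5, the rest 7), which is why every case carries its own index. A sketch of the adapter in use through the interface's StandardLogger method; the wiring of stdlogAdapter into that method happens outside this hunk, so the construction here is assumed:

    package main

    import hclog "github.com/hashicorp/go-hclog"

    func main() {
        appLogger := hclog.New(&hclog.LoggerOptions{Name: "app"})

        // With InferLevels set, bracketed prefixes are parsed and the
        // message is re-emitted at the detected level; lines without a
        // recognized prefix fall through to INFO.
        std := appLogger.StandardLogger(&hclog.StandardLoggerOptions{InferLevels: true})
        std.Println("[WARN] disk space low") // emitted as a WARN entry
        std.Println("plain message")         // emitted as an INFO entry
    }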
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/LICENSE
deleted file mode 100644
index ccae99f6..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/LICENSE
+++ /dev/null
@@ -1,25 +0,0 @@
-Copyright (c) 2012, 2013 Ugorji Nwoke.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without modification,
-are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-* Neither the name of the author nor the names of its contributors may be used
- to endorse or promote products derived from this software
- without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/0doc.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/0doc.go
deleted file mode 100644
index c14d810a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/0doc.go
+++ /dev/null
@@ -1,143 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-/*
-High Performance, Feature-Rich Idiomatic Go encoding library for msgpack and binc .
-
-Supported Serialization formats are:
-
- - msgpack: [https://github.com/msgpack/msgpack]
- - binc: [http://github.com/ugorji/binc]
-
-To install:
-
- go get github.com/ugorji/go/codec
-
-The idiomatic Go support is as seen in other encoding packages in
-the standard library (ie json, xml, gob, etc).
-
-Rich Feature Set includes:
-
- - Simple but extremely powerful and feature-rich API
- - Very High Performance.
- Our extensive benchmarks show us outperforming Gob, Json and Bson by 2-4X.
- This was achieved by taking extreme care on:
- - managing allocation
- - function frame size (important due to Go's use of split stacks),
- - reflection use (and by-passing reflection for common types)
- - recursion implications
- - zero-copy mode (encoding/decoding to byte slice without using temp buffers)
- - Correct.
- Care was taken to precisely handle corner cases like:
- overflows, nil maps and slices, nil value in stream, etc.
- - Efficient zero-copying into temporary byte buffers
- when encoding into or decoding from a byte slice.
- - Standard field renaming via tags
- - Encoding from any value
- (struct, slice, map, primitives, pointers, interface{}, etc)
- - Decoding into pointer to any non-nil typed value
- (struct, slice, map, int, float32, bool, string, reflect.Value, etc)
- - Supports extension functions to handle the encode/decode of custom types
- - Support Go 1.2 encoding.BinaryMarshaler/BinaryUnmarshaler
- - Schema-less decoding
- (decode into a pointer to a nil interface{} as opposed to a typed non-nil value).
- Includes Options to configure what specific map or slice type to use
- when decoding an encoded list or map into a nil interface{}
- - Provides a RPC Server and Client Codec for net/rpc communication protocol.
- - Msgpack Specific:
- - Provides extension functions to handle spec-defined extensions (binary, timestamp)
- - Options to resolve ambiguities in handling raw bytes (as string or []byte)
- during schema-less decoding (decoding into a nil interface{})
- - RPC Server/Client Codec for msgpack-rpc protocol defined at:
- https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
- - Fast Paths for some container types:
- For some container types, we circumvent reflection and its associated overhead
- and allocation costs, and encode/decode directly. These types are:
- []interface{}
- []int
- []string
- map[interface{}]interface{}
- map[int]interface{}
- map[string]interface{}
-
-Extension Support
-
-Users can register a function to handle the encoding or decoding of
-their custom types.
-
-There are no restrictions on what the custom type can be. Some examples:
-
- type BisSet []int
- type BitSet64 uint64
- type UUID string
- type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
- type GifImage struct { ... }
-
-As an illustration, MyStructWithUnexportedFields would normally be
-encoded as an empty map because it has no exported fields, while UUID
-would be encoded as a string. However, with extension support, you can
-encode any of these however you like.
-
-RPC
-
-RPC Client and Server Codecs are implemented, so the codecs can be used
-with the standard net/rpc package.
-
-Usage
-
-Typical usage model:
-
- // create and configure Handle
- var (
- bh codec.BincHandle
- mh codec.MsgpackHandle
- )
-
- mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
-
- // configure extensions
- // e.g. for msgpack, define functions and enable Time support for tag 1
- // mh.AddExt(reflect.TypeOf(time.Time{}), 1, myMsgpackTimeEncodeExtFn, myMsgpackTimeDecodeExtFn)
-
- // create and use decoder/encoder
- var (
- r io.Reader
- w io.Writer
- b []byte
- h = &bh // or mh to use msgpack
- )
-
- dec = codec.NewDecoder(r, h)
- dec = codec.NewDecoderBytes(b, h)
- err = dec.Decode(&v)
-
- enc = codec.NewEncoder(w, h)
- enc = codec.NewEncoderBytes(&b, h)
- err = enc.Encode(v)
-
- //RPC Server
- go func() {
- for {
- conn, err := listener.Accept()
- rpcCodec := codec.GoRpc.ServerCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
- rpc.ServeCodec(rpcCodec)
- }
- }()
-
- //RPC Communication (client side)
- conn, err = net.Dial("tcp", "localhost:5555")
- rpcCodec := codec.GoRpc.ClientCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
- client := rpc.NewClientWithCodec(rpcCodec)
-
-Representative Benchmark Results
-
-Run the benchmark suite using:
- go test -bi -bench=. -benchmem
-
-To run full benchmark suite (including against vmsgpack and bson),
-see notes in ext_dep_test.go
-
-*/
-package codec
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/README.md
deleted file mode 100644
index 6c95d1bf..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/README.md
+++ /dev/null
@@ -1,174 +0,0 @@
-# Codec
-
-High Performance and Feature-Rich Idiomatic Go Library providing
-encode/decode support for different serialization formats.
-
-Supported Serialization formats are:
-
- - msgpack: [https://github.com/msgpack/msgpack]
- - binc: [http://github.com/ugorji/binc]
-
-To install:
-
- go get github.com/ugorji/go/codec
-
-Online documentation: [http://godoc.org/github.com/ugorji/go/codec]
-
-The idiomatic Go support is as seen in other encoding packages in
-the standard library (ie json, xml, gob, etc).
-
-Rich Feature Set includes:
-
- - Simple but extremely powerful and feature-rich API
- - Very High Performance.
- Our extensive benchmarks show us outperforming Gob, Json and Bson by 2-4X.
- This was achieved by taking extreme care on:
- - managing allocation
- - function frame size (important due to Go's use of split stacks),
- - reflection use (and by-passing reflection for common types)
- - recursion implications
- - zero-copy mode (encoding/decoding to byte slice without using temp buffers)
- - Correct.
- Care was taken to precisely handle corner cases like:
- overflows, nil maps and slices, nil value in stream, etc.
- - Efficient zero-copying into temporary byte buffers
- when encoding into or decoding from a byte slice.
- - Standard field renaming via tags
- - Encoding from any value
- (struct, slice, map, primitives, pointers, interface{}, etc)
- - Decoding into pointer to any non-nil typed value
- (struct, slice, map, int, float32, bool, string, reflect.Value, etc)
- - Supports extension functions to handle the encode/decode of custom types
- - Support Go 1.2 encoding.BinaryMarshaler/BinaryUnmarshaler
- - Schema-less decoding
- (decode into a pointer to a nil interface{} as opposed to a typed non-nil value).
- Includes Options to configure what specific map or slice type to use
- when decoding an encoded list or map into a nil interface{}
- - Provides a RPC Server and Client Codec for net/rpc communication protocol.
- - Msgpack Specific:
- - Provides extension functions to handle spec-defined extensions (binary, timestamp)
- - Options to resolve ambiguities in handling raw bytes (as string or []byte)
- during schema-less decoding (decoding into a nil interface{})
- - RPC Server/Client Codec for msgpack-rpc protocol defined at:
- https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
- - Fast Paths for some container types:
- For some container types, we circumvent reflection and its associated overhead
- and allocation costs, and encode/decode directly. These types are:
- []interface{}
- []int
- []string
- map[interface{}]interface{}
- map[int]interface{}
- map[string]interface{}
-
-## Extension Support
-
-Users can register a function to handle the encoding or decoding of
-their custom types.
-
-There are no restrictions on what the custom type can be. Some examples:
-
- type BisSet []int
- type BitSet64 uint64
- type UUID string
- type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
- type GifImage struct { ... }
-
-As an illustration, MyStructWithUnexportedFields would normally be
-encoded as an empty map because it has no exported fields, while UUID
-would be encoded as a string. However, with extension support, you can
-encode any of these however you like.
-
-## RPC
-
-RPC Client and Server Codecs are implemented, so the codecs can be used
-with the standard net/rpc package.
-
-## Usage
-
-Typical usage model:
-
- // create and configure Handle
- var (
- bh codec.BincHandle
- mh codec.MsgpackHandle
- )
-
- mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
-
- // configure extensions
- // e.g. for msgpack, define functions and enable Time support for tag 1
- // mh.AddExt(reflect.TypeOf(time.Time{}), 1, myMsgpackTimeEncodeExtFn, myMsgpackTimeDecodeExtFn)
-
- // create and use decoder/encoder
- var (
- r io.Reader
- w io.Writer
- b []byte
- h = &bh // or mh to use msgpack
- )
-
- dec = codec.NewDecoder(r, h)
- dec = codec.NewDecoderBytes(b, h)
- err = dec.Decode(&v)
-
- enc = codec.NewEncoder(w, h)
- enc = codec.NewEncoderBytes(&b, h)
- err = enc.Encode(v)
-
- //RPC Server
- go func() {
- for {
- conn, err := listener.Accept()
- rpcCodec := codec.GoRpc.ServerCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
- rpc.ServeCodec(rpcCodec)
- }
- }()
-
- //RPC Communication (client side)
- conn, err = net.Dial("tcp", "localhost:5555")
- rpcCodec := codec.GoRpc.ClientCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
- client := rpc.NewClientWithCodec(rpcCodec)
-
-## Representative Benchmark Results
-
-A sample run of benchmark using "go test -bi -bench=. -benchmem":
-
- /proc/cpuinfo: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz (HT)
-
- ..............................................
- BENCHMARK INIT: 2013-10-16 11:02:50.345970786 -0400 EDT
- To run full benchmark comparing encodings (MsgPack, Binc, JSON, GOB, etc), use: "go test -bench=."
- Benchmark:
- Struct recursive Depth: 1
- ApproxDeepSize Of benchmark Struct: 4694 bytes
- Benchmark One-Pass Run:
- v-msgpack: len: 1600 bytes
- bson: len: 3025 bytes
- msgpack: len: 1560 bytes
- binc: len: 1187 bytes
- gob: len: 1972 bytes
- json: len: 2538 bytes
- ..............................................
- PASS
- Benchmark__Msgpack____Encode 50000 54359 ns/op 14953 B/op 83 allocs/op
- Benchmark__Msgpack____Decode 10000 106531 ns/op 14990 B/op 410 allocs/op
- Benchmark__Binc_NoSym_Encode 50000 53956 ns/op 14966 B/op 83 allocs/op
- Benchmark__Binc_NoSym_Decode 10000 103751 ns/op 14529 B/op 386 allocs/op
- Benchmark__Binc_Sym___Encode 50000 65961 ns/op 17130 B/op 88 allocs/op
- Benchmark__Binc_Sym___Decode 10000 106310 ns/op 15857 B/op 287 allocs/op
- Benchmark__Gob________Encode 10000 135944 ns/op 21189 B/op 237 allocs/op
- Benchmark__Gob________Decode 5000 405390 ns/op 83460 B/op 1841 allocs/op
- Benchmark__Json_______Encode 20000 79412 ns/op 13874 B/op 102 allocs/op
- Benchmark__Json_______Decode 10000 247979 ns/op 14202 B/op 493 allocs/op
- Benchmark__Bson_______Encode 10000 121762 ns/op 27814 B/op 514 allocs/op
- Benchmark__Bson_______Decode 10000 162126 ns/op 16514 B/op 789 allocs/op
- Benchmark__VMsgpack___Encode 50000 69155 ns/op 12370 B/op 344 allocs/op
- Benchmark__VMsgpack___Decode 10000 151609 ns/op 20307 B/op 571 allocs/op
- ok ugorji.net/codec 30.827s
-
-To run full benchmark suite (including against vmsgpack and bson),
-see notes in ext\_dep\_test.go
-
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/binc.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/binc.go
deleted file mode 100644
index 2bb5e8fe..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/binc.go
+++ /dev/null
@@ -1,786 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import (
- "math"
- // "reflect"
- // "sync/atomic"
- "time"
- //"fmt"
-)
-
-const bincDoPrune = true // No longer needed. Needed before as C lib did not support pruning.
-
-//var _ = fmt.Printf
-
-// vd as low 4 bits (there are 16 slots)
-const (
- bincVdSpecial byte = iota
- bincVdPosInt
- bincVdNegInt
- bincVdFloat
-
- bincVdString
- bincVdByteArray
- bincVdArray
- bincVdMap
-
- bincVdTimestamp
- bincVdSmallInt
- bincVdUnicodeOther
- bincVdSymbol
-
- bincVdDecimal
- _ // open slot
- _ // open slot
- bincVdCustomExt = 0x0f
-)
-
-const (
- bincSpNil byte = iota
- bincSpFalse
- bincSpTrue
- bincSpNan
- bincSpPosInf
- bincSpNegInf
- bincSpZeroFloat
- bincSpZero
- bincSpNegOne
-)
-
-const (
- bincFlBin16 byte = iota
- bincFlBin32
- _ // bincFlBin32e
- bincFlBin64
- _ // bincFlBin64e
- // others not currently supported
-)
-
-type bincEncDriver struct {
- w encWriter
- m map[string]uint16 // symbols
- s uint32 // symbols sequencer
- b [8]byte
-}
-
-func (e *bincEncDriver) isBuiltinType(rt uintptr) bool {
- return rt == timeTypId
-}
-
-func (e *bincEncDriver) encodeBuiltin(rt uintptr, v interface{}) {
- switch rt {
- case timeTypId:
- bs := encodeTime(v.(time.Time))
- e.w.writen1(bincVdTimestamp<<4 | uint8(len(bs)))
- e.w.writeb(bs)
- }
-}
-
-func (e *bincEncDriver) encodeNil() {
- e.w.writen1(bincVdSpecial<<4 | bincSpNil)
-}
-
-func (e *bincEncDriver) encodeBool(b bool) {
- if b {
- e.w.writen1(bincVdSpecial<<4 | bincSpTrue)
- } else {
- e.w.writen1(bincVdSpecial<<4 | bincSpFalse)
- }
-}
-
-func (e *bincEncDriver) encodeFloat32(f float32) {
- if f == 0 {
- e.w.writen1(bincVdSpecial<<4 | bincSpZeroFloat)
- return
- }
- e.w.writen1(bincVdFloat<<4 | bincFlBin32)
- e.w.writeUint32(math.Float32bits(f))
-}
-
-func (e *bincEncDriver) encodeFloat64(f float64) {
- if f == 0 {
- e.w.writen1(bincVdSpecial<<4 | bincSpZeroFloat)
- return
- }
- bigen.PutUint64(e.b[:], math.Float64bits(f))
- if bincDoPrune {
- i := 7
- for ; i >= 0 && (e.b[i] == 0); i-- {
- }
- i++
- if i <= 6 {
- e.w.writen1(bincVdFloat<<4 | 0x8 | bincFlBin64)
- e.w.writen1(byte(i))
- e.w.writeb(e.b[:i])
- return
- }
- }
- e.w.writen1(bincVdFloat<<4 | bincFlBin64)
- e.w.writeb(e.b[:])
-}
-
-func (e *bincEncDriver) encIntegerPrune(bd byte, pos bool, v uint64, lim uint8) {
- if lim == 4 {
- bigen.PutUint32(e.b[:lim], uint32(v))
- } else {
- bigen.PutUint64(e.b[:lim], v)
- }
- if bincDoPrune {
- i := pruneSignExt(e.b[:lim], pos)
- e.w.writen1(bd | lim - 1 - byte(i))
- e.w.writeb(e.b[i:lim])
- } else {
- e.w.writen1(bd | lim - 1)
- e.w.writeb(e.b[:lim])
- }
-}
-
-func (e *bincEncDriver) encodeInt(v int64) {
- const nbd byte = bincVdNegInt << 4
- switch {
- case v >= 0:
- e.encUint(bincVdPosInt<<4, true, uint64(v))
- case v == -1:
- e.w.writen1(bincVdSpecial<<4 | bincSpNegOne)
- default:
- e.encUint(bincVdNegInt<<4, false, uint64(-v))
- }
-}
-
-func (e *bincEncDriver) encodeUint(v uint64) {
- e.encUint(bincVdPosInt<<4, true, v)
-}
-
-func (e *bincEncDriver) encUint(bd byte, pos bool, v uint64) {
- switch {
- case v == 0:
- e.w.writen1(bincVdSpecial<<4 | bincSpZero)
- case pos && v >= 1 && v <= 16:
- e.w.writen1(bincVdSmallInt<<4 | byte(v-1))
- case v <= math.MaxUint8:
- e.w.writen2(bd|0x0, byte(v))
- case v <= math.MaxUint16:
- e.w.writen1(bd | 0x01)
- e.w.writeUint16(uint16(v))
- case v <= math.MaxUint32:
- e.encIntegerPrune(bd, pos, v, 4)
- default:
- e.encIntegerPrune(bd, pos, v, 8)
- }
-}
-
-func (e *bincEncDriver) encodeExtPreamble(xtag byte, length int) {
- e.encLen(bincVdCustomExt<<4, uint64(length))
- e.w.writen1(xtag)
-}
-
-func (e *bincEncDriver) encodeArrayPreamble(length int) {
- e.encLen(bincVdArray<<4, uint64(length))
-}
-
-func (e *bincEncDriver) encodeMapPreamble(length int) {
- e.encLen(bincVdMap<<4, uint64(length))
-}
-
-func (e *bincEncDriver) encodeString(c charEncoding, v string) {
- l := uint64(len(v))
- e.encBytesLen(c, l)
- if l > 0 {
- e.w.writestr(v)
- }
-}
-
-func (e *bincEncDriver) encodeSymbol(v string) {
- // if WriteSymbolsNoRefs {
- // e.encodeString(c_UTF8, v)
- // return
- // }
-
- //symbols only offer benefit when string length > 1.
- //This is because strings with length 1 take only 2 bytes to store
- //(bd with embedded length, and single byte for string val).
-
- l := len(v)
- switch l {
- case 0:
- e.encBytesLen(c_UTF8, 0)
- return
- case 1:
- e.encBytesLen(c_UTF8, 1)
- e.w.writen1(v[0])
- return
- }
- if e.m == nil {
- e.m = make(map[string]uint16, 16)
- }
- ui, ok := e.m[v]
- if ok {
- if ui <= math.MaxUint8 {
- e.w.writen2(bincVdSymbol<<4, byte(ui))
- } else {
- e.w.writen1(bincVdSymbol<<4 | 0x8)
- e.w.writeUint16(ui)
- }
- } else {
- e.s++
- ui = uint16(e.s)
- //ui = uint16(atomic.AddUint32(&e.s, 1))
- e.m[v] = ui
- var lenprec uint8
- switch {
- case l <= math.MaxUint8:
- // lenprec = 0
- case l <= math.MaxUint16:
- lenprec = 1
- case int64(l) <= math.MaxUint32:
- lenprec = 2
- default:
- lenprec = 3
- }
- if ui <= math.MaxUint8 {
- e.w.writen2(bincVdSymbol<<4|0x0|0x4|lenprec, byte(ui))
- } else {
- e.w.writen1(bincVdSymbol<<4 | 0x8 | 0x4 | lenprec)
- e.w.writeUint16(ui)
- }
- switch lenprec {
- case 0:
- e.w.writen1(byte(l))
- case 1:
- e.w.writeUint16(uint16(l))
- case 2:
- e.w.writeUint32(uint32(l))
- default:
- e.w.writeUint64(uint64(l))
- }
- e.w.writestr(v)
- }
-}
-
-func (e *bincEncDriver) encodeStringBytes(c charEncoding, v []byte) {
- l := uint64(len(v))
- e.encBytesLen(c, l)
- if l > 0 {
- e.w.writeb(v)
- }
-}
-
-func (e *bincEncDriver) encBytesLen(c charEncoding, length uint64) {
- //TODO: support bincUnicodeOther (for now, just use string or bytearray)
- if c == c_RAW {
- e.encLen(bincVdByteArray<<4, length)
- } else {
- e.encLen(bincVdString<<4, length)
- }
-}
-
-func (e *bincEncDriver) encLen(bd byte, l uint64) {
- if l < 12 {
- e.w.writen1(bd | uint8(l+4))
- } else {
- e.encLenNumber(bd, l)
- }
-}
-
-func (e *bincEncDriver) encLenNumber(bd byte, v uint64) {
- switch {
- case v <= math.MaxUint8:
- e.w.writen2(bd, byte(v))
- case v <= math.MaxUint16:
- e.w.writen1(bd | 0x01)
- e.w.writeUint16(uint16(v))
- case v <= math.MaxUint32:
- e.w.writen1(bd | 0x02)
- e.w.writeUint32(uint32(v))
- default:
- e.w.writen1(bd | 0x03)
- e.w.writeUint64(uint64(v))
- }
-}
-
-//------------------------------------
-
-type bincDecDriver struct {
- r decReader
- bdRead bool
- bdType valueType
- bd byte
- vd byte
- vs byte
- b [8]byte
- m map[uint32]string // symbols (use uint32 as key, as map optimizes for it)
-}
-
-func (d *bincDecDriver) initReadNext() {
- if d.bdRead {
- return
- }
- d.bd = d.r.readn1()
- d.vd = d.bd >> 4
- d.vs = d.bd & 0x0f
- d.bdRead = true
- d.bdType = valueTypeUnset
-}
-
-func (d *bincDecDriver) currentEncodedType() valueType {
- if d.bdType == valueTypeUnset {
- switch d.vd {
- case bincVdSpecial:
- switch d.vs {
- case bincSpNil:
- d.bdType = valueTypeNil
- case bincSpFalse, bincSpTrue:
- d.bdType = valueTypeBool
- case bincSpNan, bincSpNegInf, bincSpPosInf, bincSpZeroFloat:
- d.bdType = valueTypeFloat
- case bincSpZero:
- d.bdType = valueTypeUint
- case bincSpNegOne:
- d.bdType = valueTypeInt
- default:
- decErr("currentEncodedType: Unrecognized special value 0x%x", d.vs)
- }
- case bincVdSmallInt:
- d.bdType = valueTypeUint
- case bincVdPosInt:
- d.bdType = valueTypeUint
- case bincVdNegInt:
- d.bdType = valueTypeInt
- case bincVdFloat:
- d.bdType = valueTypeFloat
- case bincVdString:
- d.bdType = valueTypeString
- case bincVdSymbol:
- d.bdType = valueTypeSymbol
- case bincVdByteArray:
- d.bdType = valueTypeBytes
- case bincVdTimestamp:
- d.bdType = valueTypeTimestamp
- case bincVdCustomExt:
- d.bdType = valueTypeExt
- case bincVdArray:
- d.bdType = valueTypeArray
- case bincVdMap:
- d.bdType = valueTypeMap
- default:
- decErr("currentEncodedType: Unrecognized d.vd: 0x%x", d.vd)
- }
- }
- return d.bdType
-}
-
-func (d *bincDecDriver) tryDecodeAsNil() bool {
- if d.bd == bincVdSpecial<<4|bincSpNil {
- d.bdRead = false
- return true
- }
- return false
-}
-
-func (d *bincDecDriver) isBuiltinType(rt uintptr) bool {
- return rt == timeTypId
-}
-
-func (d *bincDecDriver) decodeBuiltin(rt uintptr, v interface{}) {
- switch rt {
- case timeTypId:
- if d.vd != bincVdTimestamp {
- decErr("Invalid d.vd. Expecting 0x%x. Received: 0x%x", bincVdTimestamp, d.vd)
- }
- tt, err := decodeTime(d.r.readn(int(d.vs)))
- if err != nil {
- panic(err)
- }
- var vt *time.Time = v.(*time.Time)
- *vt = tt
- d.bdRead = false
- }
-}
-
-func (d *bincDecDriver) decFloatPre(vs, defaultLen byte) {
- if vs&0x8 == 0 {
- d.r.readb(d.b[0:defaultLen])
- } else {
- l := d.r.readn1()
- if l > 8 {
- decErr("At most 8 bytes used to represent float. Received: %v bytes", l)
- }
- for i := l; i < 8; i++ {
- d.b[i] = 0
- }
- d.r.readb(d.b[0:l])
- }
-}
-
-func (d *bincDecDriver) decFloat() (f float64) {
- //if true { f = math.Float64frombits(d.r.readUint64()); break; }
- switch vs := d.vs; vs & 0x7 {
- case bincFlBin32:
- d.decFloatPre(vs, 4)
- f = float64(math.Float32frombits(bigen.Uint32(d.b[0:4])))
- case bincFlBin64:
- d.decFloatPre(vs, 8)
- f = math.Float64frombits(bigen.Uint64(d.b[0:8]))
- default:
- decErr("only float32 and float64 are supported. d.vd: 0x%x, d.vs: 0x%x", d.vd, d.vs)
- }
- return
-}
-
-func (d *bincDecDriver) decUint() (v uint64) {
- // need to inline the code (interface conversion and type assertion expensive)
- switch d.vs {
- case 0:
- v = uint64(d.r.readn1())
- case 1:
- d.r.readb(d.b[6:])
- v = uint64(bigen.Uint16(d.b[6:]))
- case 2:
- d.b[4] = 0
- d.r.readb(d.b[5:])
- v = uint64(bigen.Uint32(d.b[4:]))
- case 3:
- d.r.readb(d.b[4:])
- v = uint64(bigen.Uint32(d.b[4:]))
- case 4, 5, 6:
- lim := int(7 - d.vs)
- d.r.readb(d.b[lim:])
- for i := 0; i < lim; i++ {
- d.b[i] = 0
- }
- v = uint64(bigen.Uint64(d.b[:]))
- case 7:
- d.r.readb(d.b[:])
- v = uint64(bigen.Uint64(d.b[:]))
- default:
- decErr("unsigned integers with greater than 64 bits of precision not supported")
- }
- return
-}
-
-func (d *bincDecDriver) decIntAny() (ui uint64, i int64, neg bool) {
- switch d.vd {
- case bincVdPosInt:
- ui = d.decUint()
- i = int64(ui)
- case bincVdNegInt:
- ui = d.decUint()
- i = -(int64(ui))
- neg = true
- case bincVdSmallInt:
- i = int64(d.vs) + 1
- ui = uint64(d.vs) + 1
- case bincVdSpecial:
- switch d.vs {
- case bincSpZero:
- //i = 0
- case bincSpNegOne:
- neg = true
- ui = 1
- i = -1
- default:
- decErr("numeric decode fails for special value: d.vs: 0x%x", d.vs)
- }
- default:
- decErr("number can only be decoded from uint or int values. d.bd: 0x%x, d.vd: 0x%x", d.bd, d.vd)
- }
- return
-}
-
-func (d *bincDecDriver) decodeInt(bitsize uint8) (i int64) {
- _, i, _ = d.decIntAny()
- checkOverflow(0, i, bitsize)
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decodeUint(bitsize uint8) (ui uint64) {
- ui, i, neg := d.decIntAny()
- if neg {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- checkOverflow(ui, 0, bitsize)
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decodeFloat(chkOverflow32 bool) (f float64) {
- switch d.vd {
- case bincVdSpecial:
- d.bdRead = false
- switch d.vs {
- case bincSpNan:
- return math.NaN()
- case bincSpPosInf:
- return math.Inf(1)
- case bincSpZeroFloat, bincSpZero:
- return
- case bincSpNegInf:
- return math.Inf(-1)
- default:
- decErr("Invalid d.vs decoding float where d.vd=bincVdSpecial: %v", d.vs)
- }
- case bincVdFloat:
- f = d.decFloat()
- default:
- _, i, _ := d.decIntAny()
- f = float64(i)
- }
- checkOverflowFloat32(f, chkOverflow32)
- d.bdRead = false
- return
-}
-
-// bool can be decoded from bool only (single byte).
-func (d *bincDecDriver) decodeBool() (b bool) {
- switch d.bd {
- case (bincVdSpecial | bincSpFalse):
- // b = false
- case (bincVdSpecial | bincSpTrue):
- b = true
- default:
- decErr("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
- }
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) readMapLen() (length int) {
- if d.vd != bincVdMap {
- decErr("Invalid d.vd for map. Expecting 0x%x. Got: 0x%x", bincVdMap, d.vd)
- }
- length = d.decLen()
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) readArrayLen() (length int) {
- if d.vd != bincVdArray {
- decErr("Invalid d.vd for array. Expecting 0x%x. Got: 0x%x", bincVdArray, d.vd)
- }
- length = d.decLen()
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decLen() int {
- if d.vs <= 3 {
- return int(d.decUint())
- }
- return int(d.vs - 4)
-}
-
-func (d *bincDecDriver) decodeString() (s string) {
- switch d.vd {
- case bincVdString, bincVdByteArray:
- if length := d.decLen(); length > 0 {
- s = string(d.r.readn(length))
- }
- case bincVdSymbol:
- //from vs: extract numSymbolBytes, containsStringVal, strLenPrecision,
- //extract symbol
- //if containsStringVal, read it and put in map
- //else look in map for string value
- var symbol uint32
- vs := d.vs
- //fmt.Printf(">>>> d.vs: 0b%b, & 0x8: %v, & 0x4: %v\n", d.vs, vs & 0x8, vs & 0x4)
- if vs&0x8 == 0 {
- symbol = uint32(d.r.readn1())
- } else {
- symbol = uint32(d.r.readUint16())
- }
- if d.m == nil {
- d.m = make(map[uint32]string, 16)
- }
-
- if vs&0x4 == 0 {
- s = d.m[symbol]
- } else {
- var slen int
- switch vs & 0x3 {
- case 0:
- slen = int(d.r.readn1())
- case 1:
- slen = int(d.r.readUint16())
- case 2:
- slen = int(d.r.readUint32())
- case 3:
- slen = int(d.r.readUint64())
- }
- s = string(d.r.readn(slen))
- d.m[symbol] = s
- }
- default:
- decErr("Invalid d.vd for string. Expecting string:0x%x, bytearray:0x%x or symbol: 0x%x. Got: 0x%x",
- bincVdString, bincVdByteArray, bincVdSymbol, d.vd)
- }
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decodeBytes(bs []byte) (bsOut []byte, changed bool) {
- var clen int
- switch d.vd {
- case bincVdString, bincVdByteArray:
- clen = d.decLen()
- default:
- decErr("Invalid d.vd for bytes. Expecting string:0x%x or bytearray:0x%x. Got: 0x%x",
- bincVdString, bincVdByteArray, d.vd)
- }
- if clen > 0 {
- // if no contents in stream, don't update the passed byteslice
- if len(bs) != clen {
- if len(bs) > clen {
- bs = bs[:clen]
- } else {
- bs = make([]byte, clen)
- }
- bsOut = bs
- changed = true
- }
- d.r.readb(bs)
- }
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
- switch d.vd {
- case bincVdCustomExt:
- l := d.decLen()
- xtag = d.r.readn1()
- if verifyTag && xtag != tag {
- decErr("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
- }
- xbs = d.r.readn(l)
- case bincVdByteArray:
- xbs, _ = d.decodeBytes(nil)
- default:
- decErr("Invalid d.vd for extensions (Expecting extensions or byte array). Got: 0x%x", d.vd)
- }
- d.bdRead = false
- return
-}
-
-func (d *bincDecDriver) decodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
- d.initReadNext()
-
- switch d.vd {
- case bincVdSpecial:
- switch d.vs {
- case bincSpNil:
- vt = valueTypeNil
- case bincSpFalse:
- vt = valueTypeBool
- v = false
- case bincSpTrue:
- vt = valueTypeBool
- v = true
- case bincSpNan:
- vt = valueTypeFloat
- v = math.NaN()
- case bincSpPosInf:
- vt = valueTypeFloat
- v = math.Inf(1)
- case bincSpNegInf:
- vt = valueTypeFloat
- v = math.Inf(-1)
- case bincSpZeroFloat:
- vt = valueTypeFloat
- v = float64(0)
- case bincSpZero:
- vt = valueTypeUint
- v = int64(0) // int8(0)
- case bincSpNegOne:
- vt = valueTypeInt
- v = int64(-1) // int8(-1)
- default:
- decErr("decodeNaked: Unrecognized special value 0x%x", d.vs)
- }
- case bincVdSmallInt:
- vt = valueTypeUint
- v = uint64(int8(d.vs)) + 1 // int8(d.vs) + 1
- case bincVdPosInt:
- vt = valueTypeUint
- v = d.decUint()
- case bincVdNegInt:
- vt = valueTypeInt
- v = -(int64(d.decUint()))
- case bincVdFloat:
- vt = valueTypeFloat
- v = d.decFloat()
- case bincVdSymbol:
- vt = valueTypeSymbol
- v = d.decodeString()
- case bincVdString:
- vt = valueTypeString
- v = d.decodeString()
- case bincVdByteArray:
- vt = valueTypeBytes
- v, _ = d.decodeBytes(nil)
- case bincVdTimestamp:
- vt = valueTypeTimestamp
- tt, err := decodeTime(d.r.readn(int(d.vs)))
- if err != nil {
- panic(err)
- }
- v = tt
- case bincVdCustomExt:
- vt = valueTypeExt
- l := d.decLen()
- var re RawExt
- re.Tag = d.r.readn1()
- re.Data = d.r.readn(l)
- v = &re
- vt = valueTypeExt
- case bincVdArray:
- vt = valueTypeArray
- decodeFurther = true
- case bincVdMap:
- vt = valueTypeMap
- decodeFurther = true
- default:
- decErr("decodeNaked: Unrecognized d.vd: 0x%x", d.vd)
- }
-
- if !decodeFurther {
- d.bdRead = false
- }
- return
-}
-
-//------------------------------------
-
-//BincHandle is a Handle for the Binc Schema-Free Encoding Format
-//defined at https://github.com/ugorji/binc .
-//
-//BincHandle currently supports all Binc features with the following EXCEPTIONS:
-// - only integers up to 64 bits of precision are supported.
-// big integers are unsupported.
-// - Only IEEE 754 binary32 and binary64 floats are supported (ie Go float32 and float64 types).
-// extended precision and decimal IEEE 754 floats are unsupported.
-// - Only UTF-8 strings supported.
-// Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.
-//Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.
-type BincHandle struct {
- BasicHandle
-}
-
-func (h *BincHandle) newEncDriver(w encWriter) encDriver {
- return &bincEncDriver{w: w}
-}
-
-func (h *BincHandle) newDecDriver(r decReader) decDriver {
- return &bincDecDriver{r: r}
-}
-
-func (_ *BincHandle) writeExt() bool {
- return true
-}
-
-func (h *BincHandle) getBasicHandle() *BasicHandle {
- return &h.BasicHandle
-}
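
Every binc item begins with a single descriptor byte: the value descriptor (bincVd*) in the high nibble and a value specifier, or a small embedded payload, in the low nibble. That is why the encoder writes bincVdX<<4 | vs throughout, and why initReadNext splits d.bd into d.vd and d.vs. A standalone illustration of the packing, not part of the vendored code:

    package main

    import "fmt"

    func main() {
        // Mirrors the constants above: bincVdSpecial is 0, bincSpTrue is 2.
        const bincVdSpecial, bincSpTrue byte = 0, 2

        bd := bincVdSpecial<<4 | bincSpTrue // descriptor byte on the wire
        vd, vs := bd>>4, bd&0x0f            // the decoder's initReadNext split

        fmt.Printf("bd=0x%02x vd=%d vs=%d\n", bd, vd, vs) // bd=0x02 vd=0 vs=2
    }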
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/decode.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/decode.go
deleted file mode 100644
index 87bef2b9..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/decode.go
+++ /dev/null
@@ -1,1048 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import (
- "io"
- "reflect"
- // "runtime/debug"
-)
-
-// Some tagging information for error messages.
-const (
- msgTagDec = "codec.decoder"
- msgBadDesc = "Unrecognized descriptor byte"
- msgDecCannotExpandArr = "cannot expand go array from %v to stream length: %v"
-)
-
-// decReader abstracts the reading source, allowing implementations that can
-// read from an io.Reader or directly off a byte slice with zero-copying.
-type decReader interface {
- readn(n int) []byte
- readb([]byte)
- readn1() uint8
- readUint16() uint16
- readUint32() uint32
- readUint64() uint64
-}
-
-type decDriver interface {
- initReadNext()
- tryDecodeAsNil() bool
- currentEncodedType() valueType
- isBuiltinType(rt uintptr) bool
- decodeBuiltin(rt uintptr, v interface{})
- //decodeNaked: Numbers are decoded as int64, uint64, float64 only (no smaller sized number types).
- decodeNaked() (v interface{}, vt valueType, decodeFurther bool)
- decodeInt(bitsize uint8) (i int64)
- decodeUint(bitsize uint8) (ui uint64)
- decodeFloat(chkOverflow32 bool) (f float64)
- decodeBool() (b bool)
- // decodeString can also decode symbols
- decodeString() (s string)
- decodeBytes(bs []byte) (bsOut []byte, changed bool)
- decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []byte)
- readMapLen() int
- readArrayLen() int
-}
-
-type DecodeOptions struct {
- // An instance of MapType is used during schema-less decoding of a map in the stream.
- // If nil, we use map[interface{}]interface{}
- MapType reflect.Type
- // An instance of SliceType is used during schema-less decoding of an array in the stream.
- // If nil, we use []interface{}
- SliceType reflect.Type
- // ErrorIfNoField controls whether an error is returned when decoding a map
- // from a codec stream into a struct, and no matching struct field is found.
- ErrorIfNoField bool
-}
-
-// ------------------------------------
-
-// ioDecReader is a decReader that reads off an io.Reader
-type ioDecReader struct {
- r io.Reader
- br io.ByteReader
- x [8]byte //temp byte array re-used internally for efficiency
-}
-
-func (z *ioDecReader) readn(n int) (bs []byte) {
- if n <= 0 {
- return
- }
- bs = make([]byte, n)
- if _, err := io.ReadAtLeast(z.r, bs, n); err != nil {
- panic(err)
- }
- return
-}
-
-func (z *ioDecReader) readb(bs []byte) {
- if _, err := io.ReadAtLeast(z.r, bs, len(bs)); err != nil {
- panic(err)
- }
-}
-
-func (z *ioDecReader) readn1() uint8 {
- if z.br != nil {
- b, err := z.br.ReadByte()
- if err != nil {
- panic(err)
- }
- return b
- }
- z.readb(z.x[:1])
- return z.x[0]
-}
-
-func (z *ioDecReader) readUint16() uint16 {
- z.readb(z.x[:2])
- return bigen.Uint16(z.x[:2])
-}
-
-func (z *ioDecReader) readUint32() uint32 {
- z.readb(z.x[:4])
- return bigen.Uint32(z.x[:4])
-}
-
-func (z *ioDecReader) readUint64() uint64 {
- z.readb(z.x[:8])
- return bigen.Uint64(z.x[:8])
-}
-
-// ------------------------------------
-
-// bytesDecReader is a decReader that reads off a byte slice with zero copying
-type bytesDecReader struct {
- b []byte // data
- c int // cursor
- a int // available
-}
-
-func (z *bytesDecReader) consume(n int) (oldcursor int) {
- if z.a == 0 {
- panic(io.EOF)
- }
- if n > z.a {
- decErr("Trying to read %v bytes. Only %v available", n, z.a)
- }
- // z.checkAvailable(n)
- oldcursor = z.c
- z.c = oldcursor + n
- z.a = z.a - n
- return
-}
-
-func (z *bytesDecReader) readn(n int) (bs []byte) {
- if n <= 0 {
- return
- }
- c0 := z.consume(n)
- bs = z.b[c0:z.c]
- return
-}
-
-func (z *bytesDecReader) readb(bs []byte) {
- copy(bs, z.readn(len(bs)))
-}
-
-func (z *bytesDecReader) readn1() uint8 {
- c0 := z.consume(1)
- return z.b[c0]
-}
-
-// Use binaryEncoding helper for 4 and 8 bits, but inline it for 2 bits
-// creating temp slice variable and copying it to helper function is expensive
-// for just 2 bits.
-
-func (z *bytesDecReader) readUint16() uint16 {
- c0 := z.consume(2)
- return uint16(z.b[c0+1]) | uint16(z.b[c0])<<8
-}
-
-func (z *bytesDecReader) readUint32() uint32 {
- c0 := z.consume(4)
- return bigen.Uint32(z.b[c0:z.c])
-}
-
-func (z *bytesDecReader) readUint64() uint64 {
- c0 := z.consume(8)
- return bigen.Uint64(z.b[c0:z.c])
-}
-
-// ------------------------------------
-
-// decFnInfo has methods for registering handling decoding of a specific type
-// based on some characteristics (builtin, extension, reflect Kind, etc)
-type decFnInfo struct {
- ti *typeInfo
- d *Decoder
- dd decDriver
- xfFn func(reflect.Value, []byte) error
- xfTag byte
- array bool
-}
-
-func (f *decFnInfo) builtin(rv reflect.Value) {
- f.dd.decodeBuiltin(f.ti.rtid, rv.Addr().Interface())
-}
-
-func (f *decFnInfo) rawExt(rv reflect.Value) {
- xtag, xbs := f.dd.decodeExt(false, 0)
- rv.Field(0).SetUint(uint64(xtag))
- rv.Field(1).SetBytes(xbs)
-}
-
-func (f *decFnInfo) ext(rv reflect.Value) {
- _, xbs := f.dd.decodeExt(true, f.xfTag)
- if fnerr := f.xfFn(rv, xbs); fnerr != nil {
- panic(fnerr)
- }
-}
-
-func (f *decFnInfo) binaryMarshal(rv reflect.Value) {
- var bm binaryUnmarshaler
- if f.ti.unmIndir == -1 {
- bm = rv.Addr().Interface().(binaryUnmarshaler)
- } else if f.ti.unmIndir == 0 {
- bm = rv.Interface().(binaryUnmarshaler)
- } else {
- for j, k := int8(0), f.ti.unmIndir; j < k; j++ {
- if rv.IsNil() {
- rv.Set(reflect.New(rv.Type().Elem()))
- }
- rv = rv.Elem()
- }
- bm = rv.Interface().(binaryUnmarshaler)
- }
- xbs, _ := f.dd.decodeBytes(nil)
- if fnerr := bm.UnmarshalBinary(xbs); fnerr != nil {
- panic(fnerr)
- }
-}
-
-func (f *decFnInfo) kErr(rv reflect.Value) {
- decErr("Unhandled value for kind: %v: %s", rv.Kind(), msgBadDesc)
-}
-
-func (f *decFnInfo) kString(rv reflect.Value) {
- rv.SetString(f.dd.decodeString())
-}
-
-func (f *decFnInfo) kBool(rv reflect.Value) {
- rv.SetBool(f.dd.decodeBool())
-}
-
-func (f *decFnInfo) kInt(rv reflect.Value) {
- rv.SetInt(f.dd.decodeInt(intBitsize))
-}
-
-func (f *decFnInfo) kInt64(rv reflect.Value) {
- rv.SetInt(f.dd.decodeInt(64))
-}
-
-func (f *decFnInfo) kInt32(rv reflect.Value) {
- rv.SetInt(f.dd.decodeInt(32))
-}
-
-func (f *decFnInfo) kInt8(rv reflect.Value) {
- rv.SetInt(f.dd.decodeInt(8))
-}
-
-func (f *decFnInfo) kInt16(rv reflect.Value) {
- rv.SetInt(f.dd.decodeInt(16))
-}
-
-func (f *decFnInfo) kFloat32(rv reflect.Value) {
- rv.SetFloat(f.dd.decodeFloat(true))
-}
-
-func (f *decFnInfo) kFloat64(rv reflect.Value) {
- rv.SetFloat(f.dd.decodeFloat(false))
-}
-
-func (f *decFnInfo) kUint8(rv reflect.Value) {
- rv.SetUint(f.dd.decodeUint(8))
-}
-
-func (f *decFnInfo) kUint64(rv reflect.Value) {
- rv.SetUint(f.dd.decodeUint(64))
-}
-
-func (f *decFnInfo) kUint(rv reflect.Value) {
- rv.SetUint(f.dd.decodeUint(uintBitsize))
-}
-
-func (f *decFnInfo) kUint32(rv reflect.Value) {
- rv.SetUint(f.dd.decodeUint(32))
-}
-
-func (f *decFnInfo) kUint16(rv reflect.Value) {
- rv.SetUint(f.dd.decodeUint(16))
-}
-
-// func (f *decFnInfo) kPtr(rv reflect.Value) {
-// debugf(">>>>>>> ??? decode kPtr called - shouldn't get called")
-// if rv.IsNil() {
-// rv.Set(reflect.New(rv.Type().Elem()))
-// }
-// f.d.decodeValue(rv.Elem())
-// }
-
-func (f *decFnInfo) kInterface(rv reflect.Value) {
- // debugf("\t===> kInterface")
- if !rv.IsNil() {
- f.d.decodeValue(rv.Elem())
- return
- }
- // nil interface:
- // use some hieristics to set the nil interface to an
- // appropriate value based on the first byte read (byte descriptor bd)
- v, vt, decodeFurther := f.dd.decodeNaked()
- if vt == valueTypeNil {
- return
- }
- // Cannot decode into nil interface with methods (e.g. error, io.Reader, etc)
- // if non-nil value in stream.
- if num := f.ti.rt.NumMethod(); num > 0 {
- decErr("decodeValue: Cannot decode non-nil codec value into nil %v (%v methods)",
- f.ti.rt, num)
- }
- var rvn reflect.Value
- var useRvn bool
- switch vt {
- case valueTypeMap:
- if f.d.h.MapType == nil {
- var m2 map[interface{}]interface{}
- v = &m2
- } else {
- rvn = reflect.New(f.d.h.MapType).Elem()
- useRvn = true
- }
- case valueTypeArray:
- if f.d.h.SliceType == nil {
- var m2 []interface{}
- v = &m2
- } else {
- rvn = reflect.New(f.d.h.SliceType).Elem()
- useRvn = true
- }
- case valueTypeExt:
- re := v.(*RawExt)
- var bfn func(reflect.Value, []byte) error
- rvn, bfn = f.d.h.getDecodeExtForTag(re.Tag)
- if bfn == nil {
- rvn = reflect.ValueOf(*re)
- } else if fnerr := bfn(rvn, re.Data); fnerr != nil {
- panic(fnerr)
- }
- rv.Set(rvn)
- return
- }
- if decodeFurther {
- if useRvn {
- f.d.decodeValue(rvn)
- } else if v != nil {
- // this v is a pointer, so we need to dereference it when done
- f.d.decode(v)
- rvn = reflect.ValueOf(v).Elem()
- useRvn = true
- }
- }
- if useRvn {
- rv.Set(rvn)
- } else if v != nil {
- rv.Set(reflect.ValueOf(v))
- }
-}
-
-func (f *decFnInfo) kStruct(rv reflect.Value) {
- fti := f.ti
- if currEncodedType := f.dd.currentEncodedType(); currEncodedType == valueTypeMap {
- containerLen := f.dd.readMapLen()
- if containerLen == 0 {
- return
- }
- tisfi := fti.sfi
- for j := 0; j < containerLen; j++ {
- // var rvkencname string
- // ddecode(&rvkencname)
- f.dd.initReadNext()
- rvkencname := f.dd.decodeString()
- // rvksi := ti.getForEncName(rvkencname)
- if k := fti.indexForEncName(rvkencname); k > -1 {
- sfik := tisfi[k]
- if sfik.i != -1 {
- f.d.decodeValue(rv.Field(int(sfik.i)))
- } else {
- f.d.decEmbeddedField(rv, sfik.is)
- }
- // f.d.decodeValue(ti.field(k, rv))
- } else {
- if f.d.h.ErrorIfNoField {
- decErr("No matching struct field found when decoding stream map with key: %v",
- rvkencname)
- } else {
- var nilintf0 interface{}
- f.d.decodeValue(reflect.ValueOf(&nilintf0).Elem())
- }
- }
- }
- } else if currEncodedType == valueTypeArray {
- containerLen := f.dd.readArrayLen()
- if containerLen == 0 {
- return
- }
- for j, si := range fti.sfip {
- if j == containerLen {
- break
- }
- if si.i != -1 {
- f.d.decodeValue(rv.Field(int(si.i)))
- } else {
- f.d.decEmbeddedField(rv, si.is)
- }
- }
- if containerLen > len(fti.sfip) {
- // read remaining values and throw away
- for j := len(fti.sfip); j < containerLen; j++ {
- var nilintf0 interface{}
- f.d.decodeValue(reflect.ValueOf(&nilintf0).Elem())
- }
- }
- } else {
- decErr("Only encoded map or array can be decoded into a struct. (valueType: %x)",
- currEncodedType)
- }
-}
-
-func (f *decFnInfo) kSlice(rv reflect.Value) {
- // A slice can be set from a map or array in stream.
- currEncodedType := f.dd.currentEncodedType()
-
- switch currEncodedType {
- case valueTypeBytes, valueTypeString:
- if f.ti.rtid == uint8SliceTypId || f.ti.rt.Elem().Kind() == reflect.Uint8 {
- if bs2, changed2 := f.dd.decodeBytes(rv.Bytes()); changed2 {
- rv.SetBytes(bs2)
- }
- return
- }
- }
-
- if shortCircuitReflectToFastPath && rv.CanAddr() {
- switch f.ti.rtid {
- case intfSliceTypId:
- f.d.decSliceIntf(rv.Addr().Interface().(*[]interface{}), currEncodedType, f.array)
- return
- case uint64SliceTypId:
- f.d.decSliceUint64(rv.Addr().Interface().(*[]uint64), currEncodedType, f.array)
- return
- case int64SliceTypId:
- f.d.decSliceInt64(rv.Addr().Interface().(*[]int64), currEncodedType, f.array)
- return
- case strSliceTypId:
- f.d.decSliceStr(rv.Addr().Interface().(*[]string), currEncodedType, f.array)
- return
- }
- }
-
- containerLen, containerLenS := decContLens(f.dd, currEncodedType)
-
- // an array can never return a nil slice. so no need to check f.array here.
-
- if rv.IsNil() {
- rv.Set(reflect.MakeSlice(f.ti.rt, containerLenS, containerLenS))
- }
-
- if containerLen == 0 {
- return
- }
-
- if rvcap, rvlen := rv.Len(), rv.Cap(); containerLenS > rvcap {
- if f.array { // !rv.CanSet()
- decErr(msgDecCannotExpandArr, rvcap, containerLenS)
- }
- rvn := reflect.MakeSlice(f.ti.rt, containerLenS, containerLenS)
- if rvlen > 0 {
- reflect.Copy(rvn, rv)
- }
- rv.Set(rvn)
- } else if containerLenS > rvlen {
- rv.SetLen(containerLenS)
- }
-
- for j := 0; j < containerLenS; j++ {
- f.d.decodeValue(rv.Index(j))
- }
-}
-
-func (f *decFnInfo) kArray(rv reflect.Value) {
- // f.d.decodeValue(rv.Slice(0, rv.Len()))
- f.kSlice(rv.Slice(0, rv.Len()))
-}
-
-func (f *decFnInfo) kMap(rv reflect.Value) {
- if shortCircuitReflectToFastPath && rv.CanAddr() {
- switch f.ti.rtid {
- case mapStrIntfTypId:
- f.d.decMapStrIntf(rv.Addr().Interface().(*map[string]interface{}))
- return
- case mapIntfIntfTypId:
- f.d.decMapIntfIntf(rv.Addr().Interface().(*map[interface{}]interface{}))
- return
- case mapInt64IntfTypId:
- f.d.decMapInt64Intf(rv.Addr().Interface().(*map[int64]interface{}))
- return
- case mapUint64IntfTypId:
- f.d.decMapUint64Intf(rv.Addr().Interface().(*map[uint64]interface{}))
- return
- }
- }
-
- containerLen := f.dd.readMapLen()
-
- if rv.IsNil() {
- rv.Set(reflect.MakeMap(f.ti.rt))
- }
-
- if containerLen == 0 {
- return
- }
-
- ktype, vtype := f.ti.rt.Key(), f.ti.rt.Elem()
- ktypeId := reflect.ValueOf(ktype).Pointer()
- for j := 0; j < containerLen; j++ {
- rvk := reflect.New(ktype).Elem()
- f.d.decodeValue(rvk)
-
- // special case if a byte array.
- // if ktype == intfTyp {
- if ktypeId == intfTypId {
- rvk = rvk.Elem()
- if rvk.Type() == uint8SliceTyp {
- rvk = reflect.ValueOf(string(rvk.Bytes()))
- }
- }
- rvv := rv.MapIndex(rvk)
- if !rvv.IsValid() {
- rvv = reflect.New(vtype).Elem()
- }
-
- f.d.decodeValue(rvv)
- rv.SetMapIndex(rvk, rvv)
- }
-}
-
-// ----------------------------------------
-
-type decFn struct {
- i *decFnInfo
- f func(*decFnInfo, reflect.Value)
-}
-
-// A Decoder reads and decodes an object from an input stream in the codec format.
-type Decoder struct {
- r decReader
- d decDriver
- h *BasicHandle
- f map[uintptr]decFn
- x []uintptr
- s []decFn
-}
-
-// NewDecoder returns a Decoder for decoding a stream of bytes from an io.Reader.
-//
-// For efficiency, Users are encouraged to pass in a memory buffered writer
-// (eg bufio.Reader, bytes.Buffer).
-func NewDecoder(r io.Reader, h Handle) *Decoder {
- z := ioDecReader{
- r: r,
- }
- z.br, _ = r.(io.ByteReader)
- return &Decoder{r: &z, d: h.newDecDriver(&z), h: h.getBasicHandle()}
-}
-
-// NewDecoderBytes returns a Decoder which efficiently decodes directly
-// from a byte slice with zero copying.
-func NewDecoderBytes(in []byte, h Handle) *Decoder {
- z := bytesDecReader{
- b: in,
- a: len(in),
- }
- return &Decoder{r: &z, d: h.newDecDriver(&z), h: h.getBasicHandle()}
-}
-
-// Decode decodes the stream from reader and stores the result in the
-// value pointed to by v. v cannot be a nil pointer. v can also be
-// a reflect.Value of a pointer.
-//
-// Note that a pointer to a nil interface is not a nil pointer.
-// If you do not know what type of stream it is, pass in a pointer to a nil interface.
-// We will decode and store a value in that nil interface.
-//
-// Sample usages:
-// // Decoding into a non-nil typed value
-// var f float32
-// err = codec.NewDecoder(r, handle).Decode(&f)
-//
-// // Decoding into nil interface
-// var v interface{}
-// dec := codec.NewDecoder(r, handle)
-// err = dec.Decode(&v)
-//
-// When decoding into a nil interface{}, we will decode into an appropriate value based
-// on the contents of the stream:
-// - Numbers are decoded as float64, int64 or uint64.
-// - Other values are decoded appropriately depending on the type:
-// bool, string, []byte, time.Time, etc
-// - Extensions are decoded as RawExt (if no ext function registered for the tag)
-// Configurations exist on the Handle to override defaults
-// (e.g. for MapType, SliceType and how to decode raw bytes).
-//
-// When decoding into a non-nil interface{} value, the mode of encoding is based on the
-// type of the value. When a value is seen:
-// - If an extension is registered for it, call that extension function
-// - If it implements BinaryUnmarshaler, call its UnmarshalBinary(data []byte) error
-// - Else decode it based on its reflect.Kind
-//
-// There are some special rules when decoding into containers (slice/array/map/struct).
-// Decode will typically use the stream contents to UPDATE the container.
-// - A map can be decoded from a stream map, by updating matching keys.
-// - A slice can be decoded from a stream array,
-// by updating the first n elements, where n is length of the stream.
-// - A slice can be decoded from a stream map, by decoding as if
-// it contains a sequence of key-value pairs.
-// - A struct can be decoded from a stream map, by updating matching fields.
-// - A struct can be decoded from a stream array,
-// by updating fields as they occur in the struct (by index).
-//
-// When decoding a stream map or array with length of 0 into a nil map or slice,
-// we reset the destination map or slice to a zero-length value.
-//
-// However, when decoding a stream nil, we reset the destination container
-// to its "zero" value (e.g. nil for slice/map, etc).
-//
-func (d *Decoder) Decode(v interface{}) (err error) {
- defer panicToErr(&err)
- d.decode(v)
- return
-}
-
-func (d *Decoder) decode(iv interface{}) {
- d.d.initReadNext()
-
- switch v := iv.(type) {
- case nil:
- decErr("Cannot decode into nil.")
-
- case reflect.Value:
- d.chkPtrValue(v)
- d.decodeValue(v.Elem())
-
- case *string:
- *v = d.d.decodeString()
- case *bool:
- *v = d.d.decodeBool()
- case *int:
- *v = int(d.d.decodeInt(intBitsize))
- case *int8:
- *v = int8(d.d.decodeInt(8))
- case *int16:
- *v = int16(d.d.decodeInt(16))
- case *int32:
- *v = int32(d.d.decodeInt(32))
- case *int64:
- *v = d.d.decodeInt(64)
- case *uint:
- *v = uint(d.d.decodeUint(uintBitsize))
- case *uint8:
- *v = uint8(d.d.decodeUint(8))
- case *uint16:
- *v = uint16(d.d.decodeUint(16))
- case *uint32:
- *v = uint32(d.d.decodeUint(32))
- case *uint64:
- *v = d.d.decodeUint(64)
- case *float32:
- *v = float32(d.d.decodeFloat(true))
- case *float64:
- *v = d.d.decodeFloat(false)
- case *[]byte:
- *v, _ = d.d.decodeBytes(*v)
-
- case *[]interface{}:
- d.decSliceIntf(v, valueTypeInvalid, false)
- case *[]uint64:
- d.decSliceUint64(v, valueTypeInvalid, false)
- case *[]int64:
- d.decSliceInt64(v, valueTypeInvalid, false)
- case *[]string:
- d.decSliceStr(v, valueTypeInvalid, false)
- case *map[string]interface{}:
- d.decMapStrIntf(v)
- case *map[interface{}]interface{}:
- d.decMapIntfIntf(v)
- case *map[uint64]interface{}:
- d.decMapUint64Intf(v)
- case *map[int64]interface{}:
- d.decMapInt64Intf(v)
-
- case *interface{}:
- d.decodeValue(reflect.ValueOf(iv).Elem())
-
- default:
- rv := reflect.ValueOf(iv)
- d.chkPtrValue(rv)
- d.decodeValue(rv.Elem())
- }
-}
-
-func (d *Decoder) decodeValue(rv reflect.Value) {
- d.d.initReadNext()
-
- if d.d.tryDecodeAsNil() {
- // If the value in the stream is nil, set the target to its "zero" value (nil for a pointer, if settable)
- if rv.Kind() == reflect.Ptr {
- if !rv.IsNil() {
- rv.Set(reflect.Zero(rv.Type()))
- }
- return
- }
- // for rv.Kind() == reflect.Ptr {
- // rv = rv.Elem()
- // }
- if rv.IsValid() { // rv.CanSet() is unnecessary: always settable here, except when invalid
- rv.Set(reflect.Zero(rv.Type()))
- }
- return
- }
-
- // If the stream does not contain a nil value, we can deref to the base
- // non-pointer value and decode into that.
- for rv.Kind() == reflect.Ptr {
- if rv.IsNil() {
- rv.Set(reflect.New(rv.Type().Elem()))
- }
- rv = rv.Elem()
- }
-
- rt := rv.Type()
- rtid := reflect.ValueOf(rt).Pointer()
-
- // retrieve or register a focused function for this type,
- // to eliminate the need to do the retrieval multiple times
-
- // if d.f == nil && d.s == nil { debugf("---->Creating new dec f map for type: %v\n", rt) }
- var fn decFn
- var ok bool
- if useMapForCodecCache {
- fn, ok = d.f[rtid]
- } else {
- for i, v := range d.x {
- if v == rtid {
- fn, ok = d.s[i], true
- break
- }
- }
- }
- if !ok {
- // debugf("\tCreating new dec fn for type: %v\n", rt)
- fi := decFnInfo{ti: getTypeInfo(rtid, rt), d: d, dd: d.d}
- fn.i = &fi
- // An extension can be registered for any type, regardless of the Kind
- // (e.g. type BitSet int64, type MyStruct { / * unexported fields * / }, type X []int, etc.
- //
- // We can't check if it's an extension byte here first, because the user may have
- // registered a pointer or non-pointer type, meaning we may have to recurse first
- // before matching a mapped type, even though the extension byte is already detected.
- //
- // NOTE: if decoding into a nil interface{}, we return a non-nil
- // value even if the container registers a length of 0.
- if rtid == rawExtTypId {
- fn.f = (*decFnInfo).rawExt
- } else if d.d.isBuiltinType(rtid) {
- fn.f = (*decFnInfo).builtin
- } else if xfTag, xfFn := d.h.getDecodeExt(rtid); xfFn != nil {
- fi.xfTag, fi.xfFn = xfTag, xfFn
- fn.f = (*decFnInfo).ext
- } else if supportBinaryMarshal && fi.ti.unm {
- fn.f = (*decFnInfo).binaryMarshal
- } else {
- switch rk := rt.Kind(); rk {
- case reflect.String:
- fn.f = (*decFnInfo).kString
- case reflect.Bool:
- fn.f = (*decFnInfo).kBool
- case reflect.Int:
- fn.f = (*decFnInfo).kInt
- case reflect.Int64:
- fn.f = (*decFnInfo).kInt64
- case reflect.Int32:
- fn.f = (*decFnInfo).kInt32
- case reflect.Int8:
- fn.f = (*decFnInfo).kInt8
- case reflect.Int16:
- fn.f = (*decFnInfo).kInt16
- case reflect.Float32:
- fn.f = (*decFnInfo).kFloat32
- case reflect.Float64:
- fn.f = (*decFnInfo).kFloat64
- case reflect.Uint8:
- fn.f = (*decFnInfo).kUint8
- case reflect.Uint64:
- fn.f = (*decFnInfo).kUint64
- case reflect.Uint:
- fn.f = (*decFnInfo).kUint
- case reflect.Uint32:
- fn.f = (*decFnInfo).kUint32
- case reflect.Uint16:
- fn.f = (*decFnInfo).kUint16
- // case reflect.Ptr:
- // fn.f = (*decFnInfo).kPtr
- case reflect.Interface:
- fn.f = (*decFnInfo).kInterface
- case reflect.Struct:
- fn.f = (*decFnInfo).kStruct
- case reflect.Slice:
- fn.f = (*decFnInfo).kSlice
- case reflect.Array:
- fi.array = true
- fn.f = (*decFnInfo).kArray
- case reflect.Map:
- fn.f = (*decFnInfo).kMap
- default:
- fn.f = (*decFnInfo).kErr
- }
- }
- if useMapForCodecCache {
- if d.f == nil {
- d.f = make(map[uintptr]decFn, 16)
- }
- d.f[rtid] = fn
- } else {
- d.s = append(d.s, fn)
- d.x = append(d.x, rtid)
- }
- }
-
- fn.f(fn.i, rv)
-
- return
-}
-
-func (d *Decoder) chkPtrValue(rv reflect.Value) {
- // We can only decode into a non-nil pointer
- if rv.Kind() == reflect.Ptr && !rv.IsNil() {
- return
- }
- if !rv.IsValid() {
- decErr("Cannot decode into a zero (ie invalid) reflect.Value")
- }
- if !rv.CanInterface() {
- decErr("Cannot decode into a value without an interface: %v", rv)
- }
- rvi := rv.Interface()
- decErr("Cannot decode into non-pointer or nil pointer. Got: %v, %T, %v",
- rv.Kind(), rvi, rvi)
-}
-
-func (d *Decoder) decEmbeddedField(rv reflect.Value, index []int) {
- // d.decodeValue(rv.FieldByIndex(index))
- // nil pointers may be here; so reproduce FieldByIndex logic + enhancements
- for _, j := range index {
- if rv.Kind() == reflect.Ptr {
- if rv.IsNil() {
- rv.Set(reflect.New(rv.Type().Elem()))
- }
- // If a pointer, it must be a pointer to struct (based on typeInfo contract)
- rv = rv.Elem()
- }
- rv = rv.Field(j)
- }
- d.decodeValue(rv)
-}
-
-// --------------------------------------------------
-
-// short circuit functions for common maps and slices
-
-func (d *Decoder) decSliceIntf(v *[]interface{}, currEncodedType valueType, doNotReset bool) {
- _, containerLenS := decContLens(d.d, currEncodedType)
- s := *v
- if s == nil {
- s = make([]interface{}, containerLenS, containerLenS)
- } else if containerLenS > cap(s) {
- if doNotReset {
- decErr(msgDecCannotExpandArr, cap(s), containerLenS)
- }
- s = make([]interface{}, containerLenS, containerLenS)
- copy(s, *v)
- } else if containerLenS > len(s) {
- s = s[:containerLenS]
- }
- for j := 0; j < containerLenS; j++ {
- d.decode(&s[j])
- }
- *v = s
-}
-
-func (d *Decoder) decSliceInt64(v *[]int64, currEncodedType valueType, doNotReset bool) {
- _, containerLenS := decContLens(d.d, currEncodedType)
- s := *v
- if s == nil {
- s = make([]int64, containerLenS, containerLenS)
- } else if containerLenS > cap(s) {
- if doNotReset {
- decErr(msgDecCannotExpandArr, cap(s), containerLenS)
- }
- s = make([]int64, containerLenS, containerLenS)
- copy(s, *v)
- } else if containerLenS > len(s) {
- s = s[:containerLenS]
- }
- for j := 0; j < containerLenS; j++ {
- // d.decode(&s[j])
- d.d.initReadNext()
- s[j] = d.d.decodeInt(64) // int64 elements: check overflow against 64 bits, not the platform int size
- }
- *v = s
-}
-
-func (d *Decoder) decSliceUint64(v *[]uint64, currEncodedType valueType, doNotReset bool) {
- _, containerLenS := decContLens(d.d, currEncodedType)
- s := *v
- if s == nil {
- s = make([]uint64, containerLenS, containerLenS)
- } else if containerLenS > cap(s) {
- if doNotReset {
- decErr(msgDecCannotExpandArr, cap(s), containerLenS)
- }
- s = make([]uint64, containerLenS, containerLenS)
- copy(s, *v)
- } else if containerLenS > len(s) {
- s = s[:containerLenS]
- }
- for j := 0; j < containerLenS; j++ {
- // d.decode(&s[j])
- d.d.initReadNext()
- s[j] = d.d.decodeUint(64) // uint64 elements: check overflow against 64 bits, not the platform int size
- }
- *v = s
-}
-
-func (d *Decoder) decSliceStr(v *[]string, currEncodedType valueType, doNotReset bool) {
- _, containerLenS := decContLens(d.d, currEncodedType)
- s := *v
- if s == nil {
- s = make([]string, containerLenS, containerLenS)
- } else if containerLenS > cap(s) {
- if doNotReset {
- decErr(msgDecCannotExpandArr, cap(s), containerLenS)
- }
- s = make([]string, containerLenS, containerLenS)
- copy(s, *v)
- } else if containerLenS > len(s) {
- s = s[:containerLenS]
- }
- for j := 0; j < containerLenS; j++ {
- // d.decode(&s[j])
- d.d.initReadNext()
- s[j] = d.d.decodeString()
- }
- *v = s
-}
-
-func (d *Decoder) decMapIntfIntf(v *map[interface{}]interface{}) {
- containerLen := d.d.readMapLen()
- m := *v
- if m == nil {
- m = make(map[interface{}]interface{}, containerLen)
- *v = m
- }
- for j := 0; j < containerLen; j++ {
- var mk interface{}
- d.decode(&mk)
- // special case if a byte array.
- if bv, bok := mk.([]byte); bok {
- mk = string(bv)
- }
- mv := m[mk]
- d.decode(&mv)
- m[mk] = mv
- }
-}
-
-func (d *Decoder) decMapInt64Intf(v *map[int64]interface{}) {
- containerLen := d.d.readMapLen()
- m := *v
- if m == nil {
- m = make(map[int64]interface{}, containerLen)
- *v = m
- }
- for j := 0; j < containerLen; j++ {
- d.d.initReadNext()
- mk := d.d.decodeInt(64) // int64 keys: check overflow against 64 bits
- mv := m[mk]
- d.decode(&mv)
- m[mk] = mv
- }
-}
-
-func (d *Decoder) decMapUint64Intf(v *map[uint64]interface{}) {
- containerLen := d.d.readMapLen()
- m := *v
- if m == nil {
- m = make(map[uint64]interface{}, containerLen)
- *v = m
- }
- for j := 0; j < containerLen; j++ {
- d.d.initReadNext()
- mk := d.d.decodeUint(64) // uint64 keys: check overflow against 64 bits
- mv := m[mk]
- d.decode(&mv)
- m[mk] = mv
- }
-}
-
-func (d *Decoder) decMapStrIntf(v *map[string]interface{}) {
- containerLen := d.d.readMapLen()
- m := *v
- if m == nil {
- m = make(map[string]interface{}, containerLen)
- *v = m
- }
- for j := 0; j < containerLen; j++ {
- d.d.initReadNext()
- mk := d.d.decodeString()
- mv := m[mk]
- d.decode(&mv)
- m[mk] = mv
- }
-}
-
-// ----------------------------------------
-
-func decContLens(dd decDriver, currEncodedType valueType) (containerLen, containerLenS int) {
- if currEncodedType == valueTypeInvalid {
- currEncodedType = dd.currentEncodedType()
- }
- switch currEncodedType {
- case valueTypeArray:
- containerLen = dd.readArrayLen()
- containerLenS = containerLen
- case valueTypeMap:
- containerLen = dd.readMapLen()
- containerLenS = containerLen * 2
- default:
- decErr("Only encoded map or array can be decoded into a slice. (valueType: %0x)",
- currEncodedType)
- }
- return
-}
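-
-// For example, a stream map {"a": 1} decoded into a *[]interface{} yields the
-// 2-element slice ["a", 1]: containerLenS is twice the map length, and keys
-// and values are decoded alternately.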
-
-func decErr(format string, params ...interface{}) {
- doPanic(msgTagDec, format, params...)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/encode.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/encode.go
deleted file mode 100644
index 4914be0c..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/encode.go
+++ /dev/null
@@ -1,1001 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import (
- "io"
- "reflect"
-)
-
-const (
- // Some tagging information for error messages.
- msgTagEnc = "codec.encoder"
- defEncByteBufSize = 1 << 6 // 4:16, 6:64, 8:256, 10:1024
- // maxTimeSecs32 = math.MaxInt32 / 60 / 24 / 366
-)
-
-// AsSymbolFlag defines what should be encoded as symbols.
-type AsSymbolFlag uint8
-
-const (
- // AsSymbolDefault is default.
- // Currently, this means only encode struct field names as symbols.
- // The default is subject to change.
- AsSymbolDefault AsSymbolFlag = iota
-
- // AsSymbolAll means encode anything which could be a symbol as a symbol.
- AsSymbolAll = 0xfe
-
- // AsSymbolNone means do not encode anything as a symbol.
- AsSymbolNone = 1 << iota
-
-// AsSymbolMapStringKeysFlag means encode keys in map[string]XXX as symbols.
- AsSymbolMapStringKeysFlag
-
-// AsSymbolStructFieldNameFlag means encode struct field names as symbols.
- AsSymbolStructFieldNameFlag
-)
-
-// encWriter abstracts writing to a byte array or to an io.Writer.
-type encWriter interface {
- writeUint16(uint16)
- writeUint32(uint32)
- writeUint64(uint64)
- writeb([]byte)
- writestr(string)
- writen1(byte)
- writen2(byte, byte)
- atEndOfEncode()
-}
-
-// encDriver abstracts the actual codec (binc vs msgpack, etc)
-type encDriver interface {
- isBuiltinType(rt uintptr) bool
- encodeBuiltin(rt uintptr, v interface{})
- encodeNil()
- encodeInt(i int64)
- encodeUint(i uint64)
- encodeBool(b bool)
- encodeFloat32(f float32)
- encodeFloat64(f float64)
- encodeExtPreamble(xtag byte, length int)
- encodeArrayPreamble(length int)
- encodeMapPreamble(length int)
- encodeString(c charEncoding, v string)
- encodeSymbol(v string)
- encodeStringBytes(c charEncoding, v []byte)
- //TODO
- //encBignum(f *big.Int)
- //encStringRunes(c charEncoding, v []rune)
-}
-
-type ioEncWriterWriter interface {
- WriteByte(c byte) error
- WriteString(s string) (n int, err error)
- Write(p []byte) (n int, err error)
-}
-
-type ioEncStringWriter interface {
- WriteString(s string) (n int, err error)
-}
-
-// EncodeOptions contains configuration options for encoding.
-type EncodeOptions struct {
- // Encode a struct as an array, and not as a map.
- StructToArray bool
-
- // AsSymbols defines what should be encoded as symbols.
- //
- // Encoding as symbols can reduce the encoded size significantly.
- //
- // However, during encoding, each string to be encoded as a symbol must
- // be checked to see if it has been seen before. Consequently, encoding time
- // will increase when using symbols, because string comparisons have a real cost.
- //
- // Sample values:
- // AsSymbolNone
- // AsSymbolAll
- //   AsSymbolMapStringKeysFlag
- // AsSymbolMapStringKeysFlag | AsSymbolStructFieldNameFlag
- AsSymbols AsSymbolFlag
-}
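-
-// Illustrative sketch, assuming EncodeOptions is reachable via an embedding
-// handle (e.g. MsgpackHandle embedding BasicHandle):
-//
-//   var h MsgpackHandle
-//   h.AsSymbols = AsSymbolMapStringKeysFlag | AsSymbolStructFieldNameFlag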
-
-// ---------------------------------------------
-
-type simpleIoEncWriterWriter struct {
- w io.Writer
- bw io.ByteWriter
- sw ioEncStringWriter
-}
-
-func (o *simpleIoEncWriterWriter) WriteByte(c byte) (err error) {
- if o.bw != nil {
- return o.bw.WriteByte(c)
- }
- _, err = o.w.Write([]byte{c})
- return
-}
-
-func (o *simpleIoEncWriterWriter) WriteString(s string) (n int, err error) {
- if o.sw != nil {
- return o.sw.WriteString(s)
- }
- return o.w.Write([]byte(s))
-}
-
-func (o *simpleIoEncWriterWriter) Write(p []byte) (n int, err error) {
- return o.w.Write(p)
-}
-
-// ----------------------------------------
-
-// ioEncWriter implements encWriter and can write to an io.Writer implementation
-type ioEncWriter struct {
- w ioEncWriterWriter
- x [8]byte // temp byte array re-used internally for efficiency
-}
-
-func (z *ioEncWriter) writeUint16(v uint16) {
- bigen.PutUint16(z.x[:2], v)
- z.writeb(z.x[:2])
-}
-
-func (z *ioEncWriter) writeUint32(v uint32) {
- bigen.PutUint32(z.x[:4], v)
- z.writeb(z.x[:4])
-}
-
-func (z *ioEncWriter) writeUint64(v uint64) {
- bigen.PutUint64(z.x[:8], v)
- z.writeb(z.x[:8])
-}
-
-func (z *ioEncWriter) writeb(bs []byte) {
- if len(bs) == 0 {
- return
- }
- n, err := z.w.Write(bs)
- if err != nil {
- panic(err)
- }
- if n != len(bs) {
- encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(bs), n)
- }
-}
-
-func (z *ioEncWriter) writestr(s string) {
- n, err := z.w.WriteString(s)
- if err != nil {
- panic(err)
- }
- if n != len(s) {
- encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(s), n)
- }
-}
-
-func (z *ioEncWriter) writen1(b byte) {
- if err := z.w.WriteByte(b); err != nil {
- panic(err)
- }
-}
-
-func (z *ioEncWriter) writen2(b1 byte, b2 byte) {
- z.writen1(b1)
- z.writen1(b2)
-}
-
-func (z *ioEncWriter) atEndOfEncode() {}
-
-// ----------------------------------------
-
-// bytesEncWriter implements encWriter and can write to a byte slice.
-// It is used by the Marshal function.
-type bytesEncWriter struct {
- b []byte
- c int // cursor
- out *[]byte // write out on atEndOfEncode
-}
-
-func (z *bytesEncWriter) writeUint16(v uint16) {
- c := z.grow(2)
- z.b[c] = byte(v >> 8)
- z.b[c+1] = byte(v)
-}
-
-func (z *bytesEncWriter) writeUint32(v uint32) {
- c := z.grow(4)
- z.b[c] = byte(v >> 24)
- z.b[c+1] = byte(v >> 16)
- z.b[c+2] = byte(v >> 8)
- z.b[c+3] = byte(v)
-}
-
-func (z *bytesEncWriter) writeUint64(v uint64) {
- c := z.grow(8)
- z.b[c] = byte(v >> 56)
- z.b[c+1] = byte(v >> 48)
- z.b[c+2] = byte(v >> 40)
- z.b[c+3] = byte(v >> 32)
- z.b[c+4] = byte(v >> 24)
- z.b[c+5] = byte(v >> 16)
- z.b[c+6] = byte(v >> 8)
- z.b[c+7] = byte(v)
-}
-
-func (z *bytesEncWriter) writeb(s []byte) {
- if len(s) == 0 {
- return
- }
- c := z.grow(len(s))
- copy(z.b[c:], s)
-}
-
-func (z *bytesEncWriter) writestr(s string) {
- c := z.grow(len(s))
- copy(z.b[c:], s)
-}
-
-func (z *bytesEncWriter) writen1(b1 byte) {
- c := z.grow(1)
- z.b[c] = b1
-}
-
-func (z *bytesEncWriter) writen2(b1 byte, b2 byte) {
- c := z.grow(2)
- z.b[c] = b1
- z.b[c+1] = b2
-}
-
-func (z *bytesEncWriter) atEndOfEncode() {
- *(z.out) = z.b[:z.c]
-}
-
-func (z *bytesEncWriter) grow(n int) (oldcursor int) {
- oldcursor = z.c
- z.c = oldcursor + n
- if z.c > cap(z.b) {
- // Tried using appendslice logic: (if cap < 1024, *2, else *1.25).
- // However, it was too expensive, causing too many iterations of copy.
- // Using bytes.Buffer model was much better (2*cap + n)
- bs := make([]byte, 2*cap(z.b)+n)
- copy(bs, z.b[:oldcursor])
- z.b = bs
- } else if z.c > len(z.b) {
- z.b = z.b[:cap(z.b)]
- }
- return
-}
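-
-// Worked example of the growth rule above: with cap(z.b) == 64 and a 100-byte
-// write, z.c moves from 64 to 164 and the new backing array is
-// 2*64 + 100 = 228 bytes, so repeated writes amortize the copies.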
-
-// ---------------------------------------------
-
-type encFnInfo struct {
- ti *typeInfo
- e *Encoder
- ee encDriver
- xfFn func(reflect.Value) ([]byte, error)
- xfTag byte
-}
-
-func (f *encFnInfo) builtin(rv reflect.Value) {
- f.ee.encodeBuiltin(f.ti.rtid, rv.Interface())
-}
-
-func (f *encFnInfo) rawExt(rv reflect.Value) {
- f.e.encRawExt(rv.Interface().(RawExt))
-}
-
-func (f *encFnInfo) ext(rv reflect.Value) {
- bs, fnerr := f.xfFn(rv)
- if fnerr != nil {
- panic(fnerr)
- }
- if bs == nil {
- f.ee.encodeNil()
- return
- }
- if f.e.hh.writeExt() {
- f.ee.encodeExtPreamble(f.xfTag, len(bs))
- f.e.w.writeb(bs)
- } else {
- f.ee.encodeStringBytes(c_RAW, bs)
- }
-}
-
-func (f *encFnInfo) binaryMarshal(rv reflect.Value) {
- var bm binaryMarshaler
- if f.ti.mIndir == 0 {
- bm = rv.Interface().(binaryMarshaler)
- } else if f.ti.mIndir == -1 {
- bm = rv.Addr().Interface().(binaryMarshaler)
- } else {
- for j, k := int8(0), f.ti.mIndir; j < k; j++ {
- if rv.IsNil() {
- f.ee.encodeNil()
- return
- }
- rv = rv.Elem()
- }
- bm = rv.Interface().(binaryMarshaler)
- }
- // debugf(">>>> binaryMarshaler: %T", rv.Interface())
- bs, fnerr := bm.MarshalBinary()
- if fnerr != nil {
- panic(fnerr)
- }
- if bs == nil {
- f.ee.encodeNil()
- } else {
- f.ee.encodeStringBytes(c_RAW, bs)
- }
-}
-
-func (f *encFnInfo) kBool(rv reflect.Value) {
- f.ee.encodeBool(rv.Bool())
-}
-
-func (f *encFnInfo) kString(rv reflect.Value) {
- f.ee.encodeString(c_UTF8, rv.String())
-}
-
-func (f *encFnInfo) kFloat64(rv reflect.Value) {
- f.ee.encodeFloat64(rv.Float())
-}
-
-func (f *encFnInfo) kFloat32(rv reflect.Value) {
- f.ee.encodeFloat32(float32(rv.Float()))
-}
-
-func (f *encFnInfo) kInt(rv reflect.Value) {
- f.ee.encodeInt(rv.Int())
-}
-
-func (f *encFnInfo) kUint(rv reflect.Value) {
- f.ee.encodeUint(rv.Uint())
-}
-
-func (f *encFnInfo) kInvalid(rv reflect.Value) {
- f.ee.encodeNil()
-}
-
-func (f *encFnInfo) kErr(rv reflect.Value) {
- encErr("Unsupported kind: %s, for: %#v", rv.Kind(), rv)
-}
-
-func (f *encFnInfo) kSlice(rv reflect.Value) {
- if rv.IsNil() {
- f.ee.encodeNil()
- return
- }
-
- if shortCircuitReflectToFastPath {
- switch f.ti.rtid {
- case intfSliceTypId:
- f.e.encSliceIntf(rv.Interface().([]interface{}))
- return
- case strSliceTypId:
- f.e.encSliceStr(rv.Interface().([]string))
- return
- case uint64SliceTypId:
- f.e.encSliceUint64(rv.Interface().([]uint64))
- return
- case int64SliceTypId:
- f.e.encSliceInt64(rv.Interface().([]int64))
- return
- }
- }
-
- // If we get here, no extension function was defined for this type,
- // so it's okay to treat it as []byte.
- if f.ti.rtid == uint8SliceTypId || f.ti.rt.Elem().Kind() == reflect.Uint8 {
- f.ee.encodeStringBytes(c_RAW, rv.Bytes())
- return
- }
-
- l := rv.Len()
- if f.ti.mbs {
- if l%2 == 1 {
- encErr("mapBySlice: invalid length (must be divisible by 2): %v", l)
- }
- f.ee.encodeMapPreamble(l / 2)
- } else {
- f.ee.encodeArrayPreamble(l)
- }
- if l == 0 {
- return
- }
- for j := 0; j < l; j++ {
- // TODO: Consider perf implication of encoding odd index values as symbols if type is string
- f.e.encodeValue(rv.Index(j))
- }
-}
-
-func (f *encFnInfo) kArray(rv reflect.Value) {
- // We cannot share kSlice method, because the array may be non-addressable.
- // E.g. type struct S{B [2]byte}; Encode(S{}) will bomb on "panic: slice of unaddressable array".
- // So we have to duplicate the functionality here.
- // f.e.encodeValue(rv.Slice(0, rv.Len()))
- // f.kSlice(rv.Slice(0, rv.Len()))
-
- l := rv.Len()
- // Handle an array of bytes specially (in line with what is done for slices)
- if f.ti.rt.Elem().Kind() == reflect.Uint8 {
- if l == 0 {
- f.ee.encodeStringBytes(c_RAW, nil)
- return
- }
- var bs []byte
- if rv.CanAddr() {
- bs = rv.Slice(0, l).Bytes()
- } else {
- bs = make([]byte, l)
- for i := 0; i < l; i++ {
- bs[i] = byte(rv.Index(i).Uint())
- }
- }
- f.ee.encodeStringBytes(c_RAW, bs)
- return
- }
-
- if f.ti.mbs {
- if l%2 == 1 {
- encErr("mapBySlice: invalid length (must be divisible by 2): %v", l)
- }
- f.ee.encodeMapPreamble(l / 2)
- } else {
- f.ee.encodeArrayPreamble(l)
- }
- if l == 0 {
- return
- }
- for j := 0; j < l; j++ {
- // TODO: Consider perf implication of encoding odd index values as symbols if type is string
- f.e.encodeValue(rv.Index(j))
- }
-}
-
-func (f *encFnInfo) kStruct(rv reflect.Value) {
- fti := f.ti
- newlen := len(fti.sfi)
- rvals := make([]reflect.Value, newlen)
- var encnames []string
- e := f.e
- tisfi := fti.sfip
- toMap := !(fti.toArray || e.h.StructToArray)
- // if toMap, use the sorted array. If toArray, use unsorted array (to match sequence in struct)
- if toMap {
- tisfi = fti.sfi
- encnames = make([]string, newlen)
- }
- newlen = 0
- for _, si := range tisfi {
- if si.i != -1 {
- rvals[newlen] = rv.Field(int(si.i))
- } else {
- rvals[newlen] = rv.FieldByIndex(si.is)
- }
- if toMap {
- if si.omitEmpty && isEmptyValue(rvals[newlen]) {
- continue
- }
- encnames[newlen] = si.encName
- } else {
- if si.omitEmpty && isEmptyValue(rvals[newlen]) {
- rvals[newlen] = reflect.Value{} //encode as nil
- }
- }
- newlen++
- }
-
- // debugf(">>>> kStruct: newlen: %v", newlen)
- if toMap {
- ee := f.ee // don't dereference every time
- ee.encodeMapPreamble(newlen)
- // asSymbols := e.h.AsSymbols&AsSymbolStructFieldNameFlag != 0
- asSymbols := e.h.AsSymbols == AsSymbolDefault || e.h.AsSymbols&AsSymbolStructFieldNameFlag != 0
- for j := 0; j < newlen; j++ {
- if asSymbols {
- ee.encodeSymbol(encnames[j])
- } else {
- ee.encodeString(c_UTF8, encnames[j])
- }
- e.encodeValue(rvals[j])
- }
- } else {
- f.ee.encodeArrayPreamble(newlen)
- for j := 0; j < newlen; j++ {
- e.encodeValue(rvals[j])
- }
- }
-}
-
-// func (f *encFnInfo) kPtr(rv reflect.Value) {
-// debugf(">>>>>>> ??? encode kPtr called - shouldn't get called")
-// if rv.IsNil() {
-// f.ee.encodeNil()
-// return
-// }
-// f.e.encodeValue(rv.Elem())
-// }
-
-func (f *encFnInfo) kInterface(rv reflect.Value) {
- if rv.IsNil() {
- f.ee.encodeNil()
- return
- }
- f.e.encodeValue(rv.Elem())
-}
-
-func (f *encFnInfo) kMap(rv reflect.Value) {
- if rv.IsNil() {
- f.ee.encodeNil()
- return
- }
-
- if shortCircuitReflectToFastPath {
- switch f.ti.rtid {
- case mapIntfIntfTypId:
- f.e.encMapIntfIntf(rv.Interface().(map[interface{}]interface{}))
- return
- case mapStrIntfTypId:
- f.e.encMapStrIntf(rv.Interface().(map[string]interface{}))
- return
- case mapStrStrTypId:
- f.e.encMapStrStr(rv.Interface().(map[string]string))
- return
- case mapInt64IntfTypId:
- f.e.encMapInt64Intf(rv.Interface().(map[int64]interface{}))
- return
- case mapUint64IntfTypId:
- f.e.encMapUint64Intf(rv.Interface().(map[uint64]interface{}))
- return
- }
- }
-
- l := rv.Len()
- f.ee.encodeMapPreamble(l)
- if l == 0 {
- return
- }
- // keyTypeIsString := f.ti.rt.Key().Kind() == reflect.String
- keyTypeIsString := f.ti.rt.Key() == stringTyp
- var asSymbols bool
- if keyTypeIsString {
- asSymbols = f.e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0
- }
- mks := rv.MapKeys()
- // for j, lmks := 0, len(mks); j < lmks; j++ {
- for j := range mks {
- if keyTypeIsString {
- if asSymbols {
- f.ee.encodeSymbol(mks[j].String())
- } else {
- f.ee.encodeString(c_UTF8, mks[j].String())
- }
- } else {
- f.e.encodeValue(mks[j])
- }
- f.e.encodeValue(rv.MapIndex(mks[j]))
- }
-}
-
-// --------------------------------------------------
-
-// encFn encapsulates the captured variables and the encode function.
-// This way, we do some calculations only once, and pass the work to the
-// code block that should be called (encapsulated in a function)
-// instead of executing the checks every time.
-type encFn struct {
- i *encFnInfo
- f func(*encFnInfo, reflect.Value)
-}
-
-// --------------------------------------------------
-
-// An Encoder writes an object to an output stream in the codec format.
-type Encoder struct {
- w encWriter
- e encDriver
- h *BasicHandle
- hh Handle
- f map[uintptr]encFn
- x []uintptr
- s []encFn
-}
-
-// NewEncoder returns an Encoder for encoding into an io.Writer.
-//
-// For efficiency, users are encouraged to pass in a memory-buffered writer
-// (e.g. bufio.Writer, bytes.Buffer).
-func NewEncoder(w io.Writer, h Handle) *Encoder {
- ww, ok := w.(ioEncWriterWriter)
- if !ok {
- sww := simpleIoEncWriterWriter{w: w}
- sww.bw, _ = w.(io.ByteWriter)
- sww.sw, _ = w.(ioEncStringWriter)
- ww = &sww
- //ww = bufio.NewWriterSize(w, defEncByteBufSize)
- }
- z := ioEncWriter{
- w: ww,
- }
- return &Encoder{w: &z, hh: h, h: h.getBasicHandle(), e: h.newEncDriver(&z)}
-}
-
-// NewEncoderBytes returns an encoder for encoding directly and efficiently
-// into a byte slice, using zero-copying to temporary slices.
-//
-// It will potentially replace the output byte slice pointed to.
-// After encoding, the out parameter contains the encoded contents.
-func NewEncoderBytes(out *[]byte, h Handle) *Encoder {
- in := *out
- if in == nil {
- in = make([]byte, defEncByteBufSize)
- }
- z := bytesEncWriter{
- b: in,
- out: out,
- }
- return &Encoder{w: &z, hh: h, h: h.getBasicHandle(), e: h.newEncDriver(&z)}
-}
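-
-// Illustrative usage sketch, assuming a MsgpackHandle:
-//
-//   var h MsgpackHandle
-//   var buf []byte
-//   err := NewEncoderBytes(&buf, &h).Encode(map[string]int{"n": 1})
-//   // buf now holds the msgpack encoding of the map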
-
-// Encode writes an object into a stream in the codec format.
-//
-// Encoding can be configured via the "codec" struct tag for the fields.
-//
-// The "codec" key in struct field's tag value is the key name,
-// followed by an optional comma and options.
-//
-// To set an option on all fields (e.g. omitempty on all fields), you
-// can create a field called _struct, and set flags on it.
-//
-// Struct values "usually" encode as maps. Each exported struct field is encoded unless:
-// - the field's codec tag is "-", OR
-// - the field is empty and its codec tag specifies the "omitempty" option.
-//
-// When encoding as a map, the first string in the tag (before the comma)
-// is the map key string to use when encoding.
-//
-// However, struct values may encode as arrays. This happens when:
-// - StructToArray Encode option is set, OR
-// - the codec tag on the _struct field sets the "toarray" option
-//
-// Values with types that implement MapBySlice are encoded as stream maps.
-//
-// The empty values (for omitempty option) are false, 0, any nil pointer
-// or interface value, and any array, slice, map, or string of length zero.
-//
-// Anonymous fields are encoded inline if no struct tag is present.
-// Else they are encoded as regular fields.
-//
-// Examples:
-//
-// type MyStruct struct {
-// _struct bool `codec:",omitempty"` //set omitempty for every field
-// Field1 string `codec:"-"` //skip this field
-// Field2 int `codec:"myName"` //Use key "myName" in encode stream
-// Field3 int32 `codec:",omitempty"` //use key "Field3". Omit if empty.
-// Field4 bool `codec:"f4,omitempty"` //use key "f4". Omit if empty.
-// ...
-// }
-//
-// type MyStruct struct {
-// _struct bool `codec:",omitempty,toarray"` //set omitempty for every field
-// //and encode struct as an array
-// }
-//
-// The mode of encoding is based on the type of the value. When a value is seen:
-// - If an extension is registered for it, call that extension function
-// - If it implements BinaryMarshaler, call its MarshalBinary() (data []byte, err error)
-// - Else encode it based on its reflect.Kind
-//
-// Note that struct field names and keys in map[string]XXX will be treated as symbols.
-// Some formats support symbols (e.g. binc) and will properly encode the string
-// only once in the stream, and use a tag to refer to it thereafter.
-func (e *Encoder) Encode(v interface{}) (err error) {
- defer panicToErr(&err)
- e.encode(v)
- e.w.atEndOfEncode()
- return
-}
-
-func (e *Encoder) encode(iv interface{}) {
- switch v := iv.(type) {
- case nil:
- e.e.encodeNil()
-
- case reflect.Value:
- e.encodeValue(v)
-
- case string:
- e.e.encodeString(c_UTF8, v)
- case bool:
- e.e.encodeBool(v)
- case int:
- e.e.encodeInt(int64(v))
- case int8:
- e.e.encodeInt(int64(v))
- case int16:
- e.e.encodeInt(int64(v))
- case int32:
- e.e.encodeInt(int64(v))
- case int64:
- e.e.encodeInt(v)
- case uint:
- e.e.encodeUint(uint64(v))
- case uint8:
- e.e.encodeUint(uint64(v))
- case uint16:
- e.e.encodeUint(uint64(v))
- case uint32:
- e.e.encodeUint(uint64(v))
- case uint64:
- e.e.encodeUint(v)
- case float32:
- e.e.encodeFloat32(v)
- case float64:
- e.e.encodeFloat64(v)
-
- case []interface{}:
- e.encSliceIntf(v)
- case []string:
- e.encSliceStr(v)
- case []int64:
- e.encSliceInt64(v)
- case []uint64:
- e.encSliceUint64(v)
- case []uint8:
- e.e.encodeStringBytes(c_RAW, v)
-
- case map[interface{}]interface{}:
- e.encMapIntfIntf(v)
- case map[string]interface{}:
- e.encMapStrIntf(v)
- case map[string]string:
- e.encMapStrStr(v)
- case map[int64]interface{}:
- e.encMapInt64Intf(v)
- case map[uint64]interface{}:
- e.encMapUint64Intf(v)
-
- case *string:
- e.e.encodeString(c_UTF8, *v)
- case *bool:
- e.e.encodeBool(*v)
- case *int:
- e.e.encodeInt(int64(*v))
- case *int8:
- e.e.encodeInt(int64(*v))
- case *int16:
- e.e.encodeInt(int64(*v))
- case *int32:
- e.e.encodeInt(int64(*v))
- case *int64:
- e.e.encodeInt(*v)
- case *uint:
- e.e.encodeUint(uint64(*v))
- case *uint8:
- e.e.encodeUint(uint64(*v))
- case *uint16:
- e.e.encodeUint(uint64(*v))
- case *uint32:
- e.e.encodeUint(uint64(*v))
- case *uint64:
- e.e.encodeUint(*v)
- case *float32:
- e.e.encodeFloat32(*v)
- case *float64:
- e.e.encodeFloat64(*v)
-
- case *[]interface{}:
- e.encSliceIntf(*v)
- case *[]string:
- e.encSliceStr(*v)
- case *[]int64:
- e.encSliceInt64(*v)
- case *[]uint64:
- e.encSliceUint64(*v)
- case *[]uint8:
- e.e.encodeStringBytes(c_RAW, *v)
-
- case *map[interface{}]interface{}:
- e.encMapIntfIntf(*v)
- case *map[string]interface{}:
- e.encMapStrIntf(*v)
- case *map[string]string:
- e.encMapStrStr(*v)
- case *map[int64]interface{}:
- e.encMapInt64Intf(*v)
- case *map[uint64]interface{}:
- e.encMapUint64Intf(*v)
-
- default:
- e.encodeValue(reflect.ValueOf(iv))
- }
-}
-
-func (e *Encoder) encodeValue(rv reflect.Value) {
- for rv.Kind() == reflect.Ptr {
- if rv.IsNil() {
- e.e.encodeNil()
- return
- }
- rv = rv.Elem()
- }
-
- rt := rv.Type()
- rtid := reflect.ValueOf(rt).Pointer()
-
- // if e.f == nil && e.s == nil { debugf("---->Creating new enc f map for type: %v\n", rt) }
- var fn encFn
- var ok bool
- if useMapForCodecCache {
- fn, ok = e.f[rtid]
- } else {
- for i, v := range e.x {
- if v == rtid {
- fn, ok = e.s[i], true
- break
- }
- }
- }
- if !ok {
- // debugf("\tCreating new enc fn for type: %v\n", rt)
- fi := encFnInfo{ti: getTypeInfo(rtid, rt), e: e, ee: e.e}
- fn.i = &fi
- if rtid == rawExtTypId {
- fn.f = (*encFnInfo).rawExt
- } else if e.e.isBuiltinType(rtid) {
- fn.f = (*encFnInfo).builtin
- } else if xfTag, xfFn := e.h.getEncodeExt(rtid); xfFn != nil {
- fi.xfTag, fi.xfFn = xfTag, xfFn
- fn.f = (*encFnInfo).ext
- } else if supportBinaryMarshal && fi.ti.m {
- fn.f = (*encFnInfo).binaryMarshal
- } else {
- switch rk := rt.Kind(); rk {
- case reflect.Bool:
- fn.f = (*encFnInfo).kBool
- case reflect.String:
- fn.f = (*encFnInfo).kString
- case reflect.Float64:
- fn.f = (*encFnInfo).kFloat64
- case reflect.Float32:
- fn.f = (*encFnInfo).kFloat32
- case reflect.Int, reflect.Int8, reflect.Int64, reflect.Int32, reflect.Int16:
- fn.f = (*encFnInfo).kInt
- case reflect.Uint8, reflect.Uint64, reflect.Uint, reflect.Uint32, reflect.Uint16:
- fn.f = (*encFnInfo).kUint
- case reflect.Invalid:
- fn.f = (*encFnInfo).kInvalid
- case reflect.Slice:
- fn.f = (*encFnInfo).kSlice
- case reflect.Array:
- fn.f = (*encFnInfo).kArray
- case reflect.Struct:
- fn.f = (*encFnInfo).kStruct
- // case reflect.Ptr:
- // fn.f = (*encFnInfo).kPtr
- case reflect.Interface:
- fn.f = (*encFnInfo).kInterface
- case reflect.Map:
- fn.f = (*encFnInfo).kMap
- default:
- fn.f = (*encFnInfo).kErr
- }
- }
- if useMapForCodecCache {
- if e.f == nil {
- e.f = make(map[uintptr]encFn, 16)
- }
- e.f[rtid] = fn
- } else {
- e.s = append(e.s, fn)
- e.x = append(e.x, rtid)
- }
- }
-
- fn.f(fn.i, rv)
-}
-
-func (e *Encoder) encRawExt(re RawExt) {
- if re.Data == nil {
- e.e.encodeNil()
- return
- }
- if e.hh.writeExt() {
- e.e.encodeExtPreamble(re.Tag, len(re.Data))
- e.w.writeb(re.Data)
- } else {
- e.e.encodeStringBytes(c_RAW, re.Data)
- }
-}
-
-// ---------------------------------------------
-// short circuit functions for common maps and slices
-
-func (e *Encoder) encSliceIntf(v []interface{}) {
- e.e.encodeArrayPreamble(len(v))
- for _, v2 := range v {
- e.encode(v2)
- }
-}
-
-func (e *Encoder) encSliceStr(v []string) {
- e.e.encodeArrayPreamble(len(v))
- for _, v2 := range v {
- e.e.encodeString(c_UTF8, v2)
- }
-}
-
-func (e *Encoder) encSliceInt64(v []int64) {
- e.e.encodeArrayPreamble(len(v))
- for _, v2 := range v {
- e.e.encodeInt(v2)
- }
-}
-
-func (e *Encoder) encSliceUint64(v []uint64) {
- e.e.encodeArrayPreamble(len(v))
- for _, v2 := range v {
- e.e.encodeUint(v2)
- }
-}
-
-func (e *Encoder) encMapStrStr(v map[string]string) {
- e.e.encodeMapPreamble(len(v))
- asSymbols := e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0
- for k2, v2 := range v {
- if asSymbols {
- e.e.encodeSymbol(k2)
- } else {
- e.e.encodeString(c_UTF8, k2)
- }
- e.e.encodeString(c_UTF8, v2)
- }
-}
-
-func (e *Encoder) encMapStrIntf(v map[string]interface{}) {
- e.e.encodeMapPreamble(len(v))
- asSymbols := e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0
- for k2, v2 := range v {
- if asSymbols {
- e.e.encodeSymbol(k2)
- } else {
- e.e.encodeString(c_UTF8, k2)
- }
- e.encode(v2)
- }
-}
-
-func (e *Encoder) encMapInt64Intf(v map[int64]interface{}) {
- e.e.encodeMapPreamble(len(v))
- for k2, v2 := range v {
- e.e.encodeInt(k2)
- e.encode(v2)
- }
-}
-
-func (e *Encoder) encMapUint64Intf(v map[uint64]interface{}) {
- e.e.encodeMapPreamble(len(v))
- for k2, v2 := range v {
- e.e.encodeUint(uint64(k2))
- e.encode(v2)
- }
-}
-
-func (e *Encoder) encMapIntfIntf(v map[interface{}]interface{}) {
- e.e.encodeMapPreamble(len(v))
- for k2, v2 := range v {
- e.encode(k2)
- e.encode(v2)
- }
-}
-
-// ----------------------------------------
-
-func encErr(format string, params ...interface{}) {
- doPanic(msgTagEnc, format, params...)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper.go
deleted file mode 100644
index e6dc0563..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper.go
+++ /dev/null
@@ -1,589 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-// Contains code shared by both encode and decode.
-
-import (
- "encoding/binary"
- "fmt"
- "math"
- "reflect"
- "sort"
- "strings"
- "sync"
- "time"
- "unicode"
- "unicode/utf8"
-)
-
-const (
- structTagName = "codec"
-
- // Support
- // encoding.BinaryMarshaler: MarshalBinary() (data []byte, err error)
- // encoding.BinaryUnmarshaler: UnmarshalBinary(data []byte) error
- // This constant flag will enable or disable it.
- supportBinaryMarshal = true
-
- // Each Encoder or Decoder uses a cache of functions based on conditionals,
- // so that the conditionals are not run every time.
- //
- // Either a map or a slice is used to keep track of the functions.
- // The map is more natural, but has a higher cost than a slice/array.
- // This flag (useMapForCodecCache) controls which is used.
- useMapForCodecCache = false
-
- // For some common container types, we can short-circuit an elaborate
- // reflection dance and call encode/decode directly.
- // The currently supported types are:
- // - slices of strings, ids (int64, uint64), or interfaces.
- // - maps of str->str, str->intf, id(int64,uint64)->intf, intf->intf
- shortCircuitReflectToFastPath = true
-
- // for debugging, set this to false, to catch panic traces.
- // Note that this will always cause rpc tests to fail, since they need io.EOF sent via panic.
- recoverPanicToErr = true
-)
-
-type charEncoding uint8
-
-const (
- c_RAW charEncoding = iota
- c_UTF8
- c_UTF16LE
- c_UTF16BE
- c_UTF32LE
- c_UTF32BE
-)
-
-// valueType is the stream type
-type valueType uint8
-
-const (
- valueTypeUnset valueType = iota
- valueTypeNil
- valueTypeInt
- valueTypeUint
- valueTypeFloat
- valueTypeBool
- valueTypeString
- valueTypeSymbol
- valueTypeBytes
- valueTypeMap
- valueTypeArray
- valueTypeTimestamp
- valueTypeExt
-
- valueTypeInvalid = 0xff
-)
-
-var (
- bigen = binary.BigEndian
- structInfoFieldName = "_struct"
-
- cachedTypeInfo = make(map[uintptr]*typeInfo, 4)
- cachedTypeInfoMutex sync.RWMutex
-
- intfSliceTyp = reflect.TypeOf([]interface{}(nil))
- intfTyp = intfSliceTyp.Elem()
-
- strSliceTyp = reflect.TypeOf([]string(nil))
- boolSliceTyp = reflect.TypeOf([]bool(nil))
- uintSliceTyp = reflect.TypeOf([]uint(nil))
- uint8SliceTyp = reflect.TypeOf([]uint8(nil))
- uint16SliceTyp = reflect.TypeOf([]uint16(nil))
- uint32SliceTyp = reflect.TypeOf([]uint32(nil))
- uint64SliceTyp = reflect.TypeOf([]uint64(nil))
- intSliceTyp = reflect.TypeOf([]int(nil))
- int8SliceTyp = reflect.TypeOf([]int8(nil))
- int16SliceTyp = reflect.TypeOf([]int16(nil))
- int32SliceTyp = reflect.TypeOf([]int32(nil))
- int64SliceTyp = reflect.TypeOf([]int64(nil))
- float32SliceTyp = reflect.TypeOf([]float32(nil))
- float64SliceTyp = reflect.TypeOf([]float64(nil))
-
- mapIntfIntfTyp = reflect.TypeOf(map[interface{}]interface{}(nil))
- mapStrIntfTyp = reflect.TypeOf(map[string]interface{}(nil))
- mapStrStrTyp = reflect.TypeOf(map[string]string(nil))
-
- mapIntIntfTyp = reflect.TypeOf(map[int]interface{}(nil))
- mapInt64IntfTyp = reflect.TypeOf(map[int64]interface{}(nil))
- mapUintIntfTyp = reflect.TypeOf(map[uint]interface{}(nil))
- mapUint64IntfTyp = reflect.TypeOf(map[uint64]interface{}(nil))
-
- stringTyp = reflect.TypeOf("")
- timeTyp = reflect.TypeOf(time.Time{})
- rawExtTyp = reflect.TypeOf(RawExt{})
-
- mapBySliceTyp = reflect.TypeOf((*MapBySlice)(nil)).Elem()
- binaryMarshalerTyp = reflect.TypeOf((*binaryMarshaler)(nil)).Elem()
- binaryUnmarshalerTyp = reflect.TypeOf((*binaryUnmarshaler)(nil)).Elem()
-
- rawExtTypId = reflect.ValueOf(rawExtTyp).Pointer()
- intfTypId = reflect.ValueOf(intfTyp).Pointer()
- timeTypId = reflect.ValueOf(timeTyp).Pointer()
-
- intfSliceTypId = reflect.ValueOf(intfSliceTyp).Pointer()
- strSliceTypId = reflect.ValueOf(strSliceTyp).Pointer()
-
- boolSliceTypId = reflect.ValueOf(boolSliceTyp).Pointer()
- uintSliceTypId = reflect.ValueOf(uintSliceTyp).Pointer()
- uint8SliceTypId = reflect.ValueOf(uint8SliceTyp).Pointer()
- uint16SliceTypId = reflect.ValueOf(uint16SliceTyp).Pointer()
- uint32SliceTypId = reflect.ValueOf(uint32SliceTyp).Pointer()
- uint64SliceTypId = reflect.ValueOf(uint64SliceTyp).Pointer()
- intSliceTypId = reflect.ValueOf(intSliceTyp).Pointer()
- int8SliceTypId = reflect.ValueOf(int8SliceTyp).Pointer()
- int16SliceTypId = reflect.ValueOf(int16SliceTyp).Pointer()
- int32SliceTypId = reflect.ValueOf(int32SliceTyp).Pointer()
- int64SliceTypId = reflect.ValueOf(int64SliceTyp).Pointer()
- float32SliceTypId = reflect.ValueOf(float32SliceTyp).Pointer()
- float64SliceTypId = reflect.ValueOf(float64SliceTyp).Pointer()
-
- mapStrStrTypId = reflect.ValueOf(mapStrStrTyp).Pointer()
- mapIntfIntfTypId = reflect.ValueOf(mapIntfIntfTyp).Pointer()
- mapStrIntfTypId = reflect.ValueOf(mapStrIntfTyp).Pointer()
- mapIntIntfTypId = reflect.ValueOf(mapIntIntfTyp).Pointer()
- mapInt64IntfTypId = reflect.ValueOf(mapInt64IntfTyp).Pointer()
- mapUintIntfTypId = reflect.ValueOf(mapUintIntfTyp).Pointer()
- mapUint64IntfTypId = reflect.ValueOf(mapUint64IntfTyp).Pointer()
- // Id = reflect.ValueOf().Pointer()
- // mapBySliceTypId = reflect.ValueOf(mapBySliceTyp).Pointer()
-
- binaryMarshalerTypId = reflect.ValueOf(binaryMarshalerTyp).Pointer()
- binaryUnmarshalerTypId = reflect.ValueOf(binaryUnmarshalerTyp).Pointer()
-
- intBitsize uint8 = uint8(reflect.TypeOf(int(0)).Bits())
- uintBitsize uint8 = uint8(reflect.TypeOf(uint(0)).Bits())
-
- bsAll0x00 = []byte{0, 0, 0, 0, 0, 0, 0, 0}
- bsAll0xff = []byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}
-)
-
-type binaryUnmarshaler interface {
- UnmarshalBinary(data []byte) error
-}
-
-type binaryMarshaler interface {
- MarshalBinary() (data []byte, err error)
-}
-
-// MapBySlice represents a slice which should be encoded as a map in the stream.
-// The slice contains a sequence of key-value pairs.
-type MapBySlice interface {
- MapBySlice()
-}
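-
-// Illustrative sketch of a named slice type opting in to map encoding by
-// implementing MapBySlice (the type name is hypothetical):
-//
-//   type pairs []interface{}
-//   func (pairs) MapBySlice() {}
-//   // Encode(pairs{"a", 1, "b", 2}) is written as a 2-entry stream map.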
-
-// WARNING: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.
-//
-// BasicHandle encapsulates the common options and extension functions.
-type BasicHandle struct {
- extHandle
- EncodeOptions
- DecodeOptions
-}
-
-// Handle is the interface for a specific encoding format.
-//
-// Typically, a Handle is pre-configured before first time use,
-// and not modified while in use. Such a pre-configured Handle
-// is safe for concurrent access.
-type Handle interface {
- writeExt() bool
- getBasicHandle() *BasicHandle
- newEncDriver(w encWriter) encDriver
- newDecDriver(r decReader) decDriver
-}
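-
-// For example, a handle is typically configured once and shared, with a fresh
-// Encoder or Decoder created per stream (names here are hypothetical):
-//
-//   var mh MsgpackHandle // configured at startup, then read-only
-//   func encodeTo(w io.Writer, v interface{}) error {
-//       return NewEncoder(w, &mh).Encode(v)
-//   }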
-
-// RawExt represents raw unprocessed extension data.
-type RawExt struct {
- Tag byte
- Data []byte
-}
-
-type extTypeTagFn struct {
- rtid uintptr
- rt reflect.Type
- tag byte
- encFn func(reflect.Value) ([]byte, error)
- decFn func(reflect.Value, []byte) error
-}
-
-type extHandle []*extTypeTagFn
-
-// AddExt registers an encode and decode function for a reflect.Type.
-// Note that the type must be a named type, and specifically not
-// a pointer or interface. An error is returned if that is not honored.
-//
-// To deregister an ext, call AddExt with a 0 tag, nil encfn and nil decfn.
-func (o *extHandle) AddExt(
- rt reflect.Type,
- tag byte,
- encfn func(reflect.Value) ([]byte, error),
- decfn func(reflect.Value, []byte) error,
-) (err error) {
- // o is a pointer, because we may need to initialize it
- if rt.PkgPath() == "" || rt.Kind() == reflect.Interface {
- err = fmt.Errorf("codec.Handle.AddExt: Takes named type, especially not a pointer or interface: %T",
- reflect.Zero(rt).Interface())
- return
- }
-
- // o cannot be nil, since it is always embedded in a Handle.
- // if nil, let it panic.
- // if o == nil {
- // err = errors.New("codec.Handle.AddExt: extHandle cannot be a nil pointer.")
- // return
- // }
-
- rtid := reflect.ValueOf(rt).Pointer()
- for _, v := range *o {
- if v.rtid == rtid {
- v.tag, v.encFn, v.decFn = tag, encfn, decfn
- return
- }
- }
-
- *o = append(*o, &extTypeTagFn{rtid, rt, tag, encfn, decfn})
- return
-}
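-
-// Illustrative sketch of registering an extension for a named type; the type,
-// tag and function bodies are hypothetical placeholders:
-//
-//   type ID uint64
-//   err := h.AddExt(reflect.TypeOf(ID(0)), 1,
-//       func(rv reflect.Value) ([]byte, error) { /* encode rv */ return nil, nil },
-//       func(rv reflect.Value, bs []byte) error { /* decode bs into rv */ return nil })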
-
-func (o extHandle) getExt(rtid uintptr) *extTypeTagFn {
- for _, v := range o {
- if v.rtid == rtid {
- return v
- }
- }
- return nil
-}
-
-func (o extHandle) getExtForTag(tag byte) *extTypeTagFn {
- for _, v := range o {
- if v.tag == tag {
- return v
- }
- }
- return nil
-}
-
-func (o extHandle) getDecodeExtForTag(tag byte) (
- rv reflect.Value, fn func(reflect.Value, []byte) error) {
- if x := o.getExtForTag(tag); x != nil {
- // ext is only registered for base
- rv = reflect.New(x.rt).Elem()
- fn = x.decFn
- }
- return
-}
-
-func (o extHandle) getDecodeExt(rtid uintptr) (tag byte, fn func(reflect.Value, []byte) error) {
- if x := o.getExt(rtid); x != nil {
- tag = x.tag
- fn = x.decFn
- }
- return
-}
-
-func (o extHandle) getEncodeExt(rtid uintptr) (tag byte, fn func(reflect.Value) ([]byte, error)) {
- if x := o.getExt(rtid); x != nil {
- tag = x.tag
- fn = x.encFn
- }
- return
-}
-
-type structFieldInfo struct {
- encName string // encode name
-
- // only one of 'i' or 'is' can be set. If 'i' is -1, then 'is' has been set.
-
- is []int // (recursive/embedded) field index in struct
- i int16 // field index in struct
- omitEmpty bool
- toArray bool // if field is _struct, is the toArray set?
-
- // tag string // tag
- // name string // field name
- // encNameBs []byte // encoded name as byte stream
- // ikind int // kind of the field as an int i.e. int(reflect.Kind)
-}
-
-func parseStructFieldInfo(fname string, stag string) *structFieldInfo {
- if fname == "" {
- panic("parseStructFieldInfo: No Field Name")
- }
- si := structFieldInfo{
- // name: fname,
- encName: fname,
- // tag: stag,
- }
-
- if stag != "" {
- for i, s := range strings.Split(stag, ",") {
- if i == 0 {
- if s != "" {
- si.encName = s
- }
- } else {
- switch s {
- case "omitempty":
- si.omitEmpty = true
- case "toarray":
- si.toArray = true
- }
- }
- }
- }
- // si.encNameBs = []byte(si.encName)
- return &si
-}
-
-type sfiSortedByEncName []*structFieldInfo
-
-func (p sfiSortedByEncName) Len() int {
- return len(p)
-}
-
-func (p sfiSortedByEncName) Less(i, j int) bool {
- return p[i].encName < p[j].encName
-}
-
-func (p sfiSortedByEncName) Swap(i, j int) {
- p[i], p[j] = p[j], p[i]
-}
-
-// typeInfo keeps information about each type referenced in the encode/decode sequence.
-//
-// During an encode/decode sequence, we work as below:
-// - If base is a built in type, en/decode base value
-// - If base is registered as an extension, en/decode base value
-// - If type is binary(M/Unm)arshaler, call Binary(M/Unm)arshal method
-// - Else decode appropriately based on the reflect.Kind
-type typeInfo struct {
- sfi []*structFieldInfo // sorted. Used when enc/dec struct to map.
- sfip []*structFieldInfo // unsorted. Used when enc/dec struct to array.
-
- rt reflect.Type
- rtid uintptr
-
- // base and baseId identify the base reflect.Type (and its id), after dereferencing
- // the pointers. E.g. the base type of ***time.Time is time.Time.
- base reflect.Type
- baseId uintptr
- baseIndir int8 // number of indirections to get to base
-
- mbs bool // base type (T or *T) is a MapBySlice
-
- m bool // base type (T or *T) is a binaryMarshaler
- unm bool // base type (T or *T) is a binaryUnmarshaler
- mIndir int8 // number of indirections to get to binaryMarshaler type
- unmIndir int8 // number of indirections to get to binaryUnmarshaler type
- toArray bool // whether this (struct) type should be encoded as an array
-}
-
-func (ti *typeInfo) indexForEncName(name string) int {
- //tisfi := ti.sfi
- const binarySearchThreshold = 16
- if sfilen := len(ti.sfi); sfilen < binarySearchThreshold {
- // linear search. faster than binary search in my testing up to 16-field structs.
- for i, si := range ti.sfi {
- if si.encName == name {
- return i
- }
- }
- } else {
- // binary search. adapted from sort/search.go.
- h, i, j := 0, 0, sfilen
- for i < j {
- h = i + (j-i)/2
- if ti.sfi[h].encName < name {
- i = h + 1
- } else {
- j = h
- }
- }
- if i < sfilen && ti.sfi[i].encName == name {
- return i
- }
- }
- return -1
-}
-
-func getTypeInfo(rtid uintptr, rt reflect.Type) (pti *typeInfo) {
- var ok bool
- cachedTypeInfoMutex.RLock()
- pti, ok = cachedTypeInfo[rtid]
- cachedTypeInfoMutex.RUnlock()
- if ok {
- return
- }
-
- cachedTypeInfoMutex.Lock()
- defer cachedTypeInfoMutex.Unlock()
- if pti, ok = cachedTypeInfo[rtid]; ok {
- return
- }
-
- ti := typeInfo{rt: rt, rtid: rtid}
- pti = &ti
-
- var indir int8
- if ok, indir = implementsIntf(rt, binaryMarshalerTyp); ok {
- ti.m, ti.mIndir = true, indir
- }
- if ok, indir = implementsIntf(rt, binaryUnmarshalerTyp); ok {
- ti.unm, ti.unmIndir = true, indir
- }
- if ok, _ = implementsIntf(rt, mapBySliceTyp); ok {
- ti.mbs = true
- }
-
- pt := rt
- var ptIndir int8
- // for ; pt.Kind() == reflect.Ptr; pt, ptIndir = pt.Elem(), ptIndir+1 { }
- for pt.Kind() == reflect.Ptr {
- pt = pt.Elem()
- ptIndir++
- }
- if ptIndir == 0 {
- ti.base = rt
- ti.baseId = rtid
- } else {
- ti.base = pt
- ti.baseId = reflect.ValueOf(pt).Pointer()
- ti.baseIndir = ptIndir
- }
-
- if rt.Kind() == reflect.Struct {
- var siInfo *structFieldInfo
- if f, ok := rt.FieldByName(structInfoFieldName); ok {
- siInfo = parseStructFieldInfo(structInfoFieldName, f.Tag.Get(structTagName))
- ti.toArray = siInfo.toArray
- }
- sfip := make([]*structFieldInfo, 0, rt.NumField())
- rgetTypeInfo(rt, nil, make(map[string]bool), &sfip, siInfo)
-
- // // try to put all si close together
- // const tryToPutAllStructFieldInfoTogether = true
- // if tryToPutAllStructFieldInfoTogether {
- // sfip2 := make([]structFieldInfo, len(sfip))
- // for i, si := range sfip {
- // sfip2[i] = *si
- // }
- // for i := range sfip {
- // sfip[i] = &sfip2[i]
- // }
- // }
-
- ti.sfip = make([]*structFieldInfo, len(sfip))
- ti.sfi = make([]*structFieldInfo, len(sfip))
- copy(ti.sfip, sfip)
- sort.Sort(sfiSortedByEncName(sfip))
- copy(ti.sfi, sfip)
- }
- // sfi = sfip
- cachedTypeInfo[rtid] = pti
- return
-}
-
-func rgetTypeInfo(rt reflect.Type, indexstack []int, fnameToHasTag map[string]bool,
- sfi *[]*structFieldInfo, siInfo *structFieldInfo,
-) {
- // for rt.Kind() == reflect.Ptr {
- // // indexstack = append(indexstack, 0)
- // rt = rt.Elem()
- // }
- for j := 0; j < rt.NumField(); j++ {
- f := rt.Field(j)
- stag := f.Tag.Get(structTagName)
- if stag == "-" {
- continue
- }
- if r1, _ := utf8.DecodeRuneInString(f.Name); r1 == utf8.RuneError || !unicode.IsUpper(r1) {
- continue
- }
- // if anonymous, there is no struct tag, and it's a struct (or pointer to struct), inline it.
- if f.Anonymous && stag == "" {
- ft := f.Type
- for ft.Kind() == reflect.Ptr {
- ft = ft.Elem()
- }
- if ft.Kind() == reflect.Struct {
- indexstack2 := append(append(make([]int, 0, len(indexstack)+4), indexstack...), j)
- rgetTypeInfo(ft, indexstack2, fnameToHasTag, sfi, siInfo)
- continue
- }
- }
- // do not let fields with the same name in embedded structs override a field at a higher level.
- // this must be done after the anonymous check, so that anonymous fields
- // can still include their child fields
- if _, ok := fnameToHasTag[f.Name]; ok {
- continue
- }
- si := parseStructFieldInfo(f.Name, stag)
- // si.ikind = int(f.Type.Kind())
- if len(indexstack) == 0 {
- si.i = int16(j)
- } else {
- si.i = -1
- si.is = append(append(make([]int, 0, len(indexstack)+4), indexstack...), j)
- }
-
- if siInfo != nil {
- if siInfo.omitEmpty {
- si.omitEmpty = true
- }
- }
- *sfi = append(*sfi, si)
- fnameToHasTag[f.Name] = stag != ""
- }
-}
-
-func panicToErr(err *error) {
- if recoverPanicToErr {
- if x := recover(); x != nil {
- //debug.PrintStack()
- panicValToErr(x, err)
- }
- }
-}
-
-func doPanic(tag string, format string, params ...interface{}) {
- params2 := make([]interface{}, len(params)+1)
- params2[0] = tag
- copy(params2[1:], params)
- panic(fmt.Errorf("%s: "+format, params2...))
-}
-
-func checkOverflowFloat32(f float64, doCheck bool) {
- if !doCheck {
- return
- }
- // check overflow (logic adapted from std pkg reflect/value.go OverflowFloat())
- f2 := f
- if f2 < 0 {
- f2 = -f
- }
- if math.MaxFloat32 < f2 && f2 <= math.MaxFloat64 {
- decErr("Overflow float32 value: %v", f2)
- }
-}
-
-func checkOverflow(ui uint64, i int64, bitsize uint8) {
- // check overflow (logic adapted from std pkg reflect/value.go OverflowUint())
- if bitsize == 0 {
- return
- }
- if i != 0 {
- if trunc := (i << (64 - bitsize)) >> (64 - bitsize); i != trunc {
- decErr("Overflow int value: %v", i)
- }
- }
- if ui != 0 {
- if trunc := (ui << (64 - bitsize)) >> (64 - bitsize); ui != trunc {
- decErr("Overflow uint value: %v", ui)
- }
- }
-}
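-
-// Worked example: with bitsize 8, i == 300 truncates via the shifts above to
-// (300<<56)>>56 == 44 != 300, so decErr reports an int overflow.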
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper_internal.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper_internal.go
deleted file mode 100644
index 58417da9..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/helper_internal.go
+++ /dev/null
@@ -1,127 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-// All non-std package dependencies live in this file,
-// so porting to a different environment is easy (just update these functions).
-
-import (
- "errors"
- "fmt"
- "math"
- "reflect"
-)
-
-var (
- raisePanicAfterRecover = false
- debugging = true
-)
-
-func panicValToErr(panicVal interface{}, err *error) {
- switch xerr := panicVal.(type) {
- case error:
- *err = xerr
- case string:
- *err = errors.New(xerr)
- default:
- *err = fmt.Errorf("%v", panicVal)
- }
- if raisePanicAfterRecover {
- panic(panicVal)
- }
- return
-}
-
-func isEmptyValueDeref(v reflect.Value, deref bool) bool {
- switch v.Kind() {
- case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
- return v.Len() == 0
- case reflect.Bool:
- return !v.Bool()
- case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
- return v.Int() == 0
- case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
- return v.Uint() == 0
- case reflect.Float32, reflect.Float64:
- return v.Float() == 0
- case reflect.Interface, reflect.Ptr:
- if deref {
- if v.IsNil() {
- return true
- }
- return isEmptyValueDeref(v.Elem(), deref)
- } else {
- return v.IsNil()
- }
- case reflect.Struct:
- // return true if all fields are empty. else return false.
-
- // we cannot use equality check, because some fields may be maps/slices/etc
- // and consequently the structs are not comparable.
- // return v.Interface() == reflect.Zero(v.Type()).Interface()
- for i, n := 0, v.NumField(); i < n; i++ {
- if !isEmptyValueDeref(v.Field(i), deref) {
- return false
- }
- }
- return true
- }
- return false
-}
-
-func isEmptyValue(v reflect.Value) bool {
- return isEmptyValueDeref(v, true)
-}
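-
-// For example, isEmptyValue reports true for 0, "", a nil pointer, and
-// struct{ N int }{}; it reports false for a pointer to a non-zero int,
-// because deref follows the pointer before checking.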
-
-func debugf(format string, args ...interface{}) {
- if debugging {
- if len(format) == 0 || format[len(format)-1] != '\n' {
- format = format + "\n"
- }
- fmt.Printf(format, args...)
- }
-}
-
-func pruneSignExt(v []byte, pos bool) (n int) {
- if len(v) < 2 {
- } else if pos && v[0] == 0 {
- for ; v[n] == 0 && n+1 < len(v) && (v[n+1]&(1<<7) == 0); n++ {
- }
- } else if !pos && v[0] == 0xff {
- for ; v[n] == 0xff && n+1 < len(v) && (v[n+1]&(1<<7) != 0); n++ {
- }
- }
- return
-}
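-
-// Worked examples: pruneSignExt([]byte{0x00, 0x7f}, true) returns 1, since
-// the leading zero is redundant sign padding; []byte{0x00, 0x80} returns 0,
-// because dropping the zero would flip the sign bit of the remaining bytes.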
-
-func implementsIntf(typ, iTyp reflect.Type) (success bool, indir int8) {
- if typ == nil {
- return
- }
- rt := typ
- // The type might be a pointer and we need to keep
- // dereferencing to the base type until we find an implementation.
- for {
- if rt.Implements(iTyp) {
- return true, indir
- }
- if p := rt; p.Kind() == reflect.Ptr {
- indir++
- if indir >= math.MaxInt8 { // insane number of indirections
- return false, 0
- }
- rt = p.Elem()
- continue
- }
- break
- }
- // No luck yet, but if this is a base type (non-pointer), the pointer might satisfy.
- if typ.Kind() != reflect.Ptr {
- // Not a pointer, but does the pointer work?
- if reflect.PtrTo(typ).Implements(iTyp) {
- return true, -1
- }
- }
- return false, 0
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack.go
deleted file mode 100644
index da0500d1..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack.go
+++ /dev/null
@@ -1,816 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-/*
-MSGPACK
-
-The msgpack-c implementation powers the C, C++, Python, Ruby, etc. libraries.
-We need to maintain compatibility with it and how it encodes integer values
-without caring about the type.
-
-For compatibility with the behaviour of the msgpack-c reference implementation:
- - Go intX (>0) and uintX
- IS ENCODED AS
- msgpack +ve fixnum, unsigned
- - Go intX (<0)
- IS ENCODED AS
- msgpack -ve fixnum, signed
-
-*/
-package codec
-
-import (
- "fmt"
- "io"
- "math"
- "net/rpc"
-)
-
-const (
- mpPosFixNumMin byte = 0x00
- mpPosFixNumMax = 0x7f
- mpFixMapMin = 0x80
- mpFixMapMax = 0x8f
- mpFixArrayMin = 0x90
- mpFixArrayMax = 0x9f
- mpFixStrMin = 0xa0
- mpFixStrMax = 0xbf
- mpNil = 0xc0
- _ = 0xc1
- mpFalse = 0xc2
- mpTrue = 0xc3
- mpFloat = 0xca
- mpDouble = 0xcb
- mpUint8 = 0xcc
- mpUint16 = 0xcd
- mpUint32 = 0xce
- mpUint64 = 0xcf
- mpInt8 = 0xd0
- mpInt16 = 0xd1
- mpInt32 = 0xd2
- mpInt64 = 0xd3
-
- // extensions below
- mpBin8 = 0xc4
- mpBin16 = 0xc5
- mpBin32 = 0xc6
- mpExt8 = 0xc7
- mpExt16 = 0xc8
- mpExt32 = 0xc9
- mpFixExt1 = 0xd4
- mpFixExt2 = 0xd5
- mpFixExt4 = 0xd6
- mpFixExt8 = 0xd7
- mpFixExt16 = 0xd8
-
- mpStr8 = 0xd9 // new
- mpStr16 = 0xda
- mpStr32 = 0xdb
-
- mpArray16 = 0xdc
- mpArray32 = 0xdd
-
- mpMap16 = 0xde
- mpMap32 = 0xdf
-
- mpNegFixNumMin = 0xe0
- mpNegFixNumMax = 0xff
-)
-
-// MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec
-// that the backend RPC service takes multiple arguments, which have been arranged
-// in sequence in the slice.
-//
-// The Codec then passes it AS-IS to the rpc service (without wrapping it in an
-// array of 1 element).
-type MsgpackSpecRpcMultiArgs []interface{}
-
-// msgpackContainerType specifies the properties of the different msgpack container kinds.
-type msgpackContainerType struct {
- fixCutoff int
- bFixMin, b8, b16, b32 byte
- hasFixMin, has8, has8Always bool
-}
-
-var (
- msgpackContainerStr = msgpackContainerType{32, mpFixStrMin, mpStr8, mpStr16, mpStr32, true, true, false}
- msgpackContainerBin = msgpackContainerType{0, 0, mpBin8, mpBin16, mpBin32, false, true, true}
- msgpackContainerList = msgpackContainerType{16, mpFixArrayMin, 0, mpArray16, mpArray32, true, false, false}
- msgpackContainerMap = msgpackContainerType{16, mpFixMapMin, 0, mpMap16, mpMap32, true, false, false}
-)
-
-//---------------------------------------------
-
-type msgpackEncDriver struct {
- w encWriter
- h *MsgpackHandle
-}
-
-func (e *msgpackEncDriver) isBuiltinType(rt uintptr) bool {
- //no builtin types. All encodings are based on kinds. Types supported as extensions.
- return false
-}
-
-func (e *msgpackEncDriver) encodeBuiltin(rt uintptr, v interface{}) {}
-
-func (e *msgpackEncDriver) encodeNil() {
- e.w.writen1(mpNil)
-}
-
-func (e *msgpackEncDriver) encodeInt(i int64) {
-
- switch {
- case i >= 0:
- e.encodeUint(uint64(i))
- case i >= -32:
- e.w.writen1(byte(i))
- case i >= math.MinInt8:
- e.w.writen2(mpInt8, byte(i))
- case i >= math.MinInt16:
- e.w.writen1(mpInt16)
- e.w.writeUint16(uint16(i))
- case i >= math.MinInt32:
- e.w.writen1(mpInt32)
- e.w.writeUint32(uint32(i))
- default:
- e.w.writen1(mpInt64)
- e.w.writeUint64(uint64(i))
- }
-}
-
-func (e *msgpackEncDriver) encodeUint(i uint64) {
- switch {
- case i <= math.MaxInt8:
- e.w.writen1(byte(i))
- case i <= math.MaxUint8:
- e.w.writen2(mpUint8, byte(i))
- case i <= math.MaxUint16:
- e.w.writen1(mpUint16)
- e.w.writeUint16(uint16(i))
- case i <= math.MaxUint32:
- e.w.writen1(mpUint32)
- e.w.writeUint32(uint32(i))
- default:
- e.w.writen1(mpUint64)
- e.w.writeUint64(uint64(i))
- }
-}
-
-func (e *msgpackEncDriver) encodeBool(b bool) {
- if b {
- e.w.writen1(mpTrue)
- } else {
- e.w.writen1(mpFalse)
- }
-}
-
-func (e *msgpackEncDriver) encodeFloat32(f float32) {
- e.w.writen1(mpFloat)
- e.w.writeUint32(math.Float32bits(f))
-}
-
-func (e *msgpackEncDriver) encodeFloat64(f float64) {
- e.w.writen1(mpDouble)
- e.w.writeUint64(math.Float64bits(f))
-}
-
-func (e *msgpackEncDriver) encodeExtPreamble(xtag byte, l int) {
- switch {
- case l == 1:
- e.w.writen2(mpFixExt1, xtag)
- case l == 2:
- e.w.writen2(mpFixExt2, xtag)
- case l == 4:
- e.w.writen2(mpFixExt4, xtag)
- case l == 8:
- e.w.writen2(mpFixExt8, xtag)
- case l == 16:
- e.w.writen2(mpFixExt16, xtag)
- case l < 256:
- e.w.writen2(mpExt8, byte(l))
- e.w.writen1(xtag)
- case l < 65536:
- e.w.writen1(mpExt16)
- e.w.writeUint16(uint16(l))
- e.w.writen1(xtag)
- default:
- e.w.writen1(mpExt32)
- e.w.writeUint32(uint32(l))
- e.w.writen1(xtag)
- }
-}
-
-func (e *msgpackEncDriver) encodeArrayPreamble(length int) {
- e.writeContainerLen(msgpackContainerList, length)
-}
-
-func (e *msgpackEncDriver) encodeMapPreamble(length int) {
- e.writeContainerLen(msgpackContainerMap, length)
-}
-
-func (e *msgpackEncDriver) encodeString(c charEncoding, s string) {
- if c == c_RAW && e.h.WriteExt {
- e.writeContainerLen(msgpackContainerBin, len(s))
- } else {
- e.writeContainerLen(msgpackContainerStr, len(s))
- }
- if len(s) > 0 {
- e.w.writestr(s)
- }
-}
-
-func (e *msgpackEncDriver) encodeSymbol(v string) {
- e.encodeString(c_UTF8, v)
-}
-
-func (e *msgpackEncDriver) encodeStringBytes(c charEncoding, bs []byte) {
- if c == c_RAW && e.h.WriteExt {
- e.writeContainerLen(msgpackContainerBin, len(bs))
- } else {
- e.writeContainerLen(msgpackContainerStr, len(bs))
- }
- if len(bs) > 0 {
- e.w.writeb(bs)
- }
-}
-
-func (e *msgpackEncDriver) writeContainerLen(ct msgpackContainerType, l int) {
- switch {
- case ct.hasFixMin && l < ct.fixCutoff:
- e.w.writen1(ct.bFixMin | byte(l))
- case ct.has8 && l < 256 && (ct.has8Always || e.h.WriteExt):
- e.w.writen2(ct.b8, uint8(l))
- case l < 65536:
- e.w.writen1(ct.b16)
- e.w.writeUint16(uint16(l))
- default:
- e.w.writen1(ct.b32)
- e.w.writeUint32(uint32(l))
- }
-}
-
-//---------------------------------------------
-
-type msgpackDecDriver struct {
- r decReader
- h *MsgpackHandle
- bd byte
- bdRead bool
- bdType valueType
-}
-
-func (d *msgpackDecDriver) isBuiltinType(rt uintptr) bool {
- //no builtin types. All encodings are based on kinds. Types supported as extensions.
- return false
-}
-
-func (d *msgpackDecDriver) decodeBuiltin(rt uintptr, v interface{}) {}
-
-// Note: This returns either a primitive (int, bool, etc) for non-containers,
-// or a containerType, or a specific type denoting nil or extension.
-// It is called when a nil interface{} is passed, leaving it up to the DecDriver
-// to introspect the stream and decide how best to decode.
-// It deciphers the value by looking at the stream first.
-func (d *msgpackDecDriver) decodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
- d.initReadNext()
- bd := d.bd
-
- switch bd {
- case mpNil:
- vt = valueTypeNil
- d.bdRead = false
- case mpFalse:
- vt = valueTypeBool
- v = false
- case mpTrue:
- vt = valueTypeBool
- v = true
-
- case mpFloat:
- vt = valueTypeFloat
- v = float64(math.Float32frombits(d.r.readUint32()))
- case mpDouble:
- vt = valueTypeFloat
- v = math.Float64frombits(d.r.readUint64())
-
- case mpUint8:
- vt = valueTypeUint
- v = uint64(d.r.readn1())
- case mpUint16:
- vt = valueTypeUint
- v = uint64(d.r.readUint16())
- case mpUint32:
- vt = valueTypeUint
- v = uint64(d.r.readUint32())
- case mpUint64:
- vt = valueTypeUint
- v = uint64(d.r.readUint64())
-
- case mpInt8:
- vt = valueTypeInt
- v = int64(int8(d.r.readn1()))
- case mpInt16:
- vt = valueTypeInt
- v = int64(int16(d.r.readUint16()))
- case mpInt32:
- vt = valueTypeInt
- v = int64(int32(d.r.readUint32()))
- case mpInt64:
- vt = valueTypeInt
- v = int64(int64(d.r.readUint64()))
-
- default:
- switch {
- case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
- // positive fixnum (always signed)
- vt = valueTypeInt
- v = int64(int8(bd))
- case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
- // negative fixnum
- vt = valueTypeInt
- v = int64(int8(bd))
- case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
- if d.h.RawToString {
- var rvm string
- vt = valueTypeString
- v = &rvm
- } else {
- var rvm = []byte{}
- vt = valueTypeBytes
- v = &rvm
- }
- decodeFurther = true
- case bd == mpBin8, bd == mpBin16, bd == mpBin32:
- var rvm = []byte{}
- vt = valueTypeBytes
- v = &rvm
- decodeFurther = true
- case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
- vt = valueTypeArray
- decodeFurther = true
- case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
- vt = valueTypeMap
- decodeFurther = true
- case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
- clen := d.readExtLen()
- var re RawExt
- re.Tag = d.r.readn1()
- re.Data = d.r.readn(clen)
- v = &re
- vt = valueTypeExt
- default:
- decErr("Nil-Deciphered DecodeValue: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
- }
- }
- if !decodeFurther {
- d.bdRead = false
- }
- return
-}
-
-// int can be decoded from msgpack type: intXXX or uintXXX
-func (d *msgpackDecDriver) decodeInt(bitsize uint8) (i int64) {
- switch d.bd {
- case mpUint8:
- i = int64(uint64(d.r.readn1()))
- case mpUint16:
- i = int64(uint64(d.r.readUint16()))
- case mpUint32:
- i = int64(uint64(d.r.readUint32()))
- case mpUint64:
- i = int64(d.r.readUint64())
- case mpInt8:
- i = int64(int8(d.r.readn1()))
- case mpInt16:
- i = int64(int16(d.r.readUint16()))
- case mpInt32:
- i = int64(int32(d.r.readUint32()))
- case mpInt64:
- i = int64(d.r.readUint64())
- default:
- switch {
- case d.bd >= mpPosFixNumMin && d.bd <= mpPosFixNumMax:
- i = int64(int8(d.bd))
- case d.bd >= mpNegFixNumMin && d.bd <= mpNegFixNumMax:
- i = int64(int8(d.bd))
- default:
- decErr("Unhandled single-byte unsigned integer value: %s: %x", msgBadDesc, d.bd)
- }
- }
-	// check overflow (logic adapted from std pkg reflect/value.go OverflowInt())
- if bitsize > 0 {
- if trunc := (i << (64 - bitsize)) >> (64 - bitsize); i != trunc {
- decErr("Overflow int value: %v", i)
- }
- }
- d.bdRead = false
- return
-}
-
-// uint can be decoded from msgpack type: intXXX or uintXXX
-func (d *msgpackDecDriver) decodeUint(bitsize uint8) (ui uint64) {
- switch d.bd {
- case mpUint8:
- ui = uint64(d.r.readn1())
- case mpUint16:
- ui = uint64(d.r.readUint16())
- case mpUint32:
- ui = uint64(d.r.readUint32())
- case mpUint64:
- ui = d.r.readUint64()
- case mpInt8:
- if i := int64(int8(d.r.readn1())); i >= 0 {
- ui = uint64(i)
- } else {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- case mpInt16:
- if i := int64(int16(d.r.readUint16())); i >= 0 {
- ui = uint64(i)
- } else {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- case mpInt32:
- if i := int64(int32(d.r.readUint32())); i >= 0 {
- ui = uint64(i)
- } else {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- case mpInt64:
- if i := int64(d.r.readUint64()); i >= 0 {
- ui = uint64(i)
- } else {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- default:
- switch {
- case d.bd >= mpPosFixNumMin && d.bd <= mpPosFixNumMax:
- ui = uint64(d.bd)
- case d.bd >= mpNegFixNumMin && d.bd <= mpNegFixNumMax:
- decErr("Assigning negative signed value: %v, to unsigned type", int(d.bd))
- default:
- decErr("Unhandled single-byte unsigned integer value: %s: %x", msgBadDesc, d.bd)
- }
- }
-	// check overflow (logic adapted from std pkg reflect/value.go OverflowUint())
- if bitsize > 0 {
- if trunc := (ui << (64 - bitsize)) >> (64 - bitsize); ui != trunc {
- decErr("Overflow uint value: %v", ui)
- }
- }
- d.bdRead = false
- return
-}
-
-// float can either be decoded from msgpack type: float, double or intX
-func (d *msgpackDecDriver) decodeFloat(chkOverflow32 bool) (f float64) {
- switch d.bd {
- case mpFloat:
- f = float64(math.Float32frombits(d.r.readUint32()))
- case mpDouble:
- f = math.Float64frombits(d.r.readUint64())
- default:
- f = float64(d.decodeInt(0))
- }
- checkOverflowFloat32(f, chkOverflow32)
- d.bdRead = false
- return
-}
-
-// bool can be decoded from bool, fixnum 0 or 1.
-func (d *msgpackDecDriver) decodeBool() (b bool) {
- switch d.bd {
- case mpFalse, 0:
- // b = false
- case mpTrue, 1:
- b = true
- default:
- decErr("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
- }
- d.bdRead = false
- return
-}
-
-func (d *msgpackDecDriver) decodeString() (s string) {
- clen := d.readContainerLen(msgpackContainerStr)
- if clen > 0 {
- s = string(d.r.readn(clen))
- }
- d.bdRead = false
- return
-}
-
-// Callers must check if changed=true (to decide whether to replace the one they have)
-func (d *msgpackDecDriver) decodeBytes(bs []byte) (bsOut []byte, changed bool) {
- // bytes can be decoded from msgpackContainerStr or msgpackContainerBin
- var clen int
- switch d.bd {
- case mpBin8, mpBin16, mpBin32:
- clen = d.readContainerLen(msgpackContainerBin)
- default:
- clen = d.readContainerLen(msgpackContainerStr)
- }
- // if clen < 0 {
- // changed = true
- // panic("length cannot be zero. this cannot be nil.")
- // }
- if clen > 0 {
- // if no contents in stream, don't update the passed byteslice
- if len(bs) != clen {
- // Return changed=true if length of passed slice diff from length of bytes in stream
- if len(bs) > clen {
- bs = bs[:clen]
- } else {
- bs = make([]byte, clen)
- }
- bsOut = bs
- changed = true
- }
- d.r.readb(bs)
- }
- d.bdRead = false
- return
-}
-
-// Every top-level decode func (i.e. decodeValue, decode) must call this first.
-func (d *msgpackDecDriver) initReadNext() {
- if d.bdRead {
- return
- }
- d.bd = d.r.readn1()
- d.bdRead = true
- d.bdType = valueTypeUnset
-}
-
-func (d *msgpackDecDriver) currentEncodedType() valueType {
- if d.bdType == valueTypeUnset {
- bd := d.bd
- switch bd {
- case mpNil:
- d.bdType = valueTypeNil
- case mpFalse, mpTrue:
- d.bdType = valueTypeBool
- case mpFloat, mpDouble:
- d.bdType = valueTypeFloat
- case mpUint8, mpUint16, mpUint32, mpUint64:
- d.bdType = valueTypeUint
- case mpInt8, mpInt16, mpInt32, mpInt64:
- d.bdType = valueTypeInt
- default:
- switch {
- case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
- d.bdType = valueTypeInt
- case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
- d.bdType = valueTypeInt
- case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
- if d.h.RawToString {
- d.bdType = valueTypeString
- } else {
- d.bdType = valueTypeBytes
- }
- case bd == mpBin8, bd == mpBin16, bd == mpBin32:
- d.bdType = valueTypeBytes
- case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
- d.bdType = valueTypeArray
- case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
- d.bdType = valueTypeMap
- case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
- d.bdType = valueTypeExt
- default:
- decErr("currentEncodedType: Undeciphered descriptor: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
- }
- }
- }
- return d.bdType
-}
-
-func (d *msgpackDecDriver) tryDecodeAsNil() bool {
- if d.bd == mpNil {
- d.bdRead = false
- return true
- }
- return false
-}
-
-func (d *msgpackDecDriver) readContainerLen(ct msgpackContainerType) (clen int) {
- bd := d.bd
- switch {
- case bd == mpNil:
- clen = -1 // to represent nil
- case bd == ct.b8:
- clen = int(d.r.readn1())
- case bd == ct.b16:
- clen = int(d.r.readUint16())
- case bd == ct.b32:
- clen = int(d.r.readUint32())
- case (ct.bFixMin & bd) == ct.bFixMin:
- clen = int(ct.bFixMin ^ bd)
- default:
- decErr("readContainerLen: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
- }
- d.bdRead = false
- return
-}
-
-func (d *msgpackDecDriver) readMapLen() int {
- return d.readContainerLen(msgpackContainerMap)
-}
-
-func (d *msgpackDecDriver) readArrayLen() int {
- return d.readContainerLen(msgpackContainerList)
-}
-
-func (d *msgpackDecDriver) readExtLen() (clen int) {
- switch d.bd {
- case mpNil:
- clen = -1 // to represent nil
- case mpFixExt1:
- clen = 1
- case mpFixExt2:
- clen = 2
- case mpFixExt4:
- clen = 4
- case mpFixExt8:
- clen = 8
- case mpFixExt16:
- clen = 16
- case mpExt8:
- clen = int(d.r.readn1())
- case mpExt16:
- clen = int(d.r.readUint16())
- case mpExt32:
- clen = int(d.r.readUint32())
- default:
- decErr("decoding ext bytes: found unexpected byte: %x", d.bd)
- }
- return
-}
-
-func (d *msgpackDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
- xbd := d.bd
- switch {
- case xbd == mpBin8, xbd == mpBin16, xbd == mpBin32:
- xbs, _ = d.decodeBytes(nil)
- case xbd == mpStr8, xbd == mpStr16, xbd == mpStr32,
- xbd >= mpFixStrMin && xbd <= mpFixStrMax:
- xbs = []byte(d.decodeString())
- default:
- clen := d.readExtLen()
- xtag = d.r.readn1()
- if verifyTag && xtag != tag {
- decErr("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
- }
- xbs = d.r.readn(clen)
- }
- d.bdRead = false
- return
-}
-
-//--------------------------------------------------
-
-//MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.
-type MsgpackHandle struct {
- BasicHandle
-
- // RawToString controls how raw bytes are decoded into a nil interface{}.
- RawToString bool
- // WriteExt flag supports encoding configured extensions with extension tags.
- // It also controls whether other elements of the new spec are encoded (ie Str8).
- //
- // With WriteExt=false, configured extensions are serialized as raw bytes
- // and Str8 is not encoded.
- //
- // A stream can still be decoded into a typed value, provided an appropriate value
- // is provided, but the type cannot be inferred from the stream. If no appropriate
- // type is provided (e.g. decoding into a nil interface{}), you get back
- // a []byte or string based on the setting of RawToString.
- WriteExt bool
-}
-
-func (h *MsgpackHandle) newEncDriver(w encWriter) encDriver {
- return &msgpackEncDriver{w: w, h: h}
-}
-
-func (h *MsgpackHandle) newDecDriver(r decReader) decDriver {
- return &msgpackDecDriver{r: r, h: h}
-}
-
-func (h *MsgpackHandle) writeExt() bool {
- return h.WriteExt
-}
-
-func (h *MsgpackHandle) getBasicHandle() *BasicHandle {
- return &h.BasicHandle
-}
-
-//--------------------------------------------------
-
-type msgpackSpecRpcCodec struct {
- rpcCodec
-}
-
-// /////////////// Spec RPC Codec ///////////////////
-func (c *msgpackSpecRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
-	// WriteRequest can write to both a Go service and other services that do
-	// not abide by the one-argument rule of a Go service.
-	// We discriminate based on whether the body is a MsgpackSpecRpcMultiArgs.
- var bodyArr []interface{}
- if m, ok := body.(MsgpackSpecRpcMultiArgs); ok {
- bodyArr = ([]interface{})(m)
- } else {
- bodyArr = []interface{}{body}
- }
- r2 := []interface{}{0, uint32(r.Seq), r.ServiceMethod, bodyArr}
- return c.write(r2, nil, false, true)
-}
-
-func (c *msgpackSpecRpcCodec) WriteResponse(r *rpc.Response, body interface{}) error {
- var moe interface{}
- if r.Error != "" {
- moe = r.Error
- }
- if moe != nil && body != nil {
- body = nil
- }
- r2 := []interface{}{1, uint32(r.Seq), moe, body}
- return c.write(r2, nil, false, true)
-}
-
-func (c *msgpackSpecRpcCodec) ReadResponseHeader(r *rpc.Response) error {
- return c.parseCustomHeader(1, &r.Seq, &r.Error)
-}
-
-func (c *msgpackSpecRpcCodec) ReadRequestHeader(r *rpc.Request) error {
- return c.parseCustomHeader(0, &r.Seq, &r.ServiceMethod)
-}
-
-func (c *msgpackSpecRpcCodec) ReadRequestBody(body interface{}) error {
- if body == nil { // read and discard
- return c.read(nil)
- }
- bodyArr := []interface{}{body}
- return c.read(&bodyArr)
-}
-
-func (c *msgpackSpecRpcCodec) parseCustomHeader(expectTypeByte byte, msgid *uint64, methodOrError *string) (err error) {
-
- if c.cls {
- return io.EOF
- }
-
- // We read the response header by hand
- // so that the body can be decoded on its own from the stream at a later time.
-
- const fia byte = 0x94 //four item array descriptor value
- // Not sure why the panic of EOF is swallowed above.
- // if bs1 := c.dec.r.readn1(); bs1 != fia {
- // err = fmt.Errorf("Unexpected value for array descriptor: Expecting %v. Received %v", fia, bs1)
- // return
- // }
- var b byte
- b, err = c.br.ReadByte()
- if err != nil {
- return
- }
- if b != fia {
- err = fmt.Errorf("Unexpected value for array descriptor: Expecting %v. Received %v", fia, b)
- return
- }
-
- if err = c.read(&b); err != nil {
- return
- }
- if b != expectTypeByte {
- err = fmt.Errorf("Unexpected byte descriptor in header. Expecting %v. Received %v", expectTypeByte, b)
- return
- }
- if err = c.read(msgid); err != nil {
- return
- }
- if err = c.read(methodOrError); err != nil {
- return
- }
- return
-}
-
-//--------------------------------------------------
-
-// msgpackSpecRpc is the implementation of Rpc that uses the custom communication protocol
-// as defined in the msgpack-rpc spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
-type msgpackSpecRpc struct{}
-
-// MsgpackSpecRpc implements Rpc using the communication protocol defined in
-// the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .
-// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
-var MsgpackSpecRpc msgpackSpecRpc
-
-func (x msgpackSpecRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
- return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
-}
-
-func (x msgpackSpecRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
- return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
-}
-
-var _ decDriver = (*msgpackDecDriver)(nil)
-var _ encDriver = (*msgpackEncDriver)(nil)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack_test.py b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack_test.py
deleted file mode 100755
index e933838c..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/msgpack_test.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/env python
-
-# This will create golden files in a directory passed to it.
-# A Test calls this internally to create the golden files
-# So it can process them (so we don't have to checkin the files).
-
-import msgpack, msgpackrpc, sys, os, threading
-
-def get_test_data_list():
- # get list with all primitive types, and a combo type
- l0 = [
- -8,
- -1616,
- -32323232,
- -6464646464646464,
- 192,
- 1616,
- 32323232,
- 6464646464646464,
- 192,
- -3232.0,
- -6464646464.0,
- 3232.0,
- 6464646464.0,
- False,
- True,
- None,
- "someday",
- "",
- "bytestring",
- 1328176922000002000,
- -2206187877999998000,
- 0,
- -6795364578871345152
- ]
- l1 = [
- { "true": True,
- "false": False },
- { "true": "True",
- "false": False,
- "uint16(1616)": 1616 },
- { "list": [1616, 32323232, True, -3232.0, {"TRUE":True, "FALSE":False}, [True, False] ],
- "int32":32323232, "bool": True,
- "LONG STRING": "123456789012345678901234567890123456789012345678901234567890",
- "SHORT STRING": "1234567890" },
- { True: "true", 8: False, "false": 0 }
- ]
-
- l = []
- l.extend(l0)
- l.append(l0)
- l.extend(l1)
- return l
-
-def build_test_data(destdir):
- l = get_test_data_list()
- for i in range(len(l)):
- packer = msgpack.Packer()
- serialized = packer.pack(l[i])
- f = open(os.path.join(destdir, str(i) + '.golden'), 'wb')
- f.write(serialized)
- f.close()
-
-def doRpcServer(port, stopTimeSec):
- class EchoHandler(object):
- def Echo123(self, msg1, msg2, msg3):
- return ("1:%s 2:%s 3:%s" % (msg1, msg2, msg3))
- def EchoStruct(self, msg):
- return ("%s" % msg)
-
- addr = msgpackrpc.Address('localhost', port)
- server = msgpackrpc.Server(EchoHandler())
- server.listen(addr)
- # run thread to stop it after stopTimeSec seconds if > 0
- if stopTimeSec > 0:
- def myStopRpcServer():
- server.stop()
- t = threading.Timer(stopTimeSec, myStopRpcServer)
- t.start()
- server.start()
-
-def doRpcClientToPythonSvc(port):
- address = msgpackrpc.Address('localhost', port)
- client = msgpackrpc.Client(address, unpack_encoding='utf-8')
-	print(client.call("Echo123", "A1", "B2", "C3"))
-	print(client.call("EchoStruct", {"A": "Aa", "B": "Bb", "C": "Cc"}))
-
-def doRpcClientToGoSvc(port):
- # print ">>>> port: ", port, " <<<<<"
- address = msgpackrpc.Address('localhost', port)
- client = msgpackrpc.Client(address, unpack_encoding='utf-8')
-	print(client.call("TestRpcInt.Echo123", ["A1", "B2", "C3"]))
-	print(client.call("TestRpcInt.EchoStruct", {"A": "Aa", "B": "Bb", "C": "Cc"}))
-
-def doMain(args):
- if len(args) == 2 and args[0] == "testdata":
- build_test_data(args[1])
- elif len(args) == 3 and args[0] == "rpc-server":
- doRpcServer(int(args[1]), int(args[2]))
- elif len(args) == 2 and args[0] == "rpc-client-python-service":
- doRpcClientToPythonSvc(int(args[1]))
- elif len(args) == 2 and args[0] == "rpc-client-go-service":
- doRpcClientToGoSvc(int(args[1]))
- else:
- print("Usage: msgpack_test.py " +
- "[testdata|rpc-server|rpc-client-python-service|rpc-client-go-service] ...")
-
-if __name__ == "__main__":
- doMain(sys.argv[1:])
-
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/rpc.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/rpc.go
deleted file mode 100644
index d014dbdc..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/rpc.go
+++ /dev/null
@@ -1,152 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import (
- "bufio"
- "io"
- "net/rpc"
- "sync"
-)
-
-// Rpc provides an rpc Server or Client Codec for rpc communication.
-type Rpc interface {
- ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
- ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec
-}
-
-// RpcCodecBuffered allows access to the underlying bufio.Reader/Writer
-// used by the rpc connection. It accommodates use-cases where the connection
-// should be used by rpc and non-rpc functions, e.g. streaming a file after
-// sending an rpc response.
-type RpcCodecBuffered interface {
- BufferedReader() *bufio.Reader
- BufferedWriter() *bufio.Writer
-}
-
-// -------------------------------------
-
-// rpcCodec defines the struct members and common methods.
-type rpcCodec struct {
- rwc io.ReadWriteCloser
- dec *Decoder
- enc *Encoder
- bw *bufio.Writer
- br *bufio.Reader
- mu sync.Mutex
- cls bool
-}
-
-func newRPCCodec(conn io.ReadWriteCloser, h Handle) rpcCodec {
- bw := bufio.NewWriter(conn)
- br := bufio.NewReader(conn)
- return rpcCodec{
- rwc: conn,
- bw: bw,
- br: br,
- enc: NewEncoder(bw, h),
- dec: NewDecoder(br, h),
- }
-}
-
-func (c *rpcCodec) BufferedReader() *bufio.Reader {
- return c.br
-}
-
-func (c *rpcCodec) BufferedWriter() *bufio.Writer {
- return c.bw
-}
-
-func (c *rpcCodec) write(obj1, obj2 interface{}, writeObj2, doFlush bool) (err error) {
- if c.cls {
- return io.EOF
- }
- if err = c.enc.Encode(obj1); err != nil {
- return
- }
- if writeObj2 {
- if err = c.enc.Encode(obj2); err != nil {
- return
- }
- }
- if doFlush && c.bw != nil {
- return c.bw.Flush()
- }
- return
-}
-
-func (c *rpcCodec) read(obj interface{}) (err error) {
- if c.cls {
- return io.EOF
- }
- //If nil is passed in, we should still attempt to read content to nowhere.
- if obj == nil {
- var obj2 interface{}
- return c.dec.Decode(&obj2)
- }
- return c.dec.Decode(obj)
-}
-
-func (c *rpcCodec) Close() error {
- if c.cls {
- return io.EOF
- }
- c.cls = true
- return c.rwc.Close()
-}
-
-func (c *rpcCodec) ReadResponseBody(body interface{}) error {
- return c.read(body)
-}
-
-// -------------------------------------
-
-type goRpcCodec struct {
- rpcCodec
-}
-
-func (c *goRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
- // Must protect for concurrent access as per API
- c.mu.Lock()
- defer c.mu.Unlock()
- return c.write(r, body, true, true)
-}
-
-func (c *goRpcCodec) WriteResponse(r *rpc.Response, body interface{}) error {
- c.mu.Lock()
- defer c.mu.Unlock()
- return c.write(r, body, true, true)
-}
-
-func (c *goRpcCodec) ReadResponseHeader(r *rpc.Response) error {
- return c.read(r)
-}
-
-func (c *goRpcCodec) ReadRequestHeader(r *rpc.Request) error {
- return c.read(r)
-}
-
-func (c *goRpcCodec) ReadRequestBody(body interface{}) error {
- return c.read(body)
-}
-
-// -------------------------------------
-
-// goRpc is the implementation of Rpc that uses the communication protocol
-// as defined in net/rpc package.
-type goRpc struct{}
-
-// GoRpc implements Rpc using the communication protocol defined in net/rpc package.
-// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
-var GoRpc goRpc
-
-func (x goRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
- return &goRpcCodec{newRPCCodec(conn, h)}
-}
-
-func (x goRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
- return &goRpcCodec{newRPCCodec(conn, h)}
-}
-
-var _ RpcCodecBuffered = (*rpcCodec)(nil) // ensure *rpcCodec implements RpcCodecBuffered
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/simple.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/simple.go
deleted file mode 100644
index 9e4d148a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/simple.go
+++ /dev/null
@@ -1,461 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import "math"
-
-const (
- _ uint8 = iota
- simpleVdNil = 1
- simpleVdFalse = 2
- simpleVdTrue = 3
- simpleVdFloat32 = 4
- simpleVdFloat64 = 5
-
- // each lasts for 4 (ie n, n+1, n+2, n+3)
- simpleVdPosInt = 8
- simpleVdNegInt = 12
-
-	// containers: each spans 8 descriptors (ie n, n+1, n+2, ... n+7)
- simpleVdString = 216
- simpleVdByteArray = 224
- simpleVdArray = 232
- simpleVdMap = 240
- simpleVdExt = 248
-)
-
-type simpleEncDriver struct {
- h *SimpleHandle
- w encWriter
- //b [8]byte
-}
-
-func (e *simpleEncDriver) isBuiltinType(rt uintptr) bool {
- return false
-}
-
-func (e *simpleEncDriver) encodeBuiltin(rt uintptr, v interface{}) {
-}
-
-func (e *simpleEncDriver) encodeNil() {
- e.w.writen1(simpleVdNil)
-}
-
-func (e *simpleEncDriver) encodeBool(b bool) {
- if b {
- e.w.writen1(simpleVdTrue)
- } else {
- e.w.writen1(simpleVdFalse)
- }
-}
-
-func (e *simpleEncDriver) encodeFloat32(f float32) {
- e.w.writen1(simpleVdFloat32)
- e.w.writeUint32(math.Float32bits(f))
-}
-
-func (e *simpleEncDriver) encodeFloat64(f float64) {
- e.w.writen1(simpleVdFloat64)
- e.w.writeUint64(math.Float64bits(f))
-}
-
-func (e *simpleEncDriver) encodeInt(v int64) {
- if v < 0 {
- e.encUint(uint64(-v), simpleVdNegInt)
- } else {
- e.encUint(uint64(v), simpleVdPosInt)
- }
-}
-
-func (e *simpleEncDriver) encodeUint(v uint64) {
- e.encUint(v, simpleVdPosInt)
-}
-
-func (e *simpleEncDriver) encUint(v uint64, bd uint8) {
- switch {
- case v <= math.MaxUint8:
- e.w.writen2(bd, uint8(v))
- case v <= math.MaxUint16:
- e.w.writen1(bd + 1)
- e.w.writeUint16(uint16(v))
- case v <= math.MaxUint32:
- e.w.writen1(bd + 2)
- e.w.writeUint32(uint32(v))
- case v <= math.MaxUint64:
- e.w.writen1(bd + 3)
- e.w.writeUint64(v)
- }
-}
-
-func (e *simpleEncDriver) encLen(bd byte, length int) {
- switch {
- case length == 0:
- e.w.writen1(bd)
- case length <= math.MaxUint8:
- e.w.writen1(bd + 1)
- e.w.writen1(uint8(length))
- case length <= math.MaxUint16:
- e.w.writen1(bd + 2)
- e.w.writeUint16(uint16(length))
- case int64(length) <= math.MaxUint32:
- e.w.writen1(bd + 3)
- e.w.writeUint32(uint32(length))
- default:
- e.w.writen1(bd + 4)
- e.w.writeUint64(uint64(length))
- }
-}
-
-func (e *simpleEncDriver) encodeExtPreamble(xtag byte, length int) {
- e.encLen(simpleVdExt, length)
- e.w.writen1(xtag)
-}
-
-func (e *simpleEncDriver) encodeArrayPreamble(length int) {
- e.encLen(simpleVdArray, length)
-}
-
-func (e *simpleEncDriver) encodeMapPreamble(length int) {
- e.encLen(simpleVdMap, length)
-}
-
-func (e *simpleEncDriver) encodeString(c charEncoding, v string) {
- e.encLen(simpleVdString, len(v))
- e.w.writestr(v)
-}
-
-func (e *simpleEncDriver) encodeSymbol(v string) {
- e.encodeString(c_UTF8, v)
-}
-
-func (e *simpleEncDriver) encodeStringBytes(c charEncoding, v []byte) {
- e.encLen(simpleVdByteArray, len(v))
- e.w.writeb(v)
-}
-
-//------------------------------------
-
-type simpleDecDriver struct {
- h *SimpleHandle
- r decReader
- bdRead bool
- bdType valueType
- bd byte
- //b [8]byte
-}
-
-func (d *simpleDecDriver) initReadNext() {
- if d.bdRead {
- return
- }
- d.bd = d.r.readn1()
- d.bdRead = true
- d.bdType = valueTypeUnset
-}
-
-func (d *simpleDecDriver) currentEncodedType() valueType {
- if d.bdType == valueTypeUnset {
- switch d.bd {
- case simpleVdNil:
- d.bdType = valueTypeNil
- case simpleVdTrue, simpleVdFalse:
- d.bdType = valueTypeBool
- case simpleVdPosInt, simpleVdPosInt + 1, simpleVdPosInt + 2, simpleVdPosInt + 3:
- d.bdType = valueTypeUint
- case simpleVdNegInt, simpleVdNegInt + 1, simpleVdNegInt + 2, simpleVdNegInt + 3:
- d.bdType = valueTypeInt
- case simpleVdFloat32, simpleVdFloat64:
- d.bdType = valueTypeFloat
- case simpleVdString, simpleVdString + 1, simpleVdString + 2, simpleVdString + 3, simpleVdString + 4:
- d.bdType = valueTypeString
- case simpleVdByteArray, simpleVdByteArray + 1, simpleVdByteArray + 2, simpleVdByteArray + 3, simpleVdByteArray + 4:
- d.bdType = valueTypeBytes
- case simpleVdExt, simpleVdExt + 1, simpleVdExt + 2, simpleVdExt + 3, simpleVdExt + 4:
- d.bdType = valueTypeExt
- case simpleVdArray, simpleVdArray + 1, simpleVdArray + 2, simpleVdArray + 3, simpleVdArray + 4:
- d.bdType = valueTypeArray
- case simpleVdMap, simpleVdMap + 1, simpleVdMap + 2, simpleVdMap + 3, simpleVdMap + 4:
- d.bdType = valueTypeMap
- default:
- decErr("currentEncodedType: Unrecognized d.vd: 0x%x", d.bd)
- }
- }
- return d.bdType
-}
-
-func (d *simpleDecDriver) tryDecodeAsNil() bool {
- if d.bd == simpleVdNil {
- d.bdRead = false
- return true
- }
- return false
-}
-
-func (d *simpleDecDriver) isBuiltinType(rt uintptr) bool {
- return false
-}
-
-func (d *simpleDecDriver) decodeBuiltin(rt uintptr, v interface{}) {
-}
-
-func (d *simpleDecDriver) decIntAny() (ui uint64, i int64, neg bool) {
- switch d.bd {
- case simpleVdPosInt:
- ui = uint64(d.r.readn1())
- i = int64(ui)
- case simpleVdPosInt + 1:
- ui = uint64(d.r.readUint16())
- i = int64(ui)
- case simpleVdPosInt + 2:
- ui = uint64(d.r.readUint32())
- i = int64(ui)
- case simpleVdPosInt + 3:
- ui = uint64(d.r.readUint64())
- i = int64(ui)
- case simpleVdNegInt:
- ui = uint64(d.r.readn1())
- i = -(int64(ui))
- neg = true
- case simpleVdNegInt + 1:
- ui = uint64(d.r.readUint16())
- i = -(int64(ui))
- neg = true
-		v = int64(d.r.readUint64())
- ui = uint64(d.r.readUint32())
- i = -(int64(ui))
- neg = true
- case simpleVdNegInt + 3:
- ui = uint64(d.r.readUint64())
- i = -(int64(ui))
- neg = true
- default:
- decErr("decIntAny: Integer only valid from pos/neg integer1..8. Invalid descriptor: %v", d.bd)
- }
- // don't do this check, because callers may only want the unsigned value.
- // if ui > math.MaxInt64 {
- // decErr("decIntAny: Integer out of range for signed int64: %v", ui)
- // }
- return
-}
-
-func (d *simpleDecDriver) decodeInt(bitsize uint8) (i int64) {
- _, i, _ = d.decIntAny()
- checkOverflow(0, i, bitsize)
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) decodeUint(bitsize uint8) (ui uint64) {
- ui, i, neg := d.decIntAny()
- if neg {
- decErr("Assigning negative signed value: %v, to unsigned type", i)
- }
- checkOverflow(ui, 0, bitsize)
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) decodeFloat(chkOverflow32 bool) (f float64) {
- switch d.bd {
- case simpleVdFloat32:
- f = float64(math.Float32frombits(d.r.readUint32()))
- case simpleVdFloat64:
- f = math.Float64frombits(d.r.readUint64())
- default:
- if d.bd >= simpleVdPosInt && d.bd <= simpleVdNegInt+3 {
- _, i, _ := d.decIntAny()
- f = float64(i)
- } else {
- decErr("Float only valid from float32/64: Invalid descriptor: %v", d.bd)
- }
- }
- checkOverflowFloat32(f, chkOverflow32)
- d.bdRead = false
- return
-}
-
-// bool can be decoded from bool only (single byte).
-func (d *simpleDecDriver) decodeBool() (b bool) {
- switch d.bd {
- case simpleVdTrue:
- b = true
- case simpleVdFalse:
- default:
- decErr("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
- }
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) readMapLen() (length int) {
- d.bdRead = false
- return d.decLen()
-}
-
-func (d *simpleDecDriver) readArrayLen() (length int) {
- d.bdRead = false
- return d.decLen()
-}
-
-func (d *simpleDecDriver) decLen() int {
- switch d.bd % 8 {
- case 0:
- return 0
- case 1:
- return int(d.r.readn1())
- case 2:
- return int(d.r.readUint16())
- case 3:
- ui := uint64(d.r.readUint32())
- checkOverflow(ui, 0, intBitsize)
- return int(ui)
- case 4:
- ui := d.r.readUint64()
- checkOverflow(ui, 0, intBitsize)
- return int(ui)
- }
- decErr("decLen: Cannot read length: bd%8 must be in range 0..4. Got: %d", d.bd%8)
- return -1
-}
-
-func (d *simpleDecDriver) decodeString() (s string) {
- s = string(d.r.readn(d.decLen()))
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) decodeBytes(bs []byte) (bsOut []byte, changed bool) {
- if clen := d.decLen(); clen > 0 {
- // if no contents in stream, don't update the passed byteslice
- if len(bs) != clen {
- if len(bs) > clen {
- bs = bs[:clen]
- } else {
- bs = make([]byte, clen)
- }
- bsOut = bs
- changed = true
- }
- d.r.readb(bs)
- }
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
- switch d.bd {
- case simpleVdExt, simpleVdExt + 1, simpleVdExt + 2, simpleVdExt + 3, simpleVdExt + 4:
- l := d.decLen()
- xtag = d.r.readn1()
- if verifyTag && xtag != tag {
- decErr("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
- }
- xbs = d.r.readn(l)
- case simpleVdByteArray, simpleVdByteArray + 1, simpleVdByteArray + 2, simpleVdByteArray + 3, simpleVdByteArray + 4:
- xbs, _ = d.decodeBytes(nil)
- default:
- decErr("Invalid d.vd for extensions (Expecting extensions or byte array). Got: 0x%x", d.bd)
- }
- d.bdRead = false
- return
-}
-
-func (d *simpleDecDriver) decodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
- d.initReadNext()
-
- switch d.bd {
- case simpleVdNil:
- vt = valueTypeNil
- case simpleVdFalse:
- vt = valueTypeBool
- v = false
- case simpleVdTrue:
- vt = valueTypeBool
- v = true
- case simpleVdPosInt, simpleVdPosInt + 1, simpleVdPosInt + 2, simpleVdPosInt + 3:
- vt = valueTypeUint
- ui, _, _ := d.decIntAny()
- v = ui
- case simpleVdNegInt, simpleVdNegInt + 1, simpleVdNegInt + 2, simpleVdNegInt + 3:
- vt = valueTypeInt
- _, i, _ := d.decIntAny()
- v = i
- case simpleVdFloat32:
- vt = valueTypeFloat
- v = d.decodeFloat(true)
- case simpleVdFloat64:
- vt = valueTypeFloat
- v = d.decodeFloat(false)
- case simpleVdString, simpleVdString + 1, simpleVdString + 2, simpleVdString + 3, simpleVdString + 4:
- vt = valueTypeString
- v = d.decodeString()
- case simpleVdByteArray, simpleVdByteArray + 1, simpleVdByteArray + 2, simpleVdByteArray + 3, simpleVdByteArray + 4:
- vt = valueTypeBytes
- v, _ = d.decodeBytes(nil)
- case simpleVdExt, simpleVdExt + 1, simpleVdExt + 2, simpleVdExt + 3, simpleVdExt + 4:
- vt = valueTypeExt
- l := d.decLen()
- var re RawExt
- re.Tag = d.r.readn1()
- re.Data = d.r.readn(l)
- v = &re
- vt = valueTypeExt
- case simpleVdArray, simpleVdArray + 1, simpleVdArray + 2, simpleVdArray + 3, simpleVdArray + 4:
- vt = valueTypeArray
- decodeFurther = true
- case simpleVdMap, simpleVdMap + 1, simpleVdMap + 2, simpleVdMap + 3, simpleVdMap + 4:
- vt = valueTypeMap
- decodeFurther = true
- default:
- decErr("decodeNaked: Unrecognized d.vd: 0x%x", d.bd)
- }
-
- if !decodeFurther {
- d.bdRead = false
- }
- return
-}
-
-//------------------------------------
-
-// SimpleHandle is a Handle for a very simple encoding format.
-//
-// simple is a simplistic codec similar to binc, but not as compact.
-// - Encoding of a value is always preceded by the descriptor byte (bd)
-// - True, false, nil are encoded fully in 1 byte (the descriptor)
-// - Integers (intXXX, uintXXX) are encoded in 1, 2, 4 or 8 bytes (plus a descriptor byte).
-// There are positive (uintXXX and intXXX >= 0) and negative (intXXX < 0) integers.
-// - Floats are encoded in 4 or 8 bytes (plus a descriptor byte)
-// - Length of containers (strings, bytes, array, map, extensions)
-// is encoded in 0, 1, 2, 4 or 8 bytes.
-// Zero-length containers have no length encoded.
-// For others, the number of length bytes is given by pow(2, (bd%8)-1)
-// - maps are encoded as [bd] [length] [[key][value]]...
-// - arrays are encoded as [bd] [length] [value]...
-// - extensions are encoded as [bd] [length] [tag] [byte]...
-// - strings/bytearrays are encoded as [bd] [length] [byte]...
-//
-// The full spec will be published soon.
-type SimpleHandle struct {
- BasicHandle
-}
-
-func (h *SimpleHandle) newEncDriver(w encWriter) encDriver {
- return &simpleEncDriver{w: w, h: h}
-}
-
-func (h *SimpleHandle) newDecDriver(r decReader) decDriver {
- return &simpleDecDriver{r: r, h: h}
-}
-
-func (_ *SimpleHandle) writeExt() bool {
- return true
-}
-
-func (h *SimpleHandle) getBasicHandle() *BasicHandle {
- return &h.BasicHandle
-}
-
-var _ decDriver = (*simpleDecDriver)(nil)
-var _ encDriver = (*simpleEncDriver)(nil)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/time.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/time.go
deleted file mode 100644
index c86d6532..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-msgpack/codec/time.go
+++ /dev/null
@@ -1,193 +0,0 @@
-// Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
-// Use of this source code is governed by a BSD-style license found in the LICENSE file.
-
-package codec
-
-import (
- "time"
-)
-
-var (
- timeDigits = [...]byte{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}
-)
-
-// EncodeTime encodes a time.Time as a []byte, including
-// information on the instant in time and UTC offset.
-//
-// Format Description
-//
-// A timestamp is composed of 3 components:
-//
-// - secs: signed integer representing seconds since unix epoch
-// - nsecs: unsigned integer representing fractional seconds as a
-// nanosecond offset within secs, in the range 0 <= nsecs < 1e9
-// - tz: signed integer representing timezone offset in minutes east of UTC,
-// and a dst (daylight savings time) flag
-//
-// When encoding a timestamp, the first byte is the descriptor, which
-// defines which components are encoded and how many bytes are used to
-// encode secs and nsecs components. *If secs/nsecs is 0 or tz is UTC, it
-// is not encoded in the byte array explicitly*.
-//
-// Descriptor 8 bits are of the form `A B C DDD EE`:
-// A: Is secs component encoded? 1 = true
-// B: Is nsecs component encoded? 1 = true
-// C: Is tz component encoded? 1 = true
-// DDD: Number of extra bytes for secs (range 0-7).
-// If A = 1, secs encoded in DDD+1 bytes.
-// If A = 0, secs is not encoded, and is assumed to be 0.
-// If A = 1, then we need at least 1 byte to encode secs.
-// DDD says the number of extra bytes beyond that 1.
-// E.g. if DDD=0, then secs is represented in 1 byte.
-// if DDD=2, then secs is represented in 3 bytes.
-// EE: Number of extra bytes for nsecs (range 0-3).
-// If B = 1, nsecs encoded in EE+1 bytes (similar to secs/DDD above)
-//
-// Following the descriptor bytes, subsequent bytes are:
-//
-// secs component encoded in `DDD + 1` bytes (if A == 1)
-// nsecs component encoded in `EE + 1` bytes (if B == 1)
-// tz component encoded in 2 bytes (if C == 1)
-//
-// secs and nsecs components are integers encoded in a BigEndian
-// 2-complement encoding format.
-//
-// tz component is encoded as 2 bytes (16 bits). Most significant bit 15 to
-// Least significant bit 0 are described below:
-//
-// Timezone offset has a range of -12:00 to +14:00 (ie -720 to +840 minutes).
-// Bit 15 = have_dst: set to 1 if we set the dst flag.
-// Bit 14 = dst_on: set to 1 if dst is in effect at the time, or 0 if not.
-// Bits 13..0 = timezone offset in minutes. It is a signed integer in Big Endian format.
-//
-func encodeTime(t time.Time) []byte {
- //t := rv.Interface().(time.Time)
- tsecs, tnsecs := t.Unix(), t.Nanosecond()
- var (
- bd byte
- btmp [8]byte
- bs [16]byte
- i int = 1
- )
- l := t.Location()
- if l == time.UTC {
- l = nil
- }
- if tsecs != 0 {
- bd = bd | 0x80
- bigen.PutUint64(btmp[:], uint64(tsecs))
- f := pruneSignExt(btmp[:], tsecs >= 0)
- bd = bd | (byte(7-f) << 2)
- copy(bs[i:], btmp[f:])
- i = i + (8 - f)
- }
- if tnsecs != 0 {
- bd = bd | 0x40
- bigen.PutUint32(btmp[:4], uint32(tnsecs))
- f := pruneSignExt(btmp[:4], true)
- bd = bd | byte(3-f)
- copy(bs[i:], btmp[f:4])
- i = i + (4 - f)
- }
- if l != nil {
- bd = bd | 0x20
- // Note that Go Libs do not give access to dst flag.
- _, zoneOffset := t.Zone()
- //zoneName, zoneOffset := t.Zone()
- zoneOffset /= 60
- z := uint16(zoneOffset)
- bigen.PutUint16(btmp[:2], z)
- // clear dst flags
- bs[i] = btmp[0] & 0x3f
- bs[i+1] = btmp[1]
- i = i + 2
- }
- bs[0] = bd
- return bs[0:i]
-}
-
-// DecodeTime decodes a []byte into a time.Time.
-func decodeTime(bs []byte) (tt time.Time, err error) {
- bd := bs[0]
- var (
- tsec int64
- tnsec uint32
- tz uint16
- i byte = 1
- i2 byte
- n byte
- )
- if bd&(1<<7) != 0 {
- var btmp [8]byte
- n = ((bd >> 2) & 0x7) + 1
- i2 = i + n
- copy(btmp[8-n:], bs[i:i2])
- //if first bit of bs[i] is set, then fill btmp[0..8-n] with 0xff (ie sign extend it)
- if bs[i]&(1<<7) != 0 {
- copy(btmp[0:8-n], bsAll0xff)
- //for j,k := byte(0), 8-n; j < k; j++ { btmp[j] = 0xff }
- }
- i = i2
- tsec = int64(bigen.Uint64(btmp[:]))
- }
- if bd&(1<<6) != 0 {
- var btmp [4]byte
- n = (bd & 0x3) + 1
- i2 = i + n
- copy(btmp[4-n:], bs[i:i2])
- i = i2
- tnsec = bigen.Uint32(btmp[:])
- }
- if bd&(1<<5) == 0 {
- tt = time.Unix(tsec, int64(tnsec)).UTC()
- return
- }
-	// In stdlib time.Parse, when a date is parsed without a zone name, it uses "" as zone name.
-	// However, we need a name here, so it can be shown when the time is printed.
-	// The zone name is in the form: UTC-08:00.
- // Note that Go Libs do not give access to dst flag, so we ignore dst bits
-
- i2 = i + 2
- tz = bigen.Uint16(bs[i:i2])
- i = i2
- // sign extend sign bit into top 2 MSB (which were dst bits):
- if tz&(1<<13) == 0 { // positive
- tz = tz & 0x3fff //clear 2 MSBs: dst bits
- } else { // negative
- tz = tz | 0xc000 //set 2 MSBs: dst bits
- //tzname[3] = '-' (TODO: verify. this works here)
- }
- tzint := int16(tz)
- if tzint == 0 {
- tt = time.Unix(tsec, int64(tnsec)).UTC()
- } else {
- // For Go Time, do not use a descriptive timezone.
- // It's unnecessary, and makes it harder to do a reflect.DeepEqual.
- // The Offset already tells what the offset should be, if not on UTC and unknown zone name.
- // var zoneName = timeLocUTCName(tzint)
- tt = time.Unix(tsec, int64(tnsec)).In(time.FixedZone("", int(tzint)*60))
- }
- return
-}
-
-func timeLocUTCName(tzint int16) string {
- if tzint == 0 {
- return "UTC"
- }
- var tzname = []byte("UTC+00:00")
- //tzname := fmt.Sprintf("UTC%s%02d:%02d", tzsign, tz/60, tz%60) //perf issue using Sprintf. inline below.
- //tzhr, tzmin := tz/60, tz%60 //faster if u convert to int first
- var tzhr, tzmin int16
- if tzint < 0 {
- tzname[3] = '-' // (TODO: verify. this works here)
- tzhr, tzmin = -tzint/60, (-tzint)%60
- } else {
- tzhr, tzmin = tzint/60, tzint%60
- }
- tzname[4] = timeDigits[tzhr/10]
- tzname[5] = timeDigits[tzhr%10]
- tzname[7] = timeDigits[tzmin/10]
- tzname[8] = timeDigits[tzmin%10]
- return string(tzname)
- //return time.FixedZone(string(tzname), int(tzint)*60)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/README.md
index 2058cfb6..78d354ed 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/README.md
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/README.md
@@ -1,10 +1,9 @@
# Go Plugin System over RPC
`go-plugin` is a Go (golang) plugin system over RPC. It is the plugin system
-that has been in use by HashiCorp tooling for over 3 years. While initially
-created for [Packer](https://www.packer.io), it has since been used by
-[Terraform](https://www.terraform.io) and [Otto](https://www.ottoproject.io),
-with plans to also use it for [Nomad](https://www.nomadproject.io) and
+that has been in use by HashiCorp tooling for over 4 years. While initially
+created for [Packer](https://www.packer.io), it is additionally in use by
+[Terraform](https://www.terraform.io), [Nomad](https://www.nomadproject.io), and
[Vault](https://www.vaultproject.io).
While the plugin system is over RPC, it is currently only designed to work
@@ -24,6 +23,11 @@ interface as if it were going to run in the same process. For a plugin user:
you just use and call functions on an interface as if it were in the same
process. This plugin system handles the communication in between.
+**Cross-language support.** Plugins can be written (and consumed) in
+almost every major language. This library supports serving plugins via
+[gRPC](http://www.grpc.io). gRPC-based plugins enable plugins to be written
+in any language.
+
**Complex arguments and return values are supported.** This library
provides APIs for handling complex arguments and return values such
as interfaces, `io.Reader/Writer`, etc. We do this by giving you a library
@@ -37,7 +41,10 @@ and the plugin can call back into the host process.
**Built-in Logging.** Any plugins that use the `log` standard library
will have log data automatically sent to the host process. The host
process will mirror this output prefixed with the path to the plugin
-binary. This makes debugging with plugins simple.
+binary. This makes debugging with plugins simple. If the host system
+uses [hclog](https://github.com/hashicorp/go-hclog) then the log data
+will be structured. If the plugin also uses hclog, logs from the plugin
+will be sent to the host hclog and be structured.
**Protocol Versioning.** A very basic "protocol version" is supported that
can be incremented to invalidate any previous plugins. This is useful when
@@ -62,13 +69,18 @@ This requires the host/plugin to know this is possible and daemonize
properly. `NewClient` takes a `ReattachConfig` to determine if and how to
reattach.
+**Cryptographically Secure Plugins.** Plugins can be verified with an expected
+checksum and RPC communications can be configured to use TLS. The host process
+must be properly secured to protect this configuration.
+
## Architecture
The HashiCorp plugin system works by launching subprocesses and communicating
-over RPC (using standard `net/rpc`). A single connection is made between
-any plugin and the host process, and we use a
-[connection multiplexing](https://github.com/hashicorp/yamux)
-library to multiplex any other connections on top.
+over RPC (using standard `net/rpc` or [gRPC](http://www.grpc.io)). A single
+connection is made between any plugin and the host process. For net/rpc-based
+plugins, we use a [connection multiplexing](https://github.com/hashicorp/yamux)
+library to multiplex any other connections on top. For gRPC-based plugins,
+the HTTP2 protocol handles multiplexing.
This architecture has a number of benefits:
@@ -76,8 +88,8 @@ This architecture has a number of benefits:
panic the plugin user.
* Plugins are very easy to write: just write a Go application and `go build`.
- Theoretically you could also use another language as long as it can
- communicate the Go `net/rpc` protocol but this hasn't yet been tried.
+ Or use any other language to write a gRPC server with a tiny amount of
+ boilerplate to support go-plugin.
* Plugins are very easy to install: just put the binary in a location where
the host will find it (depends on the host but this library also provides
@@ -85,8 +97,8 @@ This architecture has a number of benefits:
* Plugins can be relatively secure: The plugin only has access to the
interfaces and args given to it, not to the entire memory space of the
- process. More security features are planned (see the coming soon section
- below).
+ process. Additionally, go-plugin can communicate with the plugin over
+ TLS.
## Usage
@@ -97,10 +109,9 @@ high-level steps that must be done. Examples are available in the
1. Choose the interface(s) you want to expose for plugins.
2. For each interface, implement an implementation of that interface
- that communicates over an `*rpc.Client` (from the standard `net/rpc`
- package) for every function call. Likewise, implement the RPC server
- struct this communicates to which is then communicating to a real,
- concrete implementation.
+   that communicates over a `net/rpc` connection or over a
+   [gRPC](http://www.grpc.io) connection, or both. You'll have to implement
+   both a client and a server implementation.
3. Create a `Plugin` implementation that knows how to create the RPC
client/server for a given plugin type.
@@ -125,10 +136,6 @@ improvements we can make.
At this point in time, the roadmap for the plugin system is:
-**Cryptographically Secure Plugins.** We'll implement signing plugins
-and loading signed plugins in order to allow Vault to make use of multi-process
-in a secure way.
-
**Semantic Versioning.** Plugins will be able to implement a semantic version.
This plugin system will give host processes a system for constraining
versions. This is in addition to the protocol versioning already present
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/client.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/client.go
index 9f8a0f27..b912826b 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/client.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/client.go
@@ -2,8 +2,11 @@ package plugin
import (
"bufio"
+ "crypto/subtle"
+ "crypto/tls"
"errors"
"fmt"
+ "hash"
"io"
"io/ioutil"
"log"
@@ -17,6 +20,8 @@ import (
"sync/atomic"
"time"
"unicode"
+
+ hclog "github.com/hashicorp/go-hclog"
)
// If this is 1, then we've called CleanupClients. This can be used
@@ -35,6 +40,22 @@ var (
// ErrProcessNotFound is returned when a client is instantiated to
// reattach to an existing process and it isn't found.
ErrProcessNotFound = errors.New("Reattachment process not found")
+
+	// ErrChecksumsDoNotMatch is returned when the binary's checksum doesn't match
+ // the one provided in the SecureConfig.
+ ErrChecksumsDoNotMatch = errors.New("checksums did not match")
+
+	// ErrSecureConfigNoChecksum is returned when an empty checksum is provided to the
+ // SecureConfig.
+ ErrSecureConfigNoChecksum = errors.New("no checksum provided")
+
+	// ErrSecureConfigNoHash is returned when a nil Hash object is provided to the
+ // SecureConfig.
+ ErrSecureConfigNoHash = errors.New("no hash implementation provided")
+
+ // ErrSecureConfigAndReattach is returned when both Reattach and
+ // SecureConfig are set.
+ ErrSecureConfigAndReattach = errors.New("only one of Reattach or SecureConfig can be set")
)
// Client handles the lifecycle of a plugin application. It launches
@@ -55,7 +76,9 @@ type Client struct {
l sync.Mutex
address net.Addr
process *os.Process
- client *RPCClient
+ client ClientProtocol
+ protocol Protocol
+ logger hclog.Logger
}
// ClientConfig is the configuration used to initialize a new
@@ -79,6 +102,13 @@ type ClientConfig struct {
Cmd *exec.Cmd
Reattach *ReattachConfig
+ // SecureConfig is configuration for verifying the integrity of the
+ // executable. It can not be used with Reattach.
+ SecureConfig *SecureConfig
+
+ // TLSConfig is used to enable TLS on the RPC client.
+ TLSConfig *tls.Config
+
// Managed represents if the client should be managed by the
// plugin package or not. If true, then by calling CleanupClients,
// it will automatically be cleaned up. Otherwise, the client
@@ -109,14 +139,74 @@ type ClientConfig struct {
// sync any of these streams.
SyncStdout io.Writer
SyncStderr io.Writer
+
+ // AllowedProtocols is a list of allowed protocols. If this isn't set,
+ // then only netrpc is allowed. This is so that older go-plugin systems
+ // can show friendly errors if they see a plugin with an unknown
+ // protocol.
+ //
+ // By setting this, you can cause an error immediately on plugin start
+ // if an unsupported protocol is used with a good error message.
+ //
+ // If this isn't set at all (nil value), then only net/rpc is accepted.
+ // This is done for legacy reasons. You must explicitly opt-in to
+ // new protocols.
+ AllowedProtocols []Protocol
+
+	// Logger is the logger that the client will use. If none is provided,
+ // it will default to hclog's default logger.
+ Logger hclog.Logger
}
// ReattachConfig is used to configure a client to reattach to an
// already-running plugin process. You can retrieve this information by
// calling ReattachConfig on Client.
type ReattachConfig struct {
- Addr net.Addr
- Pid int
+ Protocol Protocol
+ Addr net.Addr
+ Pid int
+}
+
+// SecureConfig is used to configure a client to verify the integrity of an
+// executable before running it. It does this by verifying that the file's
+// checksum matches the expected checksum. Hash specifies the hashing method
+// to use when checksumming the file. The configuration is verified by the
+// client by calling the SecureConfig.Check() function.
+//
+// The host process should ensure the checksum was provided by a trusted and
+// authoritative source. The binary should be installed in such a way that it
+// can not be modified by an unauthorized user between the time of this check
+// and the time of execution.
+type SecureConfig struct {
+ Checksum []byte
+ Hash hash.Hash
+}
+
+// Check takes the filepath to an executable and returns true if the checksum of
+// the file matches the checksum provided in the SecureConfig.
+func (s *SecureConfig) Check(filePath string) (bool, error) {
+ if len(s.Checksum) == 0 {
+ return false, ErrSecureConfigNoChecksum
+ }
+
+ if s.Hash == nil {
+ return false, ErrSecureConfigNoHash
+ }
+
+ file, err := os.Open(filePath)
+ if err != nil {
+ return false, err
+ }
+ defer file.Close()
+
+ _, err = io.Copy(s.Hash, file)
+ if err != nil {
+ return false, err
+ }
+
+ sum := s.Hash.Sum(nil)
+
+ return subtle.ConstantTimeCompare(sum, s.Checksum) == 1, nil
}
// This makes sure all the managed subprocesses are killed and properly
@@ -174,7 +264,22 @@ func NewClient(config *ClientConfig) (c *Client) {
config.SyncStderr = ioutil.Discard
}
- c = &Client{config: config}
+ if config.AllowedProtocols == nil {
+ config.AllowedProtocols = []Protocol{ProtocolNetRPC}
+ }
+
+ if config.Logger == nil {
+ config.Logger = hclog.New(&hclog.LoggerOptions{
+ Output: hclog.DefaultOutput,
+ Level: hclog.Trace,
+ Name: "plugin",
+ })
+ }
+
+ c = &Client{
+ config: config,
+ logger: config.Logger,
+ }
if config.Managed {
managedClientsLock.Lock()
managedClients = append(managedClients, c)
@@ -184,11 +289,11 @@ func NewClient(config *ClientConfig) (c *Client) {
return
}
-// Client returns an RPC client for the plugin.
+// Client returns the protocol client for this connection.
//
-// Subsequent calls to this will return the same RPC client.
-func (c *Client) Client() (*RPCClient, error) {
- addr, err := c.Start()
+// Subsequent calls to this will return the same client.
+func (c *Client) Client() (ClientProtocol, error) {
+ _, err := c.Start()
if err != nil {
return nil, err
}
@@ -200,29 +305,18 @@ func (c *Client) Client() (*RPCClient, error) {
return c.client, nil
}
- // Connect to the client
- conn, err := net.Dial(addr.Network(), addr.String())
- if err != nil {
- return nil, err
- }
- if tcpConn, ok := conn.(*net.TCPConn); ok {
- // Make sure to set keep alive so that the connection doesn't die
- tcpConn.SetKeepAlive(true)
- }
+ switch c.protocol {
+ case ProtocolNetRPC:
+ c.client, err = newRPCClient(c)
- // Create the actual RPC client
- c.client, err = NewRPCClient(conn, c.config.Plugins)
- if err != nil {
- conn.Close()
- return nil, err
+ case ProtocolGRPC:
+ c.client, err = newGRPCClient(c)
+
+ default:
+ return nil, fmt.Errorf("unknown server protocol: %s", c.protocol)
}
- // Begin the stream syncing so that stdin, out, err work properly
- err = c.client.SyncStreams(
- c.config.SyncStdout,
- c.config.SyncStderr)
if err != nil {
- c.client.Close()
c.client = nil
return nil, err
}
@@ -274,8 +368,7 @@ func (c *Client) Kill() {
if err != nil {
// If there was an error just log it. We're going to force
// kill in a moment anyways.
- log.Printf(
- "[WARN] plugin: error closing client during Kill: %s", err)
+ c.logger.Warn("error closing client during Kill", "err", err)
}
}
}
@@ -318,9 +411,14 @@ func (c *Client) Start() (addr net.Addr, err error) {
{
cmdSet := c.config.Cmd != nil
attachSet := c.config.Reattach != nil
+ secureSet := c.config.SecureConfig != nil
if cmdSet == attachSet {
return nil, fmt.Errorf("Only one of Cmd or Reattach must be set")
}
+
+ if secureSet && attachSet {
+ return nil, ErrSecureConfigAndReattach
+ }
}
// Create the logging channel for when we kill
@@ -350,7 +448,7 @@ func (c *Client) Start() (addr net.Addr, err error) {
pidWait(pid)
// Log so we can see it
- log.Printf("[DEBUG] plugin: reattached plugin process exited\n")
+ c.logger.Debug("reattached plugin process exited")
// Mark it
c.l.Lock()
@@ -364,6 +462,11 @@ func (c *Client) Start() (addr net.Addr, err error) {
// Set the address and process
c.address = c.config.Reattach.Addr
c.process = p
+ c.protocol = c.config.Reattach.Protocol
+ if c.protocol == "" {
+ // Default the protocol to net/rpc for backwards compatibility
+ c.protocol = ProtocolNetRPC
+ }
return c.address, nil
}
@@ -384,7 +487,15 @@ func (c *Client) Start() (addr net.Addr, err error) {
cmd.Stderr = stderr_w
cmd.Stdout = stdout_w
- log.Printf("[DEBUG] plugin: starting plugin: %s %#v", cmd.Path, cmd.Args)
+ if c.config.SecureConfig != nil {
+ if ok, err := c.config.SecureConfig.Check(cmd.Path); err != nil {
+ return nil, fmt.Errorf("error verifying checksum: %s", err)
+ } else if !ok {
+ return nil, ErrChecksumsDoNotMatch
+ }
+ }
+
+ c.logger.Debug("starting plugin", "path", cmd.Path, "args", cmd.Args)
err = cmd.Start()
if err != nil {
return
@@ -418,7 +529,7 @@ func (c *Client) Start() (addr net.Addr, err error) {
cmd.Wait()
// Log and make sure to flush the logs write away
- log.Printf("[DEBUG] plugin: %s: plugin process exited\n", cmd.Path)
+ c.logger.Debug("plugin process exited", "path", cmd.Path)
os.Stderr.Sync()
// Mark that we exited
@@ -465,7 +576,7 @@ func (c *Client) Start() (addr net.Addr, err error) {
timeout := time.After(c.config.StartTimeout)
// Start looking for the address
- log.Printf("[DEBUG] plugin: waiting for RPC address for: %s", cmd.Path)
+ c.logger.Debug("waiting for RPC address", "path", cmd.Path)
select {
case <-timeout:
err = errors.New("timeout while waiting for plugin to start")
@@ -475,7 +586,7 @@ func (c *Client) Start() (addr net.Addr, err error) {
// Trim the line and split by "|" in order to get the parts of
// the output.
line := strings.TrimSpace(string(lineBytes))
- parts := strings.SplitN(line, "|", 4)
+ parts := strings.SplitN(line, "|", 6)
if len(parts) < 4 {
err = fmt.Errorf(
"Unrecognized remote plugin message: %s\n\n"+
@@ -525,6 +636,27 @@ func (c *Client) Start() (addr net.Addr, err error) {
default:
err = fmt.Errorf("Unknown address type: %s", parts[3])
}
+
+ // If we have a server type, then record that. We default to net/rpc
+ // for backwards compatibility.
+ c.protocol = ProtocolNetRPC
+ if len(parts) >= 5 {
+ c.protocol = Protocol(parts[4])
+ }
+
+ found := false
+ for _, p := range c.config.AllowedProtocols {
+ if p == c.protocol {
+ found = true
+ break
+ }
+ }
+ if !found {
+ err = fmt.Errorf("Unsupported plugin protocol %q. Supported: %v",
+ c.protocol, c.config.AllowedProtocols)
+ return
+ }
+
}
c.address = addr
@@ -555,9 +687,46 @@ func (c *Client) ReattachConfig() *ReattachConfig {
}
return &ReattachConfig{
- Addr: c.address,
- Pid: c.config.Cmd.Process.Pid,
+ Protocol: c.protocol,
+ Addr: c.address,
+ Pid: c.config.Cmd.Process.Pid,
+ }
+}
+
+// Protocol returns the protocol of the server on the remote end. This will
+// start the plugin process if it isn't already started. Errors from
+// starting the plugin are suppressed and ProtocolInvalid is returned. It
+// is recommended you call Start explicitly before calling Protocol to ensure
+// no errors occur.
+func (c *Client) Protocol() Protocol {
+ _, err := c.Start()
+ if err != nil {
+ return ProtocolInvalid
+ }
+
+ return c.protocol
+}
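An illustrative reattach sketch using the new Protocol field; addr and pid would come from a ReattachConfig persisted by an earlier host process, and the handshake config and plugin map are placeholders:

    package host

    import (
        "net"

        plugin "github.com/hashicorp/go-plugin"
    )

    func reattach(addr net.Addr, pid int) (plugin.ClientProtocol, error) {
        client := plugin.NewClient(&plugin.ClientConfig{
            HandshakeConfig:  plugin.HandshakeConfig{ProtocolVersion: 1},
            Plugins:          map[string]plugin.Plugin{},
            AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
            Reattach: &plugin.ReattachConfig{
                // An empty Protocol falls back to net/rpc, so configs
                // persisted before this change keep working.
                Protocol: plugin.ProtocolGRPC,
                Addr:     addr,
                Pid:      pid,
            },
        })
        return client.Client()
    }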
+
+// dialer is compatible with grpc.WithDialer and creates the connection
+// to the plugin.
+func (c *Client) dialer(_ string, timeout time.Duration) (net.Conn, error) {
+ // Connect to the client
+ conn, err := net.Dial(c.address.Network(), c.address.String())
+ if err != nil {
+ return nil, err
+ }
+ if tcpConn, ok := conn.(*net.TCPConn); ok {
+ // Make sure to set keep alive so that the connection doesn't die
+ tcpConn.SetKeepAlive(true)
}
+
+ // If we have a TLS config we wrap our connection. We only do this
+ // for net/rpc since gRPC uses its own mechanism for TLS.
+ if c.protocol == ProtocolNetRPC && c.config.TLSConfig != nil {
+ conn = tls.Client(conn, c.config.TLSConfig)
+ }
+
+ return conn, nil
}
func (c *Client) logStderr(r io.Reader) {
@@ -566,9 +735,31 @@ func (c *Client) logStderr(r io.Reader) {
line, err := bufR.ReadString('\n')
if line != "" {
c.config.Stderr.Write([]byte(line))
-
line = strings.TrimRightFunc(line, unicode.IsSpace)
- log.Printf("[DEBUG] plugin: %s: %s", filepath.Base(c.config.Cmd.Path), line)
+
+ l := c.logger.Named(filepath.Base(c.config.Cmd.Path))
+
+ entry, err := parseJSON(line)
+ // If output is not JSON format, print directly to Debug
+ if err != nil {
+ l.Debug(line)
+ } else {
+ out := flattenKVPairs(entry.KVPairs)
+
+ l = l.With("timestamp", entry.Timestamp.Format(hclog.TimeFormat))
+ switch hclog.LevelFromString(entry.Level) {
+ case hclog.Trace:
+ l.Trace(entry.Message, out...)
+ case hclog.Debug:
+ l.Debug(entry.Message, out...)
+ case hclog.Info:
+ l.Info(entry.Message, out...)
+ case hclog.Warn:
+ l.Warn(entry.Message, out...)
+ case hclog.Error:
+ l.Error(entry.Message, out...)
+ }
+ }
}
if err == io.EOF {
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_client.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_client.go
new file mode 100644
index 00000000..3bcf95ef
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_client.go
@@ -0,0 +1,83 @@
+package plugin
+
+import (
+ "fmt"
+
+ "golang.org/x/net/context"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials"
+ "google.golang.org/grpc/health/grpc_health_v1"
+)
+
+// newGRPCClient creates a new GRPCClient. The Client argument is expected
+// to have been started successfully already, with its lock held.
+func newGRPCClient(c *Client) (*GRPCClient, error) {
+ // Build dialing options.
+ opts := make([]grpc.DialOption, 0, 5)
+
+ // We use a custom dialer so that we can connect over unix domain sockets
+ opts = append(opts, grpc.WithDialer(c.dialer))
+
+ // go-plugin expects to block the connection
+ opts = append(opts, grpc.WithBlock())
+
+ // Fail right away
+ opts = append(opts, grpc.FailOnNonTempDialError(true))
+
+ // If we have no TLS configuration set, we need to explicitly tell grpc
+ // that we're connecting with an insecure connection.
+ if c.config.TLSConfig == nil {
+ opts = append(opts, grpc.WithInsecure())
+ } else {
+ opts = append(opts, grpc.WithTransportCredentials(
+ credentials.NewTLS(c.config.TLSConfig)))
+ }
+
+ // Connect. Note the first parameter is unused because we use a custom
+ // dialer that has the state to see the address.
+ conn, err := grpc.Dial("unused", opts...)
+ if err != nil {
+ return nil, err
+ }
+
+ return &GRPCClient{
+ Conn: conn,
+ Plugins: c.config.Plugins,
+ }, nil
+}
+
+// GRPCClient connects to a GRPCServer over gRPC to dispense plugin types.
+type GRPCClient struct {
+ Conn *grpc.ClientConn
+ Plugins map[string]Plugin
+}
+
+// ClientProtocol impl.
+func (c *GRPCClient) Close() error {
+ return c.Conn.Close()
+}
+
+// ClientProtocol impl.
+func (c *GRPCClient) Dispense(name string) (interface{}, error) {
+ raw, ok := c.Plugins[name]
+ if !ok {
+ return nil, fmt.Errorf("unknown plugin type: %s", name)
+ }
+
+ p, ok := raw.(GRPCPlugin)
+ if !ok {
+ return nil, fmt.Errorf("plugin %q doesn't support gRPC", name)
+ }
+
+ return p.GRPCClient(c.Conn)
+}
+
+// ClientProtocol impl.
+func (c *GRPCClient) Ping() error {
+ client := grpc_health_v1.NewHealthClient(c.Conn)
+ _, err := client.Check(context.Background(), &grpc_health_v1.HealthCheckRequest{
+ Service: GRPCServiceName,
+ })
+
+ return err
+}
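For context, a sketch of the protocol-agnostic dispense flow on the host side: Client() negotiates net/rpc or gRPC under the hood and returns a ClientProtocol either way. The plugin name "kv" is a placeholder:

    package host

    import plugin "github.com/hashicorp/go-plugin"

    func dispense(client *plugin.Client) (interface{}, error) {
        proto, err := client.Client()
        if err != nil {
            return nil, err
        }
        return proto.Dispense("kv")
    }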
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_server.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_server.go
new file mode 100644
index 00000000..177a0cdd
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/grpc_server.go
@@ -0,0 +1,115 @@
+package plugin
+
+import (
+ "bytes"
+ "crypto/tls"
+ "encoding/json"
+ "fmt"
+ "io"
+ "net"
+
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials"
+ "google.golang.org/grpc/health"
+ "google.golang.org/grpc/health/grpc_health_v1"
+)
+
+// GRPCServiceName is the name of the service that the health check should
+// return as passing.
+const GRPCServiceName = "plugin"
+
+// DefaultGRPCServer can be used with the "GRPCServer" field for Server
+// as a default factory method to create a gRPC server with no extra options.
+func DefaultGRPCServer(opts []grpc.ServerOption) *grpc.Server {
+ return grpc.NewServer(opts...)
+}
+
+// GRPCServer is a ServerType implementation that serves plugins over
+// gRPC. This allows plugins to easily be written for other languages.
+//
+// The GRPCServer outputs a custom configuration as a base64-encoded
+// JSON structure represented by the GRPCServerConfig structure.
+type GRPCServer struct {
+ // Plugins are the list of plugins to serve.
+ Plugins map[string]Plugin
+
+ // Server is the actual server that will accept connections. This
+ // will be used for plugin registration as well.
+ Server func([]grpc.ServerOption) *grpc.Server
+
+ // TLS should be the TLS configuration if available. If this is nil,
+ // the connection will not have transport security.
+ TLS *tls.Config
+
+ // DoneCh is the channel that is closed when this server has exited.
+ DoneCh chan struct{}
+
+ // Stdout/Stderr are the readers for stdout/stderr that will be copied
+ // to the stdout/stderr connection that is exposed to the host.
+ Stdout io.Reader
+ Stderr io.Reader
+
+ config GRPCServerConfig
+ server *grpc.Server
+}
+
+// ServerProtocol impl.
+func (s *GRPCServer) Init() error {
+ // Create our server
+ var opts []grpc.ServerOption
+ if s.TLS != nil {
+ opts = append(opts, grpc.Creds(credentials.NewTLS(s.TLS)))
+ }
+ s.server = s.Server(opts)
+
+ // Register the health service
+ healthCheck := health.NewServer()
+ healthCheck.SetServingStatus(
+ GRPCServiceName, grpc_health_v1.HealthCheckResponse_SERVING)
+ grpc_health_v1.RegisterHealthServer(s.server, healthCheck)
+
+ // Register all our plugins onto the gRPC server.
+ for k, raw := range s.Plugins {
+ p, ok := raw.(GRPCPlugin)
+ if !ok {
+ return fmt.Errorf("%q is not a GRPC-compatibile plugin", k)
+ }
+
+ if err := p.GRPCServer(s.server); err != nil {
+ return fmt.Errorf("error registring %q: %s", k, err)
+ }
+ }
+
+ return nil
+}
+
+// Config returns the GRPCServerConfig encoded as JSON. Serve base64-encodes
+// this value before writing it to the handshake line.
+func (s *GRPCServer) Config() string {
+ // Create a buffer that will contain our final contents
+ var buf bytes.Buffer
+
+ // JSON-encode the config; Serve base64-encodes this value before
+ // appending it to the handshake output.
+ if err := json.NewEncoder(&buf).Encode(s.config); err != nil {
+ // We panic since this shouldn't happen under any scenario. We
+ // carefully control the structure being encoded here and it should
+ // always be successful.
+ panic(err)
+ }
+
+ return buf.String()
+}
+
+func (s *GRPCServer) Serve(lis net.Listener) {
+ // Start serving in a goroutine
+ go s.server.Serve(lis)
+
+ // Wait until graceful completion
+ <-s.DoneCh
+}
+
+// GRPCServerConfig is the extra configuration passed along for consumers
+// to facilitate using GRPC plugins.
+type GRPCServerConfig struct {
+ StdoutAddr string `json:"stdout_addr"`
+ StderrAddr string `json:"stderr_addr"`
+}
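A minimal plugin-side sketch of serving over gRPC with the types above; setting GRPCServer is what switches the handshake to the grpc protocol, and the handshake values and plugin map are placeholders:

    package main

    import plugin "github.com/hashicorp/go-plugin"

    func main() {
        plugin.Serve(&plugin.ServeConfig{
            HandshakeConfig: plugin.HandshakeConfig{
                ProtocolVersion:  1,
                MagicCookieKey:   "EXAMPLE_PLUGIN",
                MagicCookieValue: "example",
            },
            Plugins:    map[string]plugin.Plugin{}, // placeholder
            GRPCServer: plugin.DefaultGRPCServer,
        })
    }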
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/log_entry.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/log_entry.go
new file mode 100644
index 00000000..2996c14c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/log_entry.go
@@ -0,0 +1,73 @@
+package plugin
+
+import (
+ "encoding/json"
+ "time"
+)
+
+// logEntry is the JSON payload that the plugin sends to the host over stderr
+type logEntry struct {
+ Message string `json:"@message"`
+ Level string `json:"@level"`
+ Timestamp time.Time `json:"timestamp"`
+ KVPairs []*logEntryKV `json:"kv_pairs"`
+}
+
+// logEntryKV is a key/value pair within the logEntry payload
+type logEntryKV struct {
+ Key string `json:"key"`
+ Value interface{} `json:"value"`
+}
+
+// flattenKVPairs is used to flatten KVPair slice into []interface{}
+// for hclog consumption.
+func flattenKVPairs(kvs []*logEntryKV) []interface{} {
+ var result []interface{}
+ for _, kv := range kvs {
+ result = append(result, kv.Key)
+ result = append(result, kv.Value)
+ }
+
+ return result
+}
+
+// parseJSON handles parsing JSON output
+func parseJSON(input string) (*logEntry, error) {
+ var raw map[string]interface{}
+ entry := &logEntry{}
+
+ err := json.Unmarshal([]byte(input), &raw)
+ if err != nil {
+ return nil, err
+ }
+
+ // Parse hclog-specific objects
+ if v, ok := raw["@message"]; ok {
+ entry.Message = v.(string)
+ delete(raw, "@message")
+ }
+
+ if v, ok := raw["@level"]; ok {
+ entry.Level = v.(string)
+ delete(raw, "@level")
+ }
+
+ if v, ok := raw["@timestamp"]; ok {
+ t, err := time.Parse("2006-01-02T15:04:05.000000Z07:00", v.(string))
+ if err != nil {
+ return nil, err
+ }
+ entry.Timestamp = t
+ delete(raw, "@timestamp")
+ }
+
+ // Parse dynamic KV args from the hclog payload.
+ for k, v := range raw {
+ entry.KVPairs = append(entry.KVPairs, &logEntryKV{
+ Key: k,
+ Value: v,
+ })
+ }
+
+ return entry, nil
+}
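A package-internal sketch (not part of this diff) of the hclog JSON line format that logStderr feeds into parseJSON; the sample line is made up:

    package plugin

    import "fmt"

    func exampleParse() {
        line := `{"@level":"debug","@message":"dialing","@timestamp":"2017-06-01T12:00:00.000000Z","addr":"127.0.0.1:1234"}`
        entry, err := parseJSON(line)
        if err != nil {
            // Non-JSON output is logged verbatim at Debug level.
            fmt.Println(line)
            return
        }
        // entry.Level == "debug", entry.Message == "dialing", and the
        // leftover "addr" key lands in entry.KVPairs.
        fmt.Println(entry.Level, entry.Message, flattenKVPairs(entry.KVPairs))
    }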
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/plugin.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/plugin.go
index 37c8fd65..6b7bdd1c 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/plugin.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/plugin.go
@@ -9,7 +9,10 @@
package plugin
import (
+ "errors"
"net/rpc"
+
+ "google.golang.org/grpc"
)
// Plugin is the interface that is implemented to serve/connect to an
@@ -23,3 +26,31 @@ type Plugin interface {
// serving that communicates to the server end of the plugin.
Client(*MuxBroker, *rpc.Client) (interface{}, error)
}
+
+// GRPCPlugin is the interface that is implemented to serve/connect to
+// a plugin over gRPC.
+type GRPCPlugin interface {
+ // GRPCServer should register this plugin for serving with the
+ // given GRPCServer. Unlike Plugin.Server, this is only called once
+ // since gRPC plugins serve singletons.
+ GRPCServer(*grpc.Server) error
+
+ // GRPCClient should return the interface implementation for the plugin
+ // you're serving via gRPC.
+ GRPCClient(*grpc.ClientConn) (interface{}, error)
+}
+
+// NetRPCUnsupportedPlugin implements Plugin but returns errors for the
+// Server and Client functions. This will effectively disable support for
+// net/rpc based plugins.
+//
+// This struct can be embedded in your struct.
+type NetRPCUnsupportedPlugin struct{}
+
+func (p NetRPCUnsupportedPlugin) Server(*MuxBroker) (interface{}, error) {
+ return nil, errors.New("net/rpc plugin protocol not supported")
+}
+
+func (p NetRPCUnsupportedPlugin) Client(*MuxBroker, *rpc.Client) (interface{}, error) {
+ return nil, errors.New("net/rpc plugin protocol not supported")
+}
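A hypothetical sketch of a gRPC-only plugin: embedding NetRPCUnsupportedPlugin satisfies the base Plugin interface while rejecting net/rpc clients. registerKVServer and newKVClient are stubs standing in for protoc-generated code:

    package kvplugin

    import (
        "google.golang.org/grpc"

        plugin "github.com/hashicorp/go-plugin"
    )

    type KVGRPCPlugin struct {
        plugin.NetRPCUnsupportedPlugin
    }

    func (p *KVGRPCPlugin) GRPCServer(s *grpc.Server) error {
        registerKVServer(s) // hypothetical generated RegisterXxxServer
        return nil
    }

    func (p *KVGRPCPlugin) GRPCClient(c *grpc.ClientConn) (interface{}, error) {
        return newKVClient(c), nil // hypothetical generated NewXxxClient
    }

    // Stubs for illustration only.
    func registerKVServer(s *grpc.Server)            {}
    func newKVClient(c *grpc.ClientConn) interface{} { return nil }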
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/protocol.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/protocol.go
new file mode 100644
index 00000000..0cfc19e5
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/protocol.go
@@ -0,0 +1,45 @@
+package plugin
+
+import (
+ "io"
+ "net"
+)
+
+// Protocol is an enum representing the types of protocols.
+type Protocol string
+
+const (
+ ProtocolInvalid Protocol = ""
+ ProtocolNetRPC Protocol = "netrpc"
+ ProtocolGRPC Protocol = "grpc"
+)
+
+// ServerProtocol is an interface that must be implemented for new plugin
+// protocols to be servers.
+type ServerProtocol interface {
+ // Init is called once to configure and initialize the protocol, but
+ // not start listening. This is the point at which all validation should
+ // be done and errors returned.
+ Init() error
+
+ // Config is extra configuration to be output to stdout. This will
+ // be automatically base64 encoded to ensure it can be parsed properly.
+ // This can be an empty string if additional configuration is not needed.
+ Config() string
+
+ // Serve is called to serve connections on the given listener. This should
+ // continue until the listener is closed.
+ Serve(net.Listener)
+}
+
+// ClientProtocol is an interface that must be implemented for new plugin
+// protocols to be clients.
+type ClientProtocol interface {
+ io.Closer
+
+ // Dispense dispenses a new instance of the plugin with the given name.
+ Dispense(string) (interface{}, error)
+
+ // Ping checks that the client connection is still healthy.
+ Ping() error
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_client.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_client.go
index 29f9bf06..f30a4b1d 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_client.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_client.go
@@ -1,6 +1,7 @@
package plugin
import (
+ "crypto/tls"
"fmt"
"io"
"net"
@@ -19,6 +20,42 @@ type RPCClient struct {
stdout, stderr net.Conn
}
+// newRPCClient creates a new RPCClient. The Client argument is expected
+// to have been started successfully already, with its lock held.
+func newRPCClient(c *Client) (*RPCClient, error) {
+ // Connect to the client
+ conn, err := net.Dial(c.address.Network(), c.address.String())
+ if err != nil {
+ return nil, err
+ }
+ if tcpConn, ok := conn.(*net.TCPConn); ok {
+ // Make sure to set keep alive so that the connection doesn't die
+ tcpConn.SetKeepAlive(true)
+ }
+
+ if c.config.TLSConfig != nil {
+ conn = tls.Client(conn, c.config.TLSConfig)
+ }
+
+ // Create the actual RPC client
+ result, err := NewRPCClient(conn, c.config.Plugins)
+ if err != nil {
+ conn.Close()
+ return nil, err
+ }
+
+ // Begin the stream syncing so that stdin, out, err work properly
+ err = result.SyncStreams(
+ c.config.SyncStdout,
+ c.config.SyncStderr)
+ if err != nil {
+ result.Close()
+ return nil, err
+ }
+
+ return result, nil
+}
+
// NewRPCClient creates a client from an already-open connection-like value.
// Dial is typically used instead.
func NewRPCClient(conn io.ReadWriteCloser, plugins map[string]Plugin) (*RPCClient, error) {
@@ -121,3 +158,13 @@ func (c *RPCClient) Dispense(name string) (interface{}, error) {
return p.Client(c.broker, rpc.NewClient(conn))
}
+
+// Ping pings the connection to ensure it is still alive.
+//
+// The error from the RPC call is returned verbatim so that you can inspect
+// it for further error analysis. Any error returned from here indicates
+// that the connection to the plugin is not healthy.
+func (c *RPCClient) Ping() error {
+ var empty struct{}
+ return c.control.Call("Control.Ping", true, &empty)
+}
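An illustrative liveness loop built on the new Ping method; the interval is arbitrary:

    package host

    import (
        "log"
        "time"

        plugin "github.com/hashicorp/go-plugin"
    )

    func watch(client *plugin.Client) {
        proto, err := client.Client()
        if err != nil {
            log.Fatal(err)
        }
        for range time.Tick(30 * time.Second) {
            if err := proto.Ping(); err != nil {
                log.Println("plugin unhealthy:", err)
                client.Kill()
                return
            }
        }
    }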
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_server.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_server.go
index 3984dc89..5bb18dd5 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/rpc_server.go
@@ -34,10 +34,14 @@ type RPCServer struct {
lock sync.Mutex
}
-// Accept accepts connections on a listener and serves requests for
-// each incoming connection. Accept blocks; the caller typically invokes
-// it in a go statement.
-func (s *RPCServer) Accept(lis net.Listener) {
+// ServerProtocol impl.
+func (s *RPCServer) Init() error { return nil }
+
+// ServerProtocol impl.
+func (s *RPCServer) Config() string { return "" }
+
+// ServerProtocol impl.
+func (s *RPCServer) Serve(lis net.Listener) {
for {
conn, err := lis.Accept()
if err != nil {
@@ -122,6 +126,14 @@ type controlServer struct {
server *RPCServer
}
+// Ping can be called to verify that the connection to the plugin (and
+// likely the plugin process itself) is still alive.
+func (c *controlServer) Ping(
+ null bool, response *struct{}) error {
+ *response = struct{}{}
+ return nil
+}
+
func (c *controlServer) Quit(
null bool, response *struct{}) error {
// End the server
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/server.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/server.go
index b5c5270a..e1543214 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/server.go
@@ -1,6 +1,8 @@
package plugin
import (
+ "crypto/tls"
+ "encoding/base64"
"errors"
"fmt"
"io/ioutil"
@@ -11,6 +13,10 @@ import (
"runtime"
"strconv"
"sync/atomic"
+
+ "github.com/hashicorp/go-hclog"
+
+ "google.golang.org/grpc"
)
// CoreProtocolVersion is the ProtocolVersion of the plugin system itself.
@@ -45,14 +51,37 @@ type ServeConfig struct {
// HandshakeConfig is the configuration that must match clients.
HandshakeConfig
+ // TLSProvider is a function that returns a configured tls.Config.
+ TLSProvider func() (*tls.Config, error)
+
// Plugins are the plugins that are served.
Plugins map[string]Plugin
+
+ // GRPCServer should be non-nil to enable serving the plugins over
+ // gRPC. This is a function to create the server when needed with the
+ // given server options. Any server options populated by go-plugin
+ // are for TLS, when it is configured. You may modify the input slice.
+ //
+ // Note that the grpc.Server will automatically be registered with
+ // the gRPC health checking service. This is not optional since go-plugin
+ // relies on this to implement Ping().
+ GRPCServer func([]grpc.ServerOption) *grpc.Server
+}
+
+// Protocol returns the protocol that this server should speak.
+func (c *ServeConfig) Protocol() Protocol {
+ result := ProtocolNetRPC
+ if c.GRPCServer != nil {
+ result = ProtocolGRPC
+ }
+
+ return result
}
// Serve serves the plugins given by ServeConfig.
//
// Serve doesn't return until the plugin is done being executed. Any
-// errors will be outputted to the log.
+// errors will be output to os.Stderr.
//
// This is the method that plugins should call in their main() functions.
func Serve(opts *ServeConfig) {
@@ -77,6 +106,13 @@ func Serve(opts *ServeConfig) {
// Logging goes to the original stderr
log.SetOutput(os.Stderr)
+ // internal logger to os.Stderr
+ logger := hclog.New(&hclog.LoggerOptions{
+ Level: hclog.Trace,
+ Output: os.Stderr,
+ JSONFormat: true,
+ })
+
// Create our new stdout, stderr files. These will override our built-in
// stdout/stderr so that it works across the stream boundary.
stdout_r, stdout_w, err := os.Pipe()
@@ -93,30 +129,86 @@ func Serve(opts *ServeConfig) {
// Register a listener so we can accept a connection
listener, err := serverListener()
if err != nil {
- log.Printf("[ERR] plugin: plugin init: %s", err)
+ logger.Error("plugin init error", "error", err)
return
}
- defer listener.Close()
+
+ // Close the listener on return. We wrap this in a func() on purpose
+ // because the "listener" reference may change to TLS.
+ defer func() {
+ listener.Close()
+ }()
+
+ var tlsConfig *tls.Config
+ if opts.TLSProvider != nil {
+ tlsConfig, err = opts.TLSProvider()
+ if err != nil {
+ logger.Error("plugin tls init", "error", err)
+ return
+ }
+ }
// Create the channel to tell us when we're done
doneCh := make(chan struct{})
- // Create the RPC server to dispense
- server := &RPCServer{
- Plugins: opts.Plugins,
- Stdout: stdout_r,
- Stderr: stderr_r,
- DoneCh: doneCh,
+ // Build the server type
+ var server ServerProtocol
+ switch opts.Protocol() {
+ case ProtocolNetRPC:
+ // If we have a TLS configuration then we wrap the listener
+ // ourselves and do it at that level.
+ if tlsConfig != nil {
+ listener = tls.NewListener(listener, tlsConfig)
+ }
+
+ // Create the RPC server to dispense
+ server = &RPCServer{
+ Plugins: opts.Plugins,
+ Stdout: stdout_r,
+ Stderr: stderr_r,
+ DoneCh: doneCh,
+ }
+
+ case ProtocolGRPC:
+ // Create the gRPC server
+ server = &GRPCServer{
+ Plugins: opts.Plugins,
+ Server: opts.GRPCServer,
+ TLS: tlsConfig,
+ Stdout: stdout_r,
+ Stderr: stderr_r,
+ DoneCh: doneCh,
+ }
+
+ default:
+ panic("unknown server protocol: " + opts.Protocol())
+ }
+
+ // Initialize the servers
+ if err := server.Init(); err != nil {
+ logger.Error("protocol init", "error", err)
+ return
}
+ // Build the extra configuration
+ extra := ""
+ if v := server.Config(); v != "" {
+ extra = base64.StdEncoding.EncodeToString([]byte(v))
+ }
+ if extra != "" {
+ extra = "|" + extra
+ }
+
+ logger.Debug("plugin address", "network", listener.Addr().Network(), "address", listener.Addr().String())
+
// Output the address and service name to stdout so that core can bring it up.
- log.Printf("[DEBUG] plugin: plugin address: %s %s\n",
- listener.Addr().Network(), listener.Addr().String())
- fmt.Printf("%d|%d|%s|%s\n",
+ fmt.Printf("%d|%d|%s|%s|%s%s\n",
CoreProtocolVersion,
opts.ProtocolVersion,
listener.Addr().Network(),
- listener.Addr().String())
+ listener.Addr().String(),
+ opts.Protocol(),
+ extra)
os.Stdout.Sync()
// Eat the interrupts
@@ -127,9 +219,7 @@ func Serve(opts *ServeConfig) {
for {
<-ch
newCount := atomic.AddInt32(&count, 1)
- log.Printf(
- "[DEBUG] plugin: received interrupt signal (count: %d). Ignoring.",
- newCount)
+ logger.Debug("plugin received interrupt signal, ignoring", "count", newCount)
}
}()
@@ -137,10 +227,8 @@ func Serve(opts *ServeConfig) {
os.Stdout = stdout_w
os.Stderr = stderr_w
- // Serve
- go server.Accept(listener)
-
- // Wait for the graceful exit
+ // Accept connections and wait for completion
+ go server.Serve(listener)
<-doneCh
}
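For reference, the handshake line that Serve writes to stdout gains two fields: the protocol name and, when Config() is non-empty, its base64-encoded payload. Illustrative lines (versions, addresses, and the encoded payload are placeholders):

    1|4|tcp|127.0.0.1:10000|netrpc
    1|4|unix|/tmp/plugin123|grpc|<base64-encoded GRPCServerConfig JSON>

A legacy plugin that emits only the first four fields is still accepted; the client side defaults such plugins to net/rpc.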
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/testing.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/testing.go
index 9086a1b4..c6bf7c4e 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/testing.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/go-plugin/testing.go
@@ -4,7 +4,9 @@ import (
"bytes"
"net"
"net/rpc"
- "testing"
+
+ "github.com/mitchellh/go-testing-interface"
+ "google.golang.org/grpc"
)
// The testing file contains test helpers that you can use outside of
@@ -12,7 +14,7 @@ import (
// TestConn is a helper function for returning a client and server
// net.Conn connected to each other.
-func TestConn(t *testing.T) (net.Conn, net.Conn) {
+func TestConn(t testing.T) (net.Conn, net.Conn) {
// Listen to any local port. This listener will be closed
// after a single connection is established.
l, err := net.Listen("tcp", "127.0.0.1:0")
@@ -46,7 +48,7 @@ func TestConn(t *testing.T) (net.Conn, net.Conn) {
}
// TestRPCConn returns a rpc client and server connected to each other.
-func TestRPCConn(t *testing.T) (*rpc.Client, *rpc.Server) {
+func TestRPCConn(t testing.T) (*rpc.Client, *rpc.Server) {
clientConn, serverConn := TestConn(t)
server := rpc.NewServer()
@@ -58,7 +60,7 @@ func TestRPCConn(t *testing.T) (*rpc.Client, *rpc.Server) {
// TestPluginRPCConn returns a plugin RPC client and server that are connected
// together and configured.
-func TestPluginRPCConn(t *testing.T, ps map[string]Plugin) (*RPCClient, *RPCServer) {
+func TestPluginRPCConn(t testing.T, ps map[string]Plugin) (*RPCClient, *RPCServer) {
// Create two net.Conns we can use to shuttle our control connection
clientConn, serverConn := TestConn(t)
@@ -74,3 +76,45 @@ func TestPluginRPCConn(t *testing.T, ps map[string]Plugin) (*RPCClient, *RPCServ
return client, server
}
+
+// TestPluginGRPCConn returns a plugin gRPC client and server that are connected
+// together and configured. This is used to test gRPC connections.
+func TestPluginGRPCConn(t testing.T, ps map[string]Plugin) (*GRPCClient, *GRPCServer) {
+ // Create a listener
+ l, err := net.Listen("tcp", "127.0.0.1:0")
+ if err != nil {
+ t.Fatalf("err: %s", err)
+ }
+
+ // Start up the server
+ server := &GRPCServer{
+ Plugins: ps,
+ Server: DefaultGRPCServer,
+ Stdout: new(bytes.Buffer),
+ Stderr: new(bytes.Buffer),
+ }
+ if err := server.Init(); err != nil {
+ t.Fatalf("err: %s", err)
+ }
+ go server.Serve(l)
+
+ // Connect to the server
+ conn, err := grpc.Dial(
+ l.Addr().String(),
+ grpc.WithBlock(),
+ grpc.WithInsecure())
+ if err != nil {
+ t.Fatalf("err: %s", err)
+ }
+
+ // Connection successful, close the listener
+ l.Close()
+
+ // Create the client
+ client := &GRPCClient{
+ Conn: conn,
+ Plugins: ps,
+ }
+
+ return client, server
+}
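An illustrative test using the new gRPC helper; *testing.T satisfies the go-testing-interface type the helpers now accept, so call sites do not change. KVGRPCPlugin refers to the hypothetical plugin sketched earlier:

    package kvplugin

    import (
        "testing"

        plugin "github.com/hashicorp/go-plugin"
    )

    func TestKVDispense(t *testing.T) {
        client, _ := plugin.TestPluginGRPCConn(t, map[string]plugin.Plugin{
            "kv": &KVGRPCPlugin{},
        })
        defer client.Close()

        if _, err := client.Dispense("kv"); err != nil {
            t.Fatal(err)
        }
    }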
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/2q.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/2q.go
deleted file mode 100644
index 337d9632..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/2q.go
+++ /dev/null
@@ -1,212 +0,0 @@
-package lru
-
-import (
- "fmt"
- "sync"
-
- "github.com/hashicorp/golang-lru/simplelru"
-)
-
-const (
- // Default2QRecentRatio is the ratio of the 2Q cache dedicated
- // to recently added entries that have only been accessed once.
- Default2QRecentRatio = 0.25
-
- // Default2QGhostEntries is the default ratio of ghost
- // entries kept to track entries recently evicted
- Default2QGhostEntries = 0.50
-)
-
-// TwoQueueCache is a thread-safe fixed size 2Q cache.
-// 2Q is an enhancement over the standard LRU cache
-// in that it tracks both frequently and recently used
-// entries separately. This avoids a burst in access to new
-// entries from evicting frequently used entries. It adds some
-// additional tracking overhead to the standard LRU cache, and is
-// computationally about 2x the cost, and adds some metadata over
-// head. The ARCCache is similar, but does not require setting any
-// parameters.
-type TwoQueueCache struct {
- size int
- recentSize int
-
- recent *simplelru.LRU
- frequent *simplelru.LRU
- recentEvict *simplelru.LRU
- lock sync.RWMutex
-}
-
-// New2Q creates a new TwoQueueCache using the default
-// values for the parameters.
-func New2Q(size int) (*TwoQueueCache, error) {
- return New2QParams(size, Default2QRecentRatio, Default2QGhostEntries)
-}
-
-// New2QParams creates a new TwoQueueCache using the provided
-// parameter values.
-func New2QParams(size int, recentRatio float64, ghostRatio float64) (*TwoQueueCache, error) {
- if size <= 0 {
- return nil, fmt.Errorf("invalid size")
- }
- if recentRatio < 0.0 || recentRatio > 1.0 {
- return nil, fmt.Errorf("invalid recent ratio")
- }
- if ghostRatio < 0.0 || ghostRatio > 1.0 {
- return nil, fmt.Errorf("invalid ghost ratio")
- }
-
- // Determine the sub-sizes
- recentSize := int(float64(size) * recentRatio)
- evictSize := int(float64(size) * ghostRatio)
-
- // Allocate the LRUs
- recent, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
- frequent, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
- recentEvict, err := simplelru.NewLRU(evictSize, nil)
- if err != nil {
- return nil, err
- }
-
- // Initialize the cache
- c := &TwoQueueCache{
- size: size,
- recentSize: recentSize,
- recent: recent,
- frequent: frequent,
- recentEvict: recentEvict,
- }
- return c, nil
-}
-
-func (c *TwoQueueCache) Get(key interface{}) (interface{}, bool) {
- c.lock.Lock()
- defer c.lock.Unlock()
-
- // Check if this is a frequent value
- if val, ok := c.frequent.Get(key); ok {
- return val, ok
- }
-
- // If the value is contained in recent, then we
- // promote it to frequent
- if val, ok := c.recent.Peek(key); ok {
- c.recent.Remove(key)
- c.frequent.Add(key, val)
- return val, ok
- }
-
- // No hit
- return nil, false
-}
-
-func (c *TwoQueueCache) Add(key, value interface{}) {
- c.lock.Lock()
- defer c.lock.Unlock()
-
- // Check if the value is frequently used already,
- // and just update the value
- if c.frequent.Contains(key) {
- c.frequent.Add(key, value)
- return
- }
-
- // Check if the value is recently used, and promote
- // the value into the frequent list
- if c.recent.Contains(key) {
- c.recent.Remove(key)
- c.frequent.Add(key, value)
- return
- }
-
- // If the value was recently evicted, add it to the
- // frequently used list
- if c.recentEvict.Contains(key) {
- c.ensureSpace(true)
- c.recentEvict.Remove(key)
- c.frequent.Add(key, value)
- return
- }
-
- // Add to the recently seen list
- c.ensureSpace(false)
- c.recent.Add(key, value)
- return
-}
-
-// ensureSpace is used to ensure we have space in the cache
-func (c *TwoQueueCache) ensureSpace(recentEvict bool) {
- // If we have space, nothing to do
- recentLen := c.recent.Len()
- freqLen := c.frequent.Len()
- if recentLen+freqLen < c.size {
- return
- }
-
- // If the recent buffer is larger than
- // the target, evict from there
- if recentLen > 0 && (recentLen > c.recentSize || (recentLen == c.recentSize && !recentEvict)) {
- k, _, _ := c.recent.RemoveOldest()
- c.recentEvict.Add(k, nil)
- return
- }
-
- // Remove from the frequent list otherwise
- c.frequent.RemoveOldest()
-}
-
-func (c *TwoQueueCache) Len() int {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.recent.Len() + c.frequent.Len()
-}
-
-func (c *TwoQueueCache) Keys() []interface{} {
- c.lock.RLock()
- defer c.lock.RUnlock()
- k1 := c.frequent.Keys()
- k2 := c.recent.Keys()
- return append(k1, k2...)
-}
-
-func (c *TwoQueueCache) Remove(key interface{}) {
- c.lock.Lock()
- defer c.lock.Unlock()
- if c.frequent.Remove(key) {
- return
- }
- if c.recent.Remove(key) {
- return
- }
- if c.recentEvict.Remove(key) {
- return
- }
-}
-
-func (c *TwoQueueCache) Purge() {
- c.lock.Lock()
- defer c.lock.Unlock()
- c.recent.Purge()
- c.frequent.Purge()
- c.recentEvict.Purge()
-}
-
-func (c *TwoQueueCache) Contains(key interface{}) bool {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.frequent.Contains(key) || c.recent.Contains(key)
-}
-
-func (c *TwoQueueCache) Peek(key interface{}) (interface{}, bool) {
- c.lock.RLock()
- defer c.lock.RUnlock()
- if val, ok := c.frequent.Peek(key); ok {
- return val, ok
- }
- return c.recent.Peek(key)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/LICENSE
deleted file mode 100644
index be2cc4df..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/LICENSE
+++ /dev/null
@@ -1,362 +0,0 @@
-Mozilla Public License, version 2.0
-
-1. Definitions
-
-1.1. "Contributor"
-
- means each individual or legal entity that creates, contributes to the
- creation of, or owns Covered Software.
-
-1.2. "Contributor Version"
-
- means the combination of the Contributions of others (if any) used by a
- Contributor and that particular Contributor's Contribution.
-
-1.3. "Contribution"
-
- means Covered Software of a particular Contributor.
-
-1.4. "Covered Software"
-
- means Source Code Form to which the initial Contributor has attached the
- notice in Exhibit A, the Executable Form of such Source Code Form, and
- Modifications of such Source Code Form, in each case including portions
- thereof.
-
-1.5. "Incompatible With Secondary Licenses"
- means
-
- a. that the initial Contributor has attached the notice described in
- Exhibit B to the Covered Software; or
-
- b. that the Covered Software was made available under the terms of
- version 1.1 or earlier of the License, but not also under the terms of
- a Secondary License.
-
-1.6. "Executable Form"
-
- means any form of the work other than Source Code Form.
-
-1.7. "Larger Work"
-
- means a work that combines Covered Software with other material, in a
- separate file or files, that is not Covered Software.
-
-1.8. "License"
-
- means this document.
-
-1.9. "Licensable"
-
- means having the right to grant, to the maximum extent possible, whether
- at the time of the initial grant or subsequently, any and all of the
- rights conveyed by this License.
-
-1.10. "Modifications"
-
- means any of the following:
-
- a. any file in Source Code Form that results from an addition to,
- deletion from, or modification of the contents of Covered Software; or
-
- b. any new file in Source Code Form that contains any Covered Software.
-
-1.11. "Patent Claims" of a Contributor
-
- means any patent claim(s), including without limitation, method,
- process, and apparatus claims, in any patent Licensable by such
- Contributor that would be infringed, but for the grant of the License,
- by the making, using, selling, offering for sale, having made, import,
- or transfer of either its Contributions or its Contributor Version.
-
-1.12. "Secondary License"
-
- means either the GNU General Public License, Version 2.0, the GNU Lesser
- General Public License, Version 2.1, the GNU Affero General Public
- License, Version 3.0, or any later versions of those licenses.
-
-1.13. "Source Code Form"
-
- means the form of the work preferred for making modifications.
-
-1.14. "You" (or "Your")
-
- means an individual or a legal entity exercising rights under this
- License. For legal entities, "You" includes any entity that controls, is
- controlled by, or is under common control with You. For purposes of this
- definition, "control" means (a) the power, direct or indirect, to cause
- the direction or management of such entity, whether by contract or
- otherwise, or (b) ownership of more than fifty percent (50%) of the
- outstanding shares or beneficial ownership of such entity.
-
-
-2. License Grants and Conditions
-
-2.1. Grants
-
- Each Contributor hereby grants You a world-wide, royalty-free,
- non-exclusive license:
-
- a. under intellectual property rights (other than patent or trademark)
- Licensable by such Contributor to use, reproduce, make available,
- modify, display, perform, distribute, and otherwise exploit its
- Contributions, either on an unmodified basis, with Modifications, or
- as part of a Larger Work; and
-
- b. under Patent Claims of such Contributor to make, use, sell, offer for
- sale, have made, import, and otherwise transfer either its
- Contributions or its Contributor Version.
-
-2.2. Effective Date
-
- The licenses granted in Section 2.1 with respect to any Contribution
- become effective for each Contribution on the date the Contributor first
- distributes such Contribution.
-
-2.3. Limitations on Grant Scope
-
- The licenses granted in this Section 2 are the only rights granted under
- this License. No additional rights or licenses will be implied from the
- distribution or licensing of Covered Software under this License.
- Notwithstanding Section 2.1(b) above, no patent license is granted by a
- Contributor:
-
- a. for any code that a Contributor has removed from Covered Software; or
-
- b. for infringements caused by: (i) Your and any other third party's
- modifications of Covered Software, or (ii) the combination of its
- Contributions with other software (except as part of its Contributor
- Version); or
-
- c. under Patent Claims infringed by Covered Software in the absence of
- its Contributions.
-
- This License does not grant any rights in the trademarks, service marks,
- or logos of any Contributor (except as may be necessary to comply with
- the notice requirements in Section 3.4).
-
-2.4. Subsequent Licenses
-
- No Contributor makes additional grants as a result of Your choice to
- distribute the Covered Software under a subsequent version of this
- License (see Section 10.2) or under the terms of a Secondary License (if
- permitted under the terms of Section 3.3).
-
-2.5. Representation
-
- Each Contributor represents that the Contributor believes its
- Contributions are its original creation(s) or it has sufficient rights to
- grant the rights to its Contributions conveyed by this License.
-
-2.6. Fair Use
-
- This License is not intended to limit any rights You have under
- applicable copyright doctrines of fair use, fair dealing, or other
- equivalents.
-
-2.7. Conditions
-
- Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
- Section 2.1.
-
-
-3. Responsibilities
-
-3.1. Distribution of Source Form
-
- All distribution of Covered Software in Source Code Form, including any
- Modifications that You create or to which You contribute, must be under
- the terms of this License. You must inform recipients that the Source
- Code Form of the Covered Software is governed by the terms of this
- License, and how they can obtain a copy of this License. You may not
- attempt to alter or restrict the recipients' rights in the Source Code
- Form.
-
-3.2. Distribution of Executable Form
-
- If You distribute Covered Software in Executable Form then:
-
- a. such Covered Software must also be made available in Source Code Form,
- as described in Section 3.1, and You must inform recipients of the
- Executable Form how they can obtain a copy of such Source Code Form by
- reasonable means in a timely manner, at a charge no more than the cost
- of distribution to the recipient; and
-
- b. You may distribute such Executable Form under the terms of this
- License, or sublicense it under different terms, provided that the
- license for the Executable Form does not attempt to limit or alter the
- recipients' rights in the Source Code Form under this License.
-
-3.3. Distribution of a Larger Work
-
- You may create and distribute a Larger Work under terms of Your choice,
- provided that You also comply with the requirements of this License for
- the Covered Software. If the Larger Work is a combination of Covered
- Software with a work governed by one or more Secondary Licenses, and the
- Covered Software is not Incompatible With Secondary Licenses, this
- License permits You to additionally distribute such Covered Software
- under the terms of such Secondary License(s), so that the recipient of
- the Larger Work may, at their option, further distribute the Covered
- Software under the terms of either this License or such Secondary
- License(s).
-
-3.4. Notices
-
- You may not remove or alter the substance of any license notices
- (including copyright notices, patent notices, disclaimers of warranty, or
- limitations of liability) contained within the Source Code Form of the
- Covered Software, except that You may alter any license notices to the
- extent required to remedy known factual inaccuracies.
-
-3.5. Application of Additional Terms
-
- You may choose to offer, and to charge a fee for, warranty, support,
- indemnity or liability obligations to one or more recipients of Covered
- Software. However, You may do so only on Your own behalf, and not on
- behalf of any Contributor. You must make it absolutely clear that any
- such warranty, support, indemnity, or liability obligation is offered by
- You alone, and You hereby agree to indemnify every Contributor for any
- liability incurred by such Contributor as a result of warranty, support,
- indemnity or liability terms You offer. You may include additional
- disclaimers of warranty and limitations of liability specific to any
- jurisdiction.
-
-4. Inability to Comply Due to Statute or Regulation
-
- If it is impossible for You to comply with any of the terms of this License
- with respect to some or all of the Covered Software due to statute,
- judicial order, or regulation then You must: (a) comply with the terms of
- this License to the maximum extent possible; and (b) describe the
- limitations and the code they affect. Such description must be placed in a
- text file included with all distributions of the Covered Software under
- this License. Except to the extent prohibited by statute or regulation,
- such description must be sufficiently detailed for a recipient of ordinary
- skill to be able to understand it.
-
-5. Termination
-
-5.1. The rights granted under this License will terminate automatically if You
- fail to comply with any of its terms. However, if You become compliant,
- then the rights granted under this License from a particular Contributor
- are reinstated (a) provisionally, unless and until such Contributor
- explicitly and finally terminates Your grants, and (b) on an ongoing
- basis, if such Contributor fails to notify You of the non-compliance by
- some reasonable means prior to 60 days after You have come back into
- compliance. Moreover, Your grants from a particular Contributor are
- reinstated on an ongoing basis if such Contributor notifies You of the
- non-compliance by some reasonable means, this is the first time You have
- received notice of non-compliance with this License from such
- Contributor, and You become compliant prior to 30 days after Your receipt
- of the notice.
-
-5.2. If You initiate litigation against any entity by asserting a patent
- infringement claim (excluding declaratory judgment actions,
- counter-claims, and cross-claims) alleging that a Contributor Version
- directly or indirectly infringes any patent, then the rights granted to
- You by any and all Contributors for the Covered Software under Section
- 2.1 of this License shall terminate.
-
-5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
- license agreements (excluding distributors and resellers) which have been
- validly granted by You or Your distributors under this License prior to
- termination shall survive termination.
-
-6. Disclaimer of Warranty
-
- Covered Software is provided under this License on an "as is" basis,
- without warranty of any kind, either expressed, implied, or statutory,
- including, without limitation, warranties that the Covered Software is free
- of defects, merchantable, fit for a particular purpose or non-infringing.
- The entire risk as to the quality and performance of the Covered Software
- is with You. Should any Covered Software prove defective in any respect,
- You (not any Contributor) assume the cost of any necessary servicing,
- repair, or correction. This disclaimer of warranty constitutes an essential
- part of this License. No use of any Covered Software is authorized under
- this License except under this disclaimer.
-
-7. Limitation of Liability
-
- Under no circumstances and under no legal theory, whether tort (including
- negligence), contract, or otherwise, shall any Contributor, or anyone who
- distributes Covered Software as permitted above, be liable to You for any
- direct, indirect, special, incidental, or consequential damages of any
- character including, without limitation, damages for lost profits, loss of
- goodwill, work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses, even if such party shall have been
- informed of the possibility of such damages. This limitation of liability
- shall not apply to liability for death or personal injury resulting from
- such party's negligence to the extent applicable law prohibits such
- limitation. Some jurisdictions do not allow the exclusion or limitation of
- incidental or consequential damages, so this exclusion and limitation may
- not apply to You.
-
-8. Litigation
-
- Any litigation relating to this License may be brought only in the courts
- of a jurisdiction where the defendant maintains its principal place of
- business and such litigation shall be governed by laws of that
- jurisdiction, without reference to its conflict-of-law provisions. Nothing
- in this Section shall prevent a party's ability to bring cross-claims or
- counter-claims.
-
-9. Miscellaneous
-
- This License represents the complete agreement concerning the subject
- matter hereof. If any provision of this License is held to be
- unenforceable, such provision shall be reformed only to the extent
- necessary to make it enforceable. Any law or regulation which provides that
- the language of a contract shall be construed against the drafter shall not
- be used to construe this License against a Contributor.
-
-
-10. Versions of the License
-
-10.1. New Versions
-
- Mozilla Foundation is the license steward. Except as provided in Section
- 10.3, no one other than the license steward has the right to modify or
- publish new versions of this License. Each version will be given a
- distinguishing version number.
-
-10.2. Effect of New Versions
-
- You may distribute the Covered Software under the terms of the version
- of the License under which You originally received the Covered Software,
- or under the terms of any subsequent version published by the license
- steward.
-
-10.3. Modified Versions
-
- If you create software not governed by this License, and you want to
- create a new license for such software, you may create and use a
- modified version of this License if you rename the license and remove
- any references to the name of the license steward (except to note that
- such modified license differs from this License).
-
-10.4. Distributing Source Code Form that is Incompatible With Secondary
- Licenses If You choose to distribute Source Code Form that is
- Incompatible With Secondary Licenses under the terms of this version of
- the License, the notice described in Exhibit B of this License must be
- attached.
-
-Exhibit A - Source Code Form License Notice
-
- This Source Code Form is subject to the
- terms of the Mozilla Public License, v.
- 2.0. If a copy of the MPL was not
- distributed with this file, You can
- obtain one at
- http://mozilla.org/MPL/2.0/.
-
-If it is not possible or desirable to put the notice in a particular file,
-then You may include the notice in a location (such as a LICENSE file in a
-relevant directory) where a recipient would be likely to look for such a
-notice.
-
-You may add additional accurate notices of copyright ownership.
-
-Exhibit B - "Incompatible With Secondary Licenses" Notice
-
- This Source Code Form is "Incompatible
- With Secondary Licenses", as defined by
- the Mozilla Public License, v. 2.0.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/README.md
deleted file mode 100644
index 33e58cfa..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-golang-lru
-==========
-
-This provides the `lru` package which implements a fixed-size
-thread safe LRU cache. It is based on the cache in Groupcache.
-
-Documentation
-=============
-
-Full docs are available on [Godoc](http://godoc.org/github.com/hashicorp/golang-lru)
-
-Example
-=======
-
-Using the LRU is very simple:
-
-```go
-l, _ := New(128)
-for i := 0; i < 256; i++ {
- l.Add(i, nil)
-}
-if l.Len() != 128 {
- panic(fmt.Sprintf("bad len: %v", l.Len()))
-}
-```
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/arc.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/arc.go
deleted file mode 100644
index a2a25281..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/arc.go
+++ /dev/null
@@ -1,257 +0,0 @@
-package lru
-
-import (
- "sync"
-
- "github.com/hashicorp/golang-lru/simplelru"
-)
-
-// ARCCache is a thread-safe fixed size Adaptive Replacement Cache (ARC).
-// ARC is an enhancement over the standard LRU cache in that tracks both
-// frequency and recency of use. This avoids a burst in access to new
-// entries from evicting the frequently used older entries. It adds some
-// additional tracking overhead to a standard LRU cache, computationally
-// it is roughly 2x the cost, and the extra memory overhead is linear
-// with the size of the cache. ARC has been patented by IBM, but is
-// similar to the TwoQueueCache (2Q) which requires setting parameters.
-type ARCCache struct {
- size int // Size is the total capacity of the cache
- p int // P is the dynamic preference towards T1 or T2
-
- t1 *simplelru.LRU // T1 is the LRU for recently accessed items
- b1 *simplelru.LRU // B1 is the LRU for evictions from t1
-
- t2 *simplelru.LRU // T2 is the LRU for frequently accessed items
- b2 *simplelru.LRU // B2 is the LRU for evictions from t2
-
- lock sync.RWMutex
-}
-
-// NewARC creates an ARC of the given size
-func NewARC(size int) (*ARCCache, error) {
- // Create the sub LRUs
- b1, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
- b2, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
- t1, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
- t2, err := simplelru.NewLRU(size, nil)
- if err != nil {
- return nil, err
- }
-
- // Initialize the ARC
- c := &ARCCache{
- size: size,
- p: 0,
- t1: t1,
- b1: b1,
- t2: t2,
- b2: b2,
- }
- return c, nil
-}
-
-// Get looks up a key's value from the cache.
-func (c *ARCCache) Get(key interface{}) (interface{}, bool) {
- c.lock.Lock()
- defer c.lock.Unlock()
-
- // Ff the value is contained in T1 (recent), then
- // promote it to T2 (frequent)
- if val, ok := c.t1.Peek(key); ok {
- c.t1.Remove(key)
- c.t2.Add(key, val)
- return val, ok
- }
-
- // Check if the value is contained in T2 (frequent)
- if val, ok := c.t2.Get(key); ok {
- return val, ok
- }
-
- // No hit
- return nil, false
-}
-
-// Add adds a value to the cache.
-func (c *ARCCache) Add(key, value interface{}) {
- c.lock.Lock()
- defer c.lock.Unlock()
-
- // Check if the value is contained in T1 (recent), and potentially
- // promote it to frequent T2
- if c.t1.Contains(key) {
- c.t1.Remove(key)
- c.t2.Add(key, value)
- return
- }
-
- // Check if the value is already in T2 (frequent) and update it
- if c.t2.Contains(key) {
- c.t2.Add(key, value)
- return
- }
-
- // Check if this value was recently evicted as part of the
- // recently used list
- if c.b1.Contains(key) {
- // T1 set is too small, increase P appropriately
- delta := 1
- b1Len := c.b1.Len()
- b2Len := c.b2.Len()
- if b2Len > b1Len {
- delta = b2Len / b1Len
- }
- if c.p+delta >= c.size {
- c.p = c.size
- } else {
- c.p += delta
- }
-
- // Potentially need to make room in the cache
- if c.t1.Len()+c.t2.Len() >= c.size {
- c.replace(false)
- }
-
- // Remove from B1
- c.b1.Remove(key)
-
- // Add the key to the frequently used list
- c.t2.Add(key, value)
- return
- }
-
- // Check if this value was recently evicted as part of the
- // frequently used list
- if c.b2.Contains(key) {
- // T2 set is too small, decrease P appropriately
- delta := 1
- b1Len := c.b1.Len()
- b2Len := c.b2.Len()
- if b1Len > b2Len {
- delta = b1Len / b2Len
- }
- if delta >= c.p {
- c.p = 0
- } else {
- c.p -= delta
- }
-
- // Potentially need to make room in the cache
- if c.t1.Len()+c.t2.Len() >= c.size {
- c.replace(true)
- }
-
- // Remove from B2
- c.b2.Remove(key)
-
- // Add the key to the frequently used list
- c.t2.Add(key, value)
- return
- }
-
- // Potentially need to make room in the cache
- if c.t1.Len()+c.t2.Len() >= c.size {
- c.replace(false)
- }
-
- // Keep the size of the ghost buffers trim
- if c.b1.Len() > c.size-c.p {
- c.b1.RemoveOldest()
- }
- if c.b2.Len() > c.p {
- c.b2.RemoveOldest()
- }
-
- // Add to the recently seen list
- c.t1.Add(key, value)
- return
-}
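
To make the adaptive step above concrete: a ghost hit in B1 with b1Len = 4 and b2Len = 12 gives delta = 12/4 = 3, so p grows by 3 (capped at the cache size) and T1, the recency side, may hold more entries before replace() evicts from it; a ghost hit in B2 applies the mirror-image update, shrinking p toward 0 in favor of T2.
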
-
-// replace is used to adaptively evict from either T1 or T2
-// based on the current learned value of P
-func (c *ARCCache) replace(b2ContainsKey bool) {
- t1Len := c.t1.Len()
- if t1Len > 0 && (t1Len > c.p || (t1Len == c.p && b2ContainsKey)) {
- k, _, ok := c.t1.RemoveOldest()
- if ok {
- c.b1.Add(k, nil)
- }
- } else {
- k, _, ok := c.t2.RemoveOldest()
- if ok {
- c.b2.Add(k, nil)
- }
- }
-}
-
-// Len returns the number of cached entries
-func (c *ARCCache) Len() int {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.t1.Len() + c.t2.Len()
-}
-
-// Keys returns all the cached keys
-func (c *ARCCache) Keys() []interface{} {
- c.lock.RLock()
- defer c.lock.RUnlock()
- k1 := c.t1.Keys()
- k2 := c.t2.Keys()
- return append(k1, k2...)
-}
-
-// Remove is used to purge a key from the cache
-func (c *ARCCache) Remove(key interface{}) {
- c.lock.Lock()
- defer c.lock.Unlock()
- if c.t1.Remove(key) {
- return
- }
- if c.t2.Remove(key) {
- return
- }
- if c.b1.Remove(key) {
- return
- }
- if c.b2.Remove(key) {
- return
- }
-}
-
-// Purge is used to clear the cache
-func (c *ARCCache) Purge() {
- c.lock.Lock()
- defer c.lock.Unlock()
- c.t1.Purge()
- c.t2.Purge()
- c.b1.Purge()
- c.b2.Purge()
-}
-
-// Contains is used to check if the cache contains a key
-// without updating recency or frequency.
-func (c *ARCCache) Contains(key interface{}) bool {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.t1.Contains(key) || c.t2.Contains(key)
-}
-
-// Peek is used to inspect the cache value of a key
-// without updating recency or frequency.
-func (c *ARCCache) Peek(key interface{}) (interface{}, bool) {
- c.lock.RLock()
- defer c.lock.RUnlock()
- if val, ok := c.t1.Peek(key); ok {
- return val, ok
- }
- return c.t2.Peek(key)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/lru.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/lru.go
deleted file mode 100644
index a6285f98..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/lru.go
+++ /dev/null
@@ -1,114 +0,0 @@
-// Package lru provides a simple LRU cache. It is based on the
-// LRU implementation in groupcache:
-// https://github.com/golang/groupcache/tree/master/lru
-package lru
-
-import (
- "sync"
-
- "github.com/hashicorp/golang-lru/simplelru"
-)
-
-// Cache is a thread-safe fixed size LRU cache.
-type Cache struct {
- lru *simplelru.LRU
- lock sync.RWMutex
-}
-
-// New creates an LRU of the given size
-func New(size int) (*Cache, error) {
- return NewWithEvict(size, nil)
-}
-
-// NewWithEvict constructs a fixed size cache with the given eviction
-// callback.
-func NewWithEvict(size int, onEvicted func(key interface{}, value interface{})) (*Cache, error) {
- lru, err := simplelru.NewLRU(size, simplelru.EvictCallback(onEvicted))
- if err != nil {
- return nil, err
- }
- c := &Cache{
- lru: lru,
- }
- return c, nil
-}
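
A sketch of the eviction callback in use (illustrative, not part of the vendored file). Note from the code above that the callback runs synchronously while the cache's lock is held, so it must not call back into the cache:

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	evictions := 0
	c, err := lru.NewWithEvict(2, func(key, value interface{}) {
		evictions++
		fmt.Printf("evicted %v=%v\n", key, value)
	})
	if err != nil {
		panic(err)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // capacity is 2, so "a" (the oldest) is evicted here
	fmt.Println("evictions:", evictions) // prints: evictions: 1
}
```
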
-
-// Purge is used to completely clear the cache
-func (c *Cache) Purge() {
- c.lock.Lock()
- c.lru.Purge()
- c.lock.Unlock()
-}
-
-// Add adds a value to the cache. Returns true if an eviction occurred.
-func (c *Cache) Add(key, value interface{}) bool {
- c.lock.Lock()
- defer c.lock.Unlock()
- return c.lru.Add(key, value)
-}
-
-// Get looks up a key's value from the cache.
-func (c *Cache) Get(key interface{}) (interface{}, bool) {
- c.lock.Lock()
- defer c.lock.Unlock()
- return c.lru.Get(key)
-}
-
-// Contains checks if a key is in the cache, without updating the
-// recent-ness or deleting it for being stale.
-func (c *Cache) Contains(key interface{}) bool {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.lru.Contains(key)
-}
-
-// Peek returns the key value (or nil if not found) without updating
-// the "recently used"-ness of the key.
-func (c *Cache) Peek(key interface{}) (interface{}, bool) {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.lru.Peek(key)
-}
-
-// ContainsOrAdd checks if a key is in the cache without updating the
-// recent-ness or deleting it for being stale, and if not, adds the value.
-// Returns whether found and whether an eviction occurred.
-func (c *Cache) ContainsOrAdd(key, value interface{}) (ok, evict bool) {
- c.lock.Lock()
- defer c.lock.Unlock()
-
- if c.lru.Contains(key) {
- return true, false
- } else {
- evict := c.lru.Add(key, value)
- return false, evict
- }
-}
-
-// Remove removes the provided key from the cache.
-func (c *Cache) Remove(key interface{}) {
- c.lock.Lock()
- c.lru.Remove(key)
- c.lock.Unlock()
-}
-
-// RemoveOldest removes the oldest item from the cache.
-func (c *Cache) RemoveOldest() {
- c.lock.Lock()
- c.lru.RemoveOldest()
- c.lock.Unlock()
-}
-
-// Keys returns a slice of the keys in the cache, from oldest to newest.
-func (c *Cache) Keys() []interface{} {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.lru.Keys()
-}
-
-// Len returns the number of items in the cache.
-func (c *Cache) Len() int {
- c.lock.RLock()
- defer c.lock.RUnlock()
- return c.lru.Len()
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go
deleted file mode 100644
index cb416b39..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/golang-lru/simplelru/lru.go
+++ /dev/null
@@ -1,160 +0,0 @@
-package simplelru
-
-import (
- "container/list"
- "errors"
-)
-
-// EvictCallback is used to get a callback when a cache entry is evicted
-type EvictCallback func(key interface{}, value interface{})
-
-// LRU implements a non-thread safe fixed size LRU cache
-type LRU struct {
- size int
- evictList *list.List
- items map[interface{}]*list.Element
- onEvict EvictCallback
-}
-
-// entry is used to hold a value in the evictList
-type entry struct {
- key interface{}
- value interface{}
-}
-
-// NewLRU constructs an LRU of the given size
-func NewLRU(size int, onEvict EvictCallback) (*LRU, error) {
- if size <= 0 {
- return nil, errors.New("Must provide a positive size")
- }
- c := &LRU{
- size: size,
- evictList: list.New(),
- items: make(map[interface{}]*list.Element),
- onEvict: onEvict,
- }
- return c, nil
-}
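
Unlike the thread-safe `Cache` wrapper shown earlier, this type does no locking; a direct-use sketch (illustrative only) is safe only from a single goroutine or behind an external mutex:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/golang-lru/simplelru"
)

func main() {
	// Safe only for single-goroutine use; add a mutex for anything else.
	l, err := simplelru.NewLRU(2, func(key, value interface{}) {
		fmt.Printf("evict %v\n", key)
	})
	if err != nil {
		panic(err)
	}
	l.Add("a", 1)
	l.Add("b", 2)
	l.Get("a")    // refreshes "a", leaving "b" as the oldest entry
	l.Add("c", 3) // evicts "b" via the callback above
	fmt.Println(l.Keys()) // prints: [a c], ordered oldest to newest
}
```
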
-
-// Purge is used to completely clear the cache
-func (c *LRU) Purge() {
- for k, v := range c.items {
- if c.onEvict != nil {
- c.onEvict(k, v.Value.(*entry).value)
- }
- delete(c.items, k)
- }
- c.evictList.Init()
-}
-
-// Add adds a value to the cache. Returns true if an eviction occurred.
-func (c *LRU) Add(key, value interface{}) bool {
- // Check for existing item
- if ent, ok := c.items[key]; ok {
- c.evictList.MoveToFront(ent)
- ent.Value.(*entry).value = value
- return false
- }
-
- // Add new item
- ent := &entry{key, value}
- entry := c.evictList.PushFront(ent)
- c.items[key] = entry
-
- evict := c.evictList.Len() > c.size
- // Verify size not exceeded
- if evict {
- c.removeOldest()
- }
- return evict
-}
-
-// Get looks up a key's value from the cache.
-func (c *LRU) Get(key interface{}) (value interface{}, ok bool) {
- if ent, ok := c.items[key]; ok {
- c.evictList.MoveToFront(ent)
- return ent.Value.(*entry).value, true
- }
- return
-}
-
-// Contains checks if a key is in the cache, without updating the
-// recent-ness or deleting it for being stale.
-func (c *LRU) Contains(key interface{}) (ok bool) {
- _, ok = c.items[key]
- return ok
-}
-
-// Peek returns the key value (or nil if not found) without updating
-// the "recently used"-ness of the key.
-func (c *LRU) Peek(key interface{}) (value interface{}, ok bool) {
- if ent, ok := c.items[key]; ok {
- return ent.Value.(*entry).value, true
- }
- return nil, ok
-}
-
-// Remove removes the provided key from the cache, returning if the
-// key was contained.
-func (c *LRU) Remove(key interface{}) bool {
- if ent, ok := c.items[key]; ok {
- c.removeElement(ent)
- return true
- }
- return false
-}
-
-// RemoveOldest removes the oldest item from the cache.
-func (c *LRU) RemoveOldest() (interface{}, interface{}, bool) {
- ent := c.evictList.Back()
- if ent != nil {
- c.removeElement(ent)
- kv := ent.Value.(*entry)
- return kv.key, kv.value, true
- }
- return nil, nil, false
-}
-
-// GetOldest returns the oldest entry
-func (c *LRU) GetOldest() (interface{}, interface{}, bool) {
- ent := c.evictList.Back()
- if ent != nil {
- kv := ent.Value.(*entry)
- return kv.key, kv.value, true
- }
- return nil, nil, false
-}
-
-// Keys returns a slice of the keys in the cache, from oldest to newest.
-func (c *LRU) Keys() []interface{} {
- keys := make([]interface{}, len(c.items))
- i := 0
- for ent := c.evictList.Back(); ent != nil; ent = ent.Prev() {
- keys[i] = ent.Value.(*entry).key
- i++
- }
- return keys
-}
-
-// Len returns the number of items in the cache.
-func (c *LRU) Len() int {
- return c.evictList.Len()
-}
-
-// removeOldest removes the oldest item from the cache.
-func (c *LRU) removeOldest() {
- ent := c.evictList.Back()
- if ent != nil {
- c.removeElement(ent)
- }
-}
-
-// removeElement is used to remove a given list element from the cache
-func (c *LRU) removeElement(e *list.Element) {
- c.evictList.Remove(e)
- kv := e.Value.(*entry)
- delete(c.items, kv.key)
- if c.onEvict != nil {
- c.onEvict(kv.key, kv.value)
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/LICENSE
deleted file mode 100644
index c33dcc7c..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/LICENSE
+++ /dev/null
@@ -1,354 +0,0 @@
-Mozilla Public License, version 2.0
-
-1. Definitions
-
-1.1. “Contributor”
-
- means each individual or legal entity that creates, contributes to the
- creation of, or owns Covered Software.
-
-1.2. “Contributor Version”
-
- means the combination of the Contributions of others (if any) used by a
- Contributor and that particular Contributor’s Contribution.
-
-1.3. “Contribution”
-
- means Covered Software of a particular Contributor.
-
-1.4. “Covered Software”
-
- means Source Code Form to which the initial Contributor has attached the
- notice in Exhibit A, the Executable Form of such Source Code Form, and
- Modifications of such Source Code Form, in each case including portions
- thereof.
-
-1.5. “Incompatible With Secondary Licenses”
- means
-
- a. that the initial Contributor has attached the notice described in
- Exhibit B to the Covered Software; or
-
- b. that the Covered Software was made available under the terms of version
- 1.1 or earlier of the License, but not also under the terms of a
- Secondary License.
-
-1.6. “Executable Form”
-
- means any form of the work other than Source Code Form.
-
-1.7. “Larger Work”
-
- means a work that combines Covered Software with other material, in a separate
- file or files, that is not Covered Software.
-
-1.8. “License”
-
- means this document.
-
-1.9. “Licensable”
-
- means having the right to grant, to the maximum extent possible, whether at the
- time of the initial grant or subsequently, any and all of the rights conveyed by
- this License.
-
-1.10. “Modifications”
-
- means any of the following:
-
- a. any file in Source Code Form that results from an addition to, deletion
- from, or modification of the contents of Covered Software; or
-
- b. any new file in Source Code Form that contains any Covered Software.
-
-1.11. “Patent Claims” of a Contributor
-
- means any patent claim(s), including without limitation, method, process,
- and apparatus claims, in any patent Licensable by such Contributor that
- would be infringed, but for the grant of the License, by the making,
- using, selling, offering for sale, having made, import, or transfer of
- either its Contributions or its Contributor Version.
-
-1.12. “Secondary License”
-
- means either the GNU General Public License, Version 2.0, the GNU Lesser
- General Public License, Version 2.1, the GNU Affero General Public
- License, Version 3.0, or any later versions of those licenses.
-
-1.13. “Source Code Form”
-
- means the form of the work preferred for making modifications.
-
-1.14. “You” (or “Your”)
-
- means an individual or a legal entity exercising rights under this
- License. For legal entities, “You” includes any entity that controls, is
- controlled by, or is under common control with You. For purposes of this
- definition, “control” means (a) the power, direct or indirect, to cause
- the direction or management of such entity, whether by contract or
- otherwise, or (b) ownership of more than fifty percent (50%) of the
- outstanding shares or beneficial ownership of such entity.
-
-
-2. License Grants and Conditions
-
-2.1. Grants
-
- Each Contributor hereby grants You a world-wide, royalty-free,
- non-exclusive license:
-
- a. under intellectual property rights (other than patent or trademark)
- Licensable by such Contributor to use, reproduce, make available,
- modify, display, perform, distribute, and otherwise exploit its
- Contributions, either on an unmodified basis, with Modifications, or as
- part of a Larger Work; and
-
- b. under Patent Claims of such Contributor to make, use, sell, offer for
- sale, have made, import, and otherwise transfer either its Contributions
- or its Contributor Version.
-
-2.2. Effective Date
-
- The licenses granted in Section 2.1 with respect to any Contribution become
- effective for each Contribution on the date the Contributor first distributes
- such Contribution.
-
-2.3. Limitations on Grant Scope
-
- The licenses granted in this Section 2 are the only rights granted under this
- License. No additional rights or licenses will be implied from the distribution
- or licensing of Covered Software under this License. Notwithstanding Section
- 2.1(b) above, no patent license is granted by a Contributor:
-
- a. for any code that a Contributor has removed from Covered Software; or
-
- b. for infringements caused by: (i) Your and any other third party’s
- modifications of Covered Software, or (ii) the combination of its
- Contributions with other software (except as part of its Contributor
- Version); or
-
- c. under Patent Claims infringed by Covered Software in the absence of its
- Contributions.
-
- This License does not grant any rights in the trademarks, service marks, or
- logos of any Contributor (except as may be necessary to comply with the
- notice requirements in Section 3.4).
-
-2.4. Subsequent Licenses
-
- No Contributor makes additional grants as a result of Your choice to
- distribute the Covered Software under a subsequent version of this License
- (see Section 10.2) or under the terms of a Secondary License (if permitted
- under the terms of Section 3.3).
-
-2.5. Representation
-
- Each Contributor represents that the Contributor believes its Contributions
- are its original creation(s) or it has sufficient rights to grant the
- rights to its Contributions conveyed by this License.
-
-2.6. Fair Use
-
- This License is not intended to limit any rights You have under applicable
- copyright doctrines of fair use, fair dealing, or other equivalents.
-
-2.7. Conditions
-
- Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
- Section 2.1.
-
-
-3. Responsibilities
-
-3.1. Distribution of Source Form
-
- All distribution of Covered Software in Source Code Form, including any
- Modifications that You create or to which You contribute, must be under the
- terms of this License. You must inform recipients that the Source Code Form
- of the Covered Software is governed by the terms of this License, and how
- they can obtain a copy of this License. You may not attempt to alter or
- restrict the recipients’ rights in the Source Code Form.
-
-3.2. Distribution of Executable Form
-
- If You distribute Covered Software in Executable Form then:
-
- a. such Covered Software must also be made available in Source Code Form,
- as described in Section 3.1, and You must inform recipients of the
- Executable Form how they can obtain a copy of such Source Code Form by
- reasonable means in a timely manner, at a charge no more than the cost
- of distribution to the recipient; and
-
- b. You may distribute such Executable Form under the terms of this License,
- or sublicense it under different terms, provided that the license for
- the Executable Form does not attempt to limit or alter the recipients’
- rights in the Source Code Form under this License.
-
-3.3. Distribution of a Larger Work
-
- You may create and distribute a Larger Work under terms of Your choice,
- provided that You also comply with the requirements of this License for the
- Covered Software. If the Larger Work is a combination of Covered Software
- with a work governed by one or more Secondary Licenses, and the Covered
- Software is not Incompatible With Secondary Licenses, this License permits
- You to additionally distribute such Covered Software under the terms of
- such Secondary License(s), so that the recipient of the Larger Work may, at
- their option, further distribute the Covered Software under the terms of
- either this License or such Secondary License(s).
-
-3.4. Notices
-
- You may not remove or alter the substance of any license notices (including
- copyright notices, patent notices, disclaimers of warranty, or limitations
- of liability) contained within the Source Code Form of the Covered
- Software, except that You may alter any license notices to the extent
- required to remedy known factual inaccuracies.
-
-3.5. Application of Additional Terms
-
- You may choose to offer, and to charge a fee for, warranty, support,
- indemnity or liability obligations to one or more recipients of Covered
- Software. However, You may do so only on Your own behalf, and not on behalf
- of any Contributor. You must make it absolutely clear that any such
- warranty, support, indemnity, or liability obligation is offered by You
- alone, and You hereby agree to indemnify every Contributor for any
- liability incurred by such Contributor as a result of warranty, support,
- indemnity or liability terms You offer. You may include additional
- disclaimers of warranty and limitations of liability specific to any
- jurisdiction.
-
-4. Inability to Comply Due to Statute or Regulation
-
- If it is impossible for You to comply with any of the terms of this License
- with respect to some or all of the Covered Software due to statute, judicial
- order, or regulation then You must: (a) comply with the terms of this License
- to the maximum extent possible; and (b) describe the limitations and the code
- they affect. Such description must be placed in a text file included with all
- distributions of the Covered Software under this License. Except to the
- extent prohibited by statute or regulation, such description must be
- sufficiently detailed for a recipient of ordinary skill to be able to
- understand it.
-
-5. Termination
-
-5.1. The rights granted under this License will terminate automatically if You
- fail to comply with any of its terms. However, if You become compliant,
- then the rights granted under this License from a particular Contributor
- are reinstated (a) provisionally, unless and until such Contributor
- explicitly and finally terminates Your grants, and (b) on an ongoing basis,
- if such Contributor fails to notify You of the non-compliance by some
- reasonable means prior to 60 days after You have come back into compliance.
- Moreover, Your grants from a particular Contributor are reinstated on an
- ongoing basis if such Contributor notifies You of the non-compliance by
- some reasonable means, this is the first time You have received notice of
- non-compliance with this License from such Contributor, and You become
- compliant prior to 30 days after Your receipt of the notice.
-
-5.2. If You initiate litigation against any entity by asserting a patent
- infringement claim (excluding declaratory judgment actions, counter-claims,
- and cross-claims) alleging that a Contributor Version directly or
- indirectly infringes any patent, then the rights granted to You by any and
- all Contributors for the Covered Software under Section 2.1 of this License
- shall terminate.
-
-5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
- license agreements (excluding distributors and resellers) which have been
- validly granted by You or Your distributors under this License prior to
- termination shall survive termination.
-
-6. Disclaimer of Warranty
-
- Covered Software is provided under this License on an “as is” basis, without
- warranty of any kind, either expressed, implied, or statutory, including,
- without limitation, warranties that the Covered Software is free of defects,
- merchantable, fit for a particular purpose or non-infringing. The entire
- risk as to the quality and performance of the Covered Software is with You.
- Should any Covered Software prove defective in any respect, You (not any
- Contributor) assume the cost of any necessary servicing, repair, or
- correction. This disclaimer of warranty constitutes an essential part of this
- License. No use of any Covered Software is authorized under this License
- except under this disclaimer.
-
-7. Limitation of Liability
-
- Under no circumstances and under no legal theory, whether tort (including
- negligence), contract, or otherwise, shall any Contributor, or anyone who
- distributes Covered Software as permitted above, be liable to You for any
- direct, indirect, special, incidental, or consequential damages of any
- character including, without limitation, damages for lost profits, loss of
- goodwill, work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses, even if such party shall have been
- informed of the possibility of such damages. This limitation of liability
- shall not apply to liability for death or personal injury resulting from such
- party’s negligence to the extent applicable law prohibits such limitation.
- Some jurisdictions do not allow the exclusion or limitation of incidental or
- consequential damages, so this exclusion and limitation may not apply to You.
-
-8. Litigation
-
- Any litigation relating to this License may be brought only in the courts of
- a jurisdiction where the defendant maintains its principal place of business
- and such litigation shall be governed by laws of that jurisdiction, without
- reference to its conflict-of-law provisions. Nothing in this Section shall
- prevent a party’s ability to bring cross-claims or counter-claims.
-
-9. Miscellaneous
-
- This License represents the complete agreement concerning the subject matter
- hereof. If any provision of this License is held to be unenforceable, such
- provision shall be reformed only to the extent necessary to make it
- enforceable. Any law or regulation which provides that the language of a
- contract shall be construed against the drafter shall not be used to construe
- this License against a Contributor.
-
-
-10. Versions of the License
-
-10.1. New Versions
-
- Mozilla Foundation is the license steward. Except as provided in Section
- 10.3, no one other than the license steward has the right to modify or
- publish new versions of this License. Each version will be given a
- distinguishing version number.
-
-10.2. Effect of New Versions
-
- You may distribute the Covered Software under the terms of the version of
- the License under which You originally received the Covered Software, or
- under the terms of any subsequent version published by the license
- steward.
-
-10.3. Modified Versions
-
- If you create software not governed by this License, and you want to
- create a new license for such software, you may create and use a modified
- version of this License if you rename the license and remove any
- references to the name of the license steward (except to note that such
- modified license differs from this License).
-
-10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
- If You choose to distribute Source Code Form that is Incompatible With
- Secondary Licenses under the terms of this version of the License, the
- notice described in Exhibit B of this License must be attached.
-
-Exhibit A - Source Code Form License Notice
-
- This Source Code Form is subject to the
- terms of the Mozilla Public License, v.
- 2.0. If a copy of the MPL was not
- distributed with this file, You can
- obtain one at
- http://mozilla.org/MPL/2.0/.
-
-If it is not possible or desirable to put the notice in a particular file, then
-You may include the notice in a location (such as a LICENSE file in a relevant
-directory) where a recipient would be likely to look for such a notice.
-
-You may add additional accurate notices of copyright ownership.
-
-Exhibit B - “Incompatible With Secondary Licenses” Notice
-
- This Source Code Form is “Incompatible
- With Secondary Licenses”, as defined by
- the Mozilla Public License, v. 2.0.
-
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/Makefile b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/Makefile
deleted file mode 100644
index 49f82992..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/Makefile
+++ /dev/null
@@ -1,17 +0,0 @@
-DEPS = $(shell go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
-
-test:
- go test -timeout=60s ./...
-
-integ: test
- INTEG_TESTS=yes go test -timeout=5s -run=Integ ./...
-
-deps:
- go get -d -v ./...
- echo $(DEPS) | xargs -n1 go get -d
-
-cov:
- INTEG_TESTS=yes gocov test github.com/hashicorp/raft | gocov-html > /tmp/coverage.html
- open /tmp/coverage.html
-
-.PHONY: test cov integ deps
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/README.md
deleted file mode 100644
index 8778b13d..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-raft [![Build Status](https://travis-ci.org/hashicorp/raft.png)](https://travis-ci.org/hashicorp/raft)
-====
-
-raft is a [Go](http://www.golang.org) library that manages a replicated
-log and can be used with an FSM to manage replicated state machines. It
-is a library for providing [consensus](http://en.wikipedia.org/wiki/Consensus_(computer_science)).
-
-The use cases for such a library are far-reaching, as replicated state
-machines are a key component of many distributed systems. They enable
-building Consistent, Partition Tolerant (CP) systems, with limited
-fault tolerance as well.
-
-## Building
-
-If you wish to build raft you'll need Go version 1.2+ installed.
-
-Please check your installation with:
-
-```
-go version
-```
-
-## Documentation
-
-For complete documentation, see the associated [Godoc](http://godoc.org/github.com/hashicorp/raft).
-
-To prevent complications with cgo, the primary backend `MDBStore` is in a separate repository,
-called [raft-mdb](http://github.com/hashicorp/raft-mdb). That is the recommended implementation
-for the `LogStore` and `StableStore`.
-
-A pure Go backend using [BoltDB](https://github.com/boltdb/bolt) is also available called
-[raft-boltdb](https://github.com/hashicorp/raft-boltdb). It can also be used as a `LogStore`
-and `StableStore`.
-
-## Protocol
-
-raft is based on ["Raft: In Search of an Understandable Consensus Algorithm"](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf)
-
-A high level overview of the Raft protocol is described below, but for details please read the full
-[Raft paper](https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf)
-followed by the raft source. Any questions about the raft protocol should be sent to the
-[raft-dev mailing list](https://groups.google.com/forum/#!forum/raft-dev).
-
-### Protocol Description
-
-Raft nodes are always in one of three states: follower, candidate or leader. All
-nodes initially start out as a follower. In this state, nodes can accept log entries
-from a leader and cast votes. If no entries are received for some time, nodes
-self-promote to the candidate state. In the candidate state nodes request votes from
-their peers. If a candidate receives a quorum of votes, then it is promoted to a leader.
-The leader must accept new log entries and replicate them to all the other followers.
-In addition, if stale reads are not acceptable, all queries must also be performed on
-the leader.
-
-Once a cluster has a leader, it is able to accept new log entries. A client can
-request that a leader append a new log entry, which is an opaque binary blob to
-Raft. The leader then writes the entry to durable storage and attempts to replicate
-to a quorum of followers. Once the log entry is considered *committed*, it can be
-*applied* to a finite state machine. The finite state machine is application specific,
-and is implemented using an interface.
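
As a concrete illustration of that interface: a minimal sketch of an application FSM, here a replicated key/value map. The `kvFSM` and `kvSnapshot` names and the JSON command format are inventions of this sketch, not part of the library:

```go
package kvexample

import (
	"encoding/json"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// kvFSM is a toy replicated map.
type kvFSM struct {
	mu sync.Mutex
	kv map[string]string
}

// Compile-time check that kvFSM satisfies raft.FSM.
var _ raft.FSM = (*kvFSM)(nil)

// Apply is invoked once a log entry is committed; on the leader its
// return value is surfaced through the future returned by Raft.Apply.
func (f *kvFSM) Apply(l *raft.Log) interface{} {
	var cmd [2]string // ["key", "value"]; the format is this sketch's assumption
	if err := json.Unmarshal(l.Data, &cmd); err != nil {
		return err
	}
	f.mu.Lock()
	defer f.mu.Unlock()
	f.kv[cmd[0]] = cmd[1]
	return nil
}

// Snapshot captures the state at a point in time so the log can be
// compacted, as described above.
func (f *kvFSM) Snapshot() (raft.FSMSnapshot, error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	clone := make(map[string]string, len(f.kv))
	for k, v := range f.kv {
		clone[k] = v
	}
	return &kvSnapshot{kv: clone}, nil
}

// Restore replaces the FSM state from a snapshot stream; it must leave
// the FSM in the same state that replaying the old logs would have.
func (f *kvFSM) Restore(rc io.ReadCloser) error {
	m := make(map[string]string)
	if err := json.NewDecoder(rc).Decode(&m); err != nil {
		return err
	}
	f.mu.Lock()
	f.kv = m
	f.mu.Unlock()
	return nil
}

type kvSnapshot struct{ kv map[string]string }

// Persist streams the captured state into the sink provided by Raft.
func (s *kvSnapshot) Persist(sink raft.SnapshotSink) error {
	if err := json.NewEncoder(sink).Encode(s.kv); err != nil {
		sink.Cancel()
		return err
	}
	return sink.Close()
}

// Release is called when Raft is finished with the snapshot.
func (s *kvSnapshot) Release() {}
```
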
-
-An obvious question relates to the unbounded nature of a replicated log. Raft provides
-a mechanism by which the current state is snapshotted, and the log is compacted. Because
-of the FSM abstraction, restoring the state of the FSM must result in the same state
-as a replay of old logs. This allows Raft to capture the FSM state at a point in time,
-and then remove all the logs that were used to reach that state. This is performed automatically
-without user intervention, and prevents unbounded disk usage while minimizing
-time spent replaying logs.
-
-Lastly, there is the issue of updating the peer set when new servers are joining
-or existing servers are leaving. As long as a quorum of nodes is available, this
-is not an issue as Raft provides mechanisms to dynamically update the peer set.
-If a quorum of nodes is unavailable, then this becomes a very challenging issue.
-For example, suppose there are only 2 peers, A and B. The quorum size is also
-2, meaning both nodes must agree to commit a log entry. If either A or B fails,
-it is now impossible to reach quorum. This means the cluster is unable to add
-or remove a node, or commit any additional log entries. This results in *unavailability*.
-At this point, manual intervention would be required to remove either A or B,
-and to restart the remaining node in bootstrap mode.
-
-A Raft cluster of 3 nodes can tolerate a single node failure, while a cluster
-of 5 can tolerate 2 node failures. The recommended configuration is to either
-run 3 or 5 raft servers. This maximizes availability without
-greatly sacrificing performance.
-
-In terms of performance, Raft is comparable to Paxos. Assuming stable leadership,
-committing a log entry requires a single round trip to half of the cluster.
-Thus performance is bound by disk I/O and network latency.
-
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/api.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/api.go
deleted file mode 100644
index 2fd78e78..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/api.go
+++ /dev/null
@@ -1,1007 +0,0 @@
-package raft
-
-import (
- "errors"
- "fmt"
- "io"
- "log"
- "os"
- "strconv"
- "sync"
- "time"
-
- "github.com/armon/go-metrics"
-)
-
-var (
- // ErrLeader is returned when an operation can't be completed on a
- // leader node.
- ErrLeader = errors.New("node is the leader")
-
- // ErrNotLeader is returned when an operation can't be completed on a
- // follower or candidate node.
- ErrNotLeader = errors.New("node is not the leader")
-
- // ErrLeadershipLost is returned when a leader fails to commit a log entry
- // because it's been deposed in the process.
- ErrLeadershipLost = errors.New("leadership lost while committing log")
-
- // ErrAbortedByRestore is returned when a leader fails to commit a log
- // entry because it's been superseded by a user snapshot restore.
- ErrAbortedByRestore = errors.New("snapshot restored while committing log")
-
- // ErrRaftShutdown is returned when operations are requested against an
- // inactive Raft.
- ErrRaftShutdown = errors.New("raft is already shutdown")
-
- // ErrEnqueueTimeout is returned when a command fails due to a timeout.
- ErrEnqueueTimeout = errors.New("timed out enqueuing operation")
-
- // ErrNothingNewToSnapshot is returned when trying to create a snapshot
- // but there's nothing new committed to the FSM since we started.
- ErrNothingNewToSnapshot = errors.New("nothing new to snapshot")
-
- // ErrUnsupportedProtocol is returned when an operation is attempted
- // that's not supported by the current protocol version.
- ErrUnsupportedProtocol = errors.New("operation not supported with current protocol version")
-
- // ErrCantBootstrap is returned when attempt is made to bootstrap a
- // cluster that already has state present.
- ErrCantBootstrap = errors.New("bootstrap only works on new clusters")
-)
-
-// Raft implements a Raft node.
-type Raft struct {
- raftState
-
- // protocolVersion is used to inter-operate with Raft servers running
- // different versions of the library. See comments in config.go for more
- // details.
- protocolVersion ProtocolVersion
-
- // applyCh is used to async send logs to the main thread to
- // be committed and applied to the FSM.
- applyCh chan *logFuture
-
- // Configuration provided at Raft initialization
- conf Config
-
- // FSM is the client state machine to apply commands to
- fsm FSM
-
- // fsmMutateCh is used to send state-changing updates to the FSM. This
- // receives pointers to commitTuple structures when applying logs or
- // pointers to restoreFuture structures when restoring a snapshot. We
- // need control over the order of these operations when doing user
- // restores so that we finish applying any old log applies before we
- // take a user snapshot on the leader, otherwise we might restore the
- // snapshot and apply old logs to it that were in the pipe.
- fsmMutateCh chan interface{}
-
- // fsmSnapshotCh is used to trigger a new snapshot being taken
- fsmSnapshotCh chan *reqSnapshotFuture
-
- // lastContact is the last time we had contact from the
- // leader node. This can be used to gauge staleness.
- lastContact time.Time
- lastContactLock sync.RWMutex
-
- // Leader is the current cluster leader
- leader ServerAddress
- leaderLock sync.RWMutex
-
- // leaderCh is used to notify of leadership changes
- leaderCh chan bool
-
- // leaderState used only while state is leader
- leaderState leaderState
-
- // Stores our local server ID, used to avoid sending RPCs to ourself
- localID ServerID
-
- // Stores our local addr
- localAddr ServerAddress
-
- // Used for our logging
- logger *log.Logger
-
- // LogStore provides durable storage for logs
- logs LogStore
-
- // Used to request the leader to make configuration changes.
- configurationChangeCh chan *configurationChangeFuture
-
- // Tracks the latest configuration and latest committed configuration from
- // the log/snapshot.
- configurations configurations
-
- // RPC chan comes from the transport layer
- rpcCh <-chan RPC
-
- // Shutdown channel to exit, protected to prevent concurrent exits
- shutdown bool
- shutdownCh chan struct{}
- shutdownLock sync.Mutex
-
- // snapshots is used to store and retrieve snapshots
- snapshots SnapshotStore
-
- // userSnapshotCh is used for user-triggered snapshots
- userSnapshotCh chan *userSnapshotFuture
-
- // userRestoreCh is used for user-triggered restores of external
- // snapshots
- userRestoreCh chan *userRestoreFuture
-
- // stable is a StableStore implementation for durable state
- // It provides stable storage for many fields in raftState
- stable StableStore
-
- // The transport layer we use
- trans Transport
-
- // verifyCh is used to async send verify futures to the main thread
- // to verify we are still the leader
- verifyCh chan *verifyFuture
-
- // configurationsCh is used to get the configuration data safely from
- // outside of the main thread.
- configurationsCh chan *configurationsFuture
-
- // bootstrapCh is used to attempt an initial bootstrap from outside of
- // the main thread.
- bootstrapCh chan *bootstrapFuture
-
- // List of observers and the mutex that protects them. The observers list
- // is indexed by an artificial ID which is used for deregistration.
- observersLock sync.RWMutex
- observers map[uint64]*Observer
-}
-
-// BootstrapCluster initializes a server's storage with the given cluster
-// configuration. This should only be called at the beginning of time for the
-// cluster, and you absolutely must make sure that you call it with the same
-// configuration on all the Voter servers. There is no need to bootstrap
-// Nonvoter and Staging servers.
-//
-// One sane approach is to bootstrap a single server with a configuration
-// listing just itself as a Voter, then invoke AddVoter() on it to add other
-// servers to the cluster.
-func BootstrapCluster(conf *Config, logs LogStore, stable StableStore,
- snaps SnapshotStore, trans Transport, configuration Configuration) error {
- // Validate the Raft server config.
- if err := ValidateConfig(conf); err != nil {
- return err
- }
-
- // Sanity check the Raft peer configuration.
- if err := checkConfiguration(configuration); err != nil {
- return err
- }
-
- // Make sure the cluster is in a clean state.
- hasState, err := HasExistingState(logs, stable, snaps)
- if err != nil {
- return fmt.Errorf("failed to check for existing state: %v", err)
- }
- if hasState {
- return ErrCantBootstrap
- }
-
- // Set current term to 1.
- if err := stable.SetUint64(keyCurrentTerm, 1); err != nil {
- return fmt.Errorf("failed to save current term: %v", err)
- }
-
- // Append configuration entry to log.
- entry := &Log{
- Index: 1,
- Term: 1,
- }
- if conf.ProtocolVersion < 3 {
- entry.Type = LogRemovePeerDeprecated
- entry.Data = encodePeers(configuration, trans)
- } else {
- entry.Type = LogConfiguration
- entry.Data = encodeConfiguration(configuration)
- }
- if err := logs.StoreLog(entry); err != nil {
- return fmt.Errorf("failed to append configuration entry to log: %v", err)
- }
-
- return nil
-}
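
A sketch of the single-server bootstrap pattern described in the comment above, using the library's in-memory store and transport constructors for brevity; a real deployment would substitute durable LogStore/StableStore/SnapshotStore implementations (for example raft-boltdb, mentioned in the README), and everything here is illustrative:

```go
package main

import (
	"log"

	"github.com/hashicorp/raft"
)

func main() {
	conf := raft.DefaultConfig()
	conf.LocalID = raft.ServerID("node1")

	logs := raft.NewInmemStore()   // serves as the LogStore...
	stable := raft.NewInmemStore() // ...and as the StableStore here
	snaps := raft.NewDiscardSnapshotStore()
	addr, trans := raft.NewInmemTransport("")

	// A configuration listing just this server as a Voter, per the
	// doc comment above.
	configuration := raft.Configuration{
		Servers: []raft.Server{{
			Suffrage: raft.Voter,
			ID:       conf.LocalID,
			Address:  addr,
		}},
	}
	if err := raft.BootstrapCluster(conf, logs, stable, snaps, trans, configuration); err != nil {
		log.Fatalf("bootstrap: %v", err)
	}
	// NewRaft would now be called with the same stores, transport, and
	// an FSM, and AddVoter() used later to grow the cluster.
}
```
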
-
-// RecoverCluster is used to manually force a new configuration in order to
-// recover from a loss of quorum where the current configuration cannot be
-// restored, such as when several servers die at the same time. This works by
-// reading all the current state for this server, creating a snapshot with the
-// supplied configuration, and then truncating the Raft log. This is the only
-// safe way to force a given configuration without actually altering the log to
-// insert any new entries, which could cause conflicts with other servers with
-// different state.
-//
-// WARNING! This operation implicitly commits all entries in the Raft log, so
-// in general this is an extremely unsafe operation. If you've lost your other
-// servers and are performing a manual recovery, then you've also lost the
-// commit information, so this is likely the best you can do, but you should be
-// aware that calling this can cause Raft log entries that were in the process
-// of being replicated, but not yet committed, to be committed.
-//
-// Note the FSM passed here is used for the snapshot operations and will be
-// left in a state that should not be used by the application. Be sure to
-// discard this FSM and any associated state and provide a fresh one when
-// calling NewRaft later.
-//
-// A typical way to recover the cluster is to shut down all servers and then
-// run RecoverCluster on every server using an identical configuration. When
-// the cluster is then restarted, an election should occur and then Raft will
-// resume normal operation. If it's desired to make a particular server the
-// leader, this can be used to inject a new configuration with that server as
-// the sole voter, and then join up other new clean-state peer servers using
-// the usual APIs in order to bring the cluster back into a known state.
-func RecoverCluster(conf *Config, fsm FSM, logs LogStore, stable StableStore,
- snaps SnapshotStore, trans Transport, configuration Configuration) error {
- // Validate the Raft server config.
- if err := ValidateConfig(conf); err != nil {
- return err
- }
-
- // Sanity check the Raft peer configuration.
- if err := checkConfiguration(configuration); err != nil {
- return err
- }
-
- // Refuse to recover if there's no existing state. This would be safe to
- // do, but it is likely an indication of an operator error where they
- // expect data to be there and it's not. By refusing, we force them
- // to show intent to start a cluster fresh by explicitly doing a
- // bootstrap, rather than quietly fire up a fresh cluster here.
- hasState, err := HasExistingState(logs, stable, snaps)
- if err != nil {
- return fmt.Errorf("failed to check for existing state: %v", err)
- }
- if !hasState {
- return fmt.Errorf("refused to recover cluster with no initial state, this is probably an operator error")
- }
-
- // Attempt to restore any snapshots we find, newest to oldest.
- var snapshotIndex uint64
- var snapshotTerm uint64
- snapshots, err := snaps.List()
- if err != nil {
- return fmt.Errorf("failed to list snapshots: %v", err)
- }
- for _, snapshot := range snapshots {
- _, source, err := snaps.Open(snapshot.ID)
- if err != nil {
- // Skip this one and try the next. We will detect if we
- // couldn't open any snapshots.
- continue
- }
- defer source.Close()
-
- if err := fsm.Restore(source); err != nil {
- // Same here, skip and try the next one.
- continue
- }
-
- snapshotIndex = snapshot.Index
- snapshotTerm = snapshot.Term
- break
- }
- if len(snapshots) > 0 && (snapshotIndex == 0 || snapshotTerm == 0) {
- return fmt.Errorf("failed to restore any of the available snapshots")
- }
-
- // The snapshot information is the best known end point for the data
- // until we play back the Raft log entries.
- lastIndex := snapshotIndex
- lastTerm := snapshotTerm
-
- // Apply any Raft log entries past the snapshot.
- lastLogIndex, err := logs.LastIndex()
- if err != nil {
- return fmt.Errorf("failed to find last log: %v", err)
- }
- for index := snapshotIndex + 1; index <= lastLogIndex; index++ {
- var entry Log
- if err := logs.GetLog(index, &entry); err != nil {
- return fmt.Errorf("failed to get log at index %d: %v", index, err)
- }
- if entry.Type == LogCommand {
- _ = fsm.Apply(&entry)
- }
- lastIndex = entry.Index
- lastTerm = entry.Term
- }
-
- // Create a new snapshot, placing the configuration in as if it was
- // committed at index 1.
- snapshot, err := fsm.Snapshot()
- if err != nil {
- return fmt.Errorf("failed to snapshot FSM: %v", err)
- }
- version := getSnapshotVersion(conf.ProtocolVersion)
- sink, err := snaps.Create(version, lastIndex, lastTerm, configuration, 1, trans)
- if err != nil {
- return fmt.Errorf("failed to create snapshot: %v", err)
- }
- if err := snapshot.Persist(sink); err != nil {
- return fmt.Errorf("failed to persist snapshot: %v", err)
- }
- if err := sink.Close(); err != nil {
- return fmt.Errorf("failed to finalize snapshot: %v", err)
- }
-
- // Compact the log so that we don't get bad interference from any
- // configuration change log entries that might be there.
- firstLogIndex, err := logs.FirstIndex()
- if err != nil {
- return fmt.Errorf("failed to get first log index: %v", err)
- }
- if err := logs.DeleteRange(firstLogIndex, lastLogIndex); err != nil {
- return fmt.Errorf("log compaction failed: %v", err)
- }
-
- return nil
-}
-
-// HasExistingState returns true if the server has any existing state (logs,
-// knowledge of a current term, or any snapshots).
-func HasExistingState(logs LogStore, stable StableStore, snaps SnapshotStore) (bool, error) {
- // Make sure we don't have a current term.
- currentTerm, err := stable.GetUint64(keyCurrentTerm)
- if err == nil {
- if currentTerm > 0 {
- return true, nil
- }
- } else {
- if err.Error() != "not found" {
- return false, fmt.Errorf("failed to read current term: %v", err)
- }
- }
-
- // Make sure we have an empty log.
- lastIndex, err := logs.LastIndex()
- if err != nil {
- return false, fmt.Errorf("failed to get last log index: %v", err)
- }
- if lastIndex > 0 {
- return true, nil
- }
-
- // Make sure we have no snapshots
- snapshots, err := snaps.List()
- if err != nil {
- return false, fmt.Errorf("failed to list snapshots: %v", err)
- }
- if len(snapshots) > 0 {
- return true, nil
- }
-
- return false, nil
-}
-
-// NewRaft is used to construct a new Raft node. It takes a configuration, as well
-// as implementations of various interfaces that are required. If we have any
-// old state, such as snapshots, logs, peers, etc, all those will be restored
-// when creating the Raft node.
-func NewRaft(conf *Config, fsm FSM, logs LogStore, stable StableStore, snaps SnapshotStore, trans Transport) (*Raft, error) {
- // Validate the configuration.
- if err := ValidateConfig(conf); err != nil {
- return nil, err
- }
-
- // Ensure we have a LogOutput.
- var logger *log.Logger
- if conf.Logger != nil {
- logger = conf.Logger
- } else {
- if conf.LogOutput == nil {
- conf.LogOutput = os.Stderr
- }
- logger = log.New(conf.LogOutput, "", log.LstdFlags)
- }
-
- // Try to restore the current term.
- currentTerm, err := stable.GetUint64(keyCurrentTerm)
- if err != nil && err.Error() != "not found" {
- return nil, fmt.Errorf("failed to load current term: %v", err)
- }
-
- // Read the index of the last log entry.
- lastIndex, err := logs.LastIndex()
- if err != nil {
- return nil, fmt.Errorf("failed to find last log: %v", err)
- }
-
- // Get the last log entry.
- var lastLog Log
- if lastIndex > 0 {
- if err = logs.GetLog(lastIndex, &lastLog); err != nil {
- return nil, fmt.Errorf("failed to get last log at index %d: %v", lastIndex, err)
- }
- }
-
- // Make sure we have a valid server address and ID.
- protocolVersion := conf.ProtocolVersion
- localAddr := ServerAddress(trans.LocalAddr())
- localID := conf.LocalID
-
- // TODO (slackpad) - When we deprecate protocol version 2, remove this
- // along with the AddPeer() and RemovePeer() APIs.
- if protocolVersion < 3 && string(localID) != string(localAddr) {
- return nil, fmt.Errorf("when running with ProtocolVersion < 3, LocalID must be set to the network address")
- }
-
- // Create Raft struct.
- r := &Raft{
- protocolVersion: protocolVersion,
- applyCh: make(chan *logFuture),
- conf: *conf,
- fsm: fsm,
- fsmMutateCh: make(chan interface{}, 128),
- fsmSnapshotCh: make(chan *reqSnapshotFuture),
- leaderCh: make(chan bool),
- localID: localID,
- localAddr: localAddr,
- logger: logger,
- logs: logs,
- configurationChangeCh: make(chan *configurationChangeFuture),
- configurations: configurations{},
- rpcCh: trans.Consumer(),
- snapshots: snaps,
- userSnapshotCh: make(chan *userSnapshotFuture),
- userRestoreCh: make(chan *userRestoreFuture),
- shutdownCh: make(chan struct{}),
- stable: stable,
- trans: trans,
- verifyCh: make(chan *verifyFuture, 64),
- configurationsCh: make(chan *configurationsFuture, 8),
- bootstrapCh: make(chan *bootstrapFuture),
- observers: make(map[uint64]*Observer),
- }
-
- // Initialize as a follower.
- r.setState(Follower)
-
- // Start as leader if specified. This should only be used
- // for testing purposes.
- if conf.StartAsLeader {
- r.setState(Leader)
- r.setLeader(r.localAddr)
- }
-
- // Restore the current term and the last log.
- r.setCurrentTerm(currentTerm)
- r.setLastLog(lastLog.Index, lastLog.Term)
-
- // Attempt to restore a snapshot if there are any.
- if err := r.restoreSnapshot(); err != nil {
- return nil, err
- }
-
- // Scan through the log for any configuration change entries.
- snapshotIndex, _ := r.getLastSnapshot()
- for index := snapshotIndex + 1; index <= lastLog.Index; index++ {
- var entry Log
- if err := r.logs.GetLog(index, &entry); err != nil {
- r.logger.Printf("[ERR] raft: Failed to get log at %d: %v", index, err)
- panic(err)
- }
- r.processConfigurationLogEntry(&entry)
- }
- r.logger.Printf("[INFO] raft: Initial configuration (index=%d): %+v",
- r.configurations.latestIndex, r.configurations.latest.Servers)
-
- // Setup a heartbeat fast-path to avoid head-of-line
- // blocking where possible. It MUST be safe for this
- // to be called concurrently with a blocking RPC.
- trans.SetHeartbeatHandler(r.processHeartbeat)
-
- // Start the background work.
- r.goFunc(r.run)
- r.goFunc(r.runFSM)
- r.goFunc(r.runSnapshots)
- return r, nil
-}
-
-// restoreSnapshot attempts to restore the latest snapshots, and fails if none
-// of them can be restored. This is called at initialization time, and is
-// completely unsafe to call at any other time.
-func (r *Raft) restoreSnapshot() error {
- snapshots, err := r.snapshots.List()
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to list snapshots: %v", err)
- return err
- }
-
- // Try to load in order of newest to oldest
- for _, snapshot := range snapshots {
- _, source, err := r.snapshots.Open(snapshot.ID)
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to open snapshot %v: %v", snapshot.ID, err)
- continue
- }
- defer source.Close()
-
- if err := r.fsm.Restore(source); err != nil {
- r.logger.Printf("[ERR] raft: Failed to restore snapshot %v: %v", snapshot.ID, err)
- continue
- }
-
- // Log success
- r.logger.Printf("[INFO] raft: Restored from snapshot %v", snapshot.ID)
-
- // Update the lastApplied so we don't replay old logs
- r.setLastApplied(snapshot.Index)
-
- // Update the last stable snapshot info
- r.setLastSnapshot(snapshot.Index, snapshot.Term)
-
- // Update the configuration
- if snapshot.Version > 0 {
- r.configurations.committed = snapshot.Configuration
- r.configurations.committedIndex = snapshot.ConfigurationIndex
- r.configurations.latest = snapshot.Configuration
- r.configurations.latestIndex = snapshot.ConfigurationIndex
- } else {
- configuration := decodePeers(snapshot.Peers, r.trans)
- r.configurations.committed = configuration
- r.configurations.committedIndex = snapshot.Index
- r.configurations.latest = configuration
- r.configurations.latestIndex = snapshot.Index
- }
-
- // Success!
- return nil
- }
-
- // If we had snapshots and failed to load them, it's an error
- if len(snapshots) > 0 {
- return fmt.Errorf("failed to load any existing snapshots")
- }
- return nil
-}
-
-// BootstrapCluster is equivalent to non-member BootstrapCluster but can be
-// called on an un-bootstrapped Raft instance after it has been created. This
-// should only be called at the beginning of time for the cluster, and you
-// absolutely must make sure that you call it with the same configuration on all
-// the Voter servers. There is no need to bootstrap Nonvoter and Staging
-// servers.
-func (r *Raft) BootstrapCluster(configuration Configuration) Future {
- bootstrapReq := &bootstrapFuture{}
- bootstrapReq.init()
- bootstrapReq.configuration = configuration
- select {
- case <-r.shutdownCh:
- return errorFuture{ErrRaftShutdown}
- case r.bootstrapCh <- bootstrapReq:
- return bootstrapReq
- }
-}
-
-// Leader is used to return the current leader of the cluster.
-// It may return an empty string if there is no current leader
-// or the leader is unknown.
-func (r *Raft) Leader() ServerAddress {
- r.leaderLock.RLock()
- leader := r.leader
- r.leaderLock.RUnlock()
- return leader
-}
-
-// Apply is used to apply a command to the FSM in a highly consistent
-// manner. This returns a future that can be used to wait on the application.
-// An optional timeout can be provided to limit the amount of time we wait
-// for the command to be started. This must be run on the leader or it
-// will fail.
-func (r *Raft) Apply(cmd []byte, timeout time.Duration) ApplyFuture {
- metrics.IncrCounter([]string{"raft", "apply"}, 1)
- var timer <-chan time.Time
- if timeout > 0 {
- timer = time.After(timeout)
- }
-
- // Create a log future, no index or term yet
- logFuture := &logFuture{
- log: Log{
- Type: LogCommand,
- Data: cmd,
- },
- }
- logFuture.init()
-
- select {
- case <-timer:
- return errorFuture{ErrEnqueueTimeout}
- case <-r.shutdownCh:
- return errorFuture{ErrRaftShutdown}
- case r.applyCh <- logFuture:
- return logFuture
- }
-}
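
From a client's perspective, a hedged sketch of driving Apply; it assumes the ApplyFuture returned here exposes Error() and Response(), with Response() yielding whatever the FSM's Apply returned:

```go
package kvexample

import (
	"time"

	"github.com/hashicorp/raft"
)

// applySet is a sketch, not library code: it submits a command on the
// leader and blocks until the FSM has applied it or an error occurs.
func applySet(r *raft.Raft, cmd []byte) (interface{}, error) {
	f := r.Apply(cmd, 5*time.Second)
	if err := f.Error(); err != nil {
		// Typical failures: raft.ErrNotLeader on a follower, or
		// raft.ErrEnqueueTimeout if the 5s budget elapsed first.
		return nil, err
	}
	// Response() is whatever the FSM's Apply returned for this entry.
	return f.Response(), nil
}
```
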
-
-// Barrier is used to issue a command that blocks until all preceding
-// operations have been applied to the FSM. It can be used to ensure the
-// FSM reflects all queued writes. An optional timeout can be provided to
-// limit the amount of time we wait for the command to be started. This
-// must be run on the leader or it will fail.
-func (r *Raft) Barrier(timeout time.Duration) Future {
- metrics.IncrCounter([]string{"raft", "barrier"}, 1)
- var timer <-chan time.Time
- if timeout > 0 {
- timer = time.After(timeout)
- }
-
- // Create a log future, no index or term yet
- logFuture := &logFuture{
- log: Log{
- Type: LogBarrier,
- },
- }
- logFuture.init()
-
- select {
- case <-timer:
- return errorFuture{ErrEnqueueTimeout}
- case <-r.shutdownCh:
- return errorFuture{ErrRaftShutdown}
- case r.applyCh <- logFuture:
- return logFuture
- }
-}
-
-// VerifyLeader is used to ensure the current node is still
-// the leader. This can be done to prevent stale reads when a
-// new leader has potentially been elected.
-func (r *Raft) VerifyLeader() Future {
- metrics.IncrCounter([]string{"raft", "verify_leader"}, 1)
- verifyFuture := &verifyFuture{}
- verifyFuture.init()
- select {
- case <-r.shutdownCh:
- return errorFuture{ErrRaftShutdown}
- case r.verifyCh <- verifyFuture:
- return verifyFuture
- }
-}
-
-// GetConfiguration returns the latest configuration and its associated index
-// currently in use. This may not yet be committed. This must not be called on
-// the main thread (which can access the information directly).
-func (r *Raft) GetConfiguration() ConfigurationFuture {
- configReq := &configurationsFuture{}
- configReq.init()
- select {
- case <-r.shutdownCh:
- configReq.respond(ErrRaftShutdown)
- return configReq
- case r.configurationsCh <- configReq:
- return configReq
- }
-}
-
-// AddPeer (deprecated) is used to add a new peer into the cluster. This must be
-// run on the leader or it will fail. Use AddVoter/AddNonvoter instead.
-func (r *Raft) AddPeer(peer ServerAddress) Future {
- if r.protocolVersion > 2 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: AddStaging,
- serverID: ServerID(peer),
- serverAddress: peer,
- prevIndex: 0,
- }, 0)
-}
-
-// RemovePeer (deprecated) is used to remove a peer from the cluster. If the
-// current leader is being removed, it will cause a new election
-// to occur. This must be run on the leader or it will fail.
-// Use RemoveServer instead.
-func (r *Raft) RemovePeer(peer ServerAddress) Future {
- if r.protocolVersion > 2 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: RemoveServer,
- serverID: ServerID(peer),
- prevIndex: 0,
- }, 0)
-}
-
-// AddVoter will add the given server to the cluster as a staging server. If the
-// server is already in the cluster as a voter, this does nothing. This must be
-// run on the leader or it will fail. The leader will promote the staging server
-// to a voter once that server is ready. If nonzero, prevIndex is the index of
-// the only configuration upon which this change may be applied; if another
-// configuration entry has been added in the meantime, this request will fail.
-// If nonzero, timeout is how long this server should wait before the
-// configuration change log entry is appended.
-func (r *Raft) AddVoter(id ServerID, address ServerAddress, prevIndex uint64, timeout time.Duration) IndexFuture {
- if r.protocolVersion < 2 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: AddStaging,
- serverID: id,
- serverAddress: address,
- prevIndex: prevIndex,
- }, timeout)
-}
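As a sketch, a leader-side helper that adds a voting member and waits for the configuration change to be appended (the ID and address are made up; imports as above):

    // addNode joins a new voter. prevIndex 0 means "apply against whatever
    // configuration is current"; 10s bounds the wait for the append.
    func addNode(r *raft.Raft) error {
        f := r.AddVoter(
            raft.ServerID("node-4"),
            raft.ServerAddress("10.0.0.4:8300"),
            0,
            10*time.Second,
        )
        return f.Error()
    }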
-
-// AddNonvoter will add the given server to the cluster but won't assign it a
-// vote. The server will receive log entries, but it won't participate in
-// elections or log entry commitment. If the server is already in the cluster as
-// a staging server or voter, this only updates its address. This must be run on the leader
-// or it will fail. For prevIndex and timeout, see AddVoter.
-func (r *Raft) AddNonvoter(id ServerID, address ServerAddress, prevIndex uint64, timeout time.Duration) IndexFuture {
- if r.protocolVersion < 3 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: AddNonvoter,
- serverID: id,
- serverAddress: address,
- prevIndex: prevIndex,
- }, timeout)
-}
-
-// RemoveServer will remove the given server from the cluster. If the current
-// leader is being removed, it will cause a new election to occur. This must be
-// run on the leader or it will fail. For prevIndex and timeout, see AddVoter.
-func (r *Raft) RemoveServer(id ServerID, prevIndex uint64, timeout time.Duration) IndexFuture {
- if r.protocolVersion < 2 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: RemoveServer,
- serverID: id,
- prevIndex: prevIndex,
- }, timeout)
-}
-
-// DemoteVoter will take away a server's vote, if it has one. If present, the
-// server will continue to receive log entries, but it won't participate in
-// elections or log entry commitment. If the server is not in the cluster, this
-// does nothing. This must be run on the leader or it will fail. For prevIndex
-// and timeout, see AddVoter.
-func (r *Raft) DemoteVoter(id ServerID, prevIndex uint64, timeout time.Duration) IndexFuture {
- if r.protocolVersion < 3 {
- return errorFuture{ErrUnsupportedProtocol}
- }
-
- return r.requestConfigChange(configurationChangeRequest{
- command: DemoteVoter,
- serverID: id,
- prevIndex: prevIndex,
- }, timeout)
-}
-
-// Shutdown is used to stop the Raft background routines.
-// This is not a graceful operation. It provides a future that
-// can be used to block until all background routines have exited.
-func (r *Raft) Shutdown() Future {
- r.shutdownLock.Lock()
- defer r.shutdownLock.Unlock()
-
- if !r.shutdown {
- close(r.shutdownCh)
- r.shutdown = true
- r.setState(Shutdown)
- return &shutdownFuture{r}
- }
-
- // avoid closing transport twice
- return &shutdownFuture{nil}
-}
-
-// Snapshot is used to manually force Raft to take a snapshot. Returns a future
-// that can be used to block until complete, and that contains a function that
-// can be used to open the snapshot.
-func (r *Raft) Snapshot() SnapshotFuture {
- future := &userSnapshotFuture{}
- future.init()
- select {
- case r.userSnapshotCh <- future:
- return future
- case <-r.shutdownCh:
- future.respond(ErrRaftShutdown)
- return future
- }
-}
-
-// Restore is used to manually force Raft to consume an external snapshot, such
-// as if restoring from a backup. We will use the current Raft configuration,
-// not the one from the snapshot, so that we can restore into a new cluster. We
-// will also use the higher of the index of the snapshot, or the current index,
-// and then add 1 to that, so we force a new state with a hole in the Raft log,
-// so that the snapshot will be sent to followers and used for any new joiners.
-// This can only be run on the leader, and blocks until the restore is complete
-// or an error occurs.
-//
-// WARNING! This operation has the leader take on the state of the snapshot and
-// then sets itself up so that it replicates that to its followers through the
-// install snapshot process. This involves a potentially dangerous period where
-// the leader commits ahead of its followers, so it should only be used for disaster
-// recovery into a fresh cluster, and should not be used in normal operations.
-func (r *Raft) Restore(meta *SnapshotMeta, reader io.Reader, timeout time.Duration) error {
- metrics.IncrCounter([]string{"raft", "restore"}, 1)
- var timer <-chan time.Time
- if timeout > 0 {
- timer = time.After(timeout)
- }
-
- // Perform the restore.
- restore := &userRestoreFuture{
- meta: meta,
- reader: reader,
- }
- restore.init()
- select {
- case <-timer:
- return ErrEnqueueTimeout
- case <-r.shutdownCh:
- return ErrRaftShutdown
- case r.userRestoreCh <- restore:
- // If the restore is ingested then wait for it to complete.
- if err := restore.Error(); err != nil {
- return err
- }
- }
-
-// Apply a no-op log entry. Waiting for this to be applied ensures that the
-// followers have received the restore and replicated at least this new
-// entry, which shows that they have faulted in and installed the
-// snapshot with the contents of the restore.
- noop := &logFuture{
- log: Log{
- Type: LogNoop,
- },
- }
- noop.init()
- select {
- case <-timer:
- return ErrEnqueueTimeout
- case <-r.shutdownCh:
- return ErrRaftShutdown
- case r.applyCh <- noop:
- return noop.Error()
- }
-}
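A disaster-recovery sketch built on the warning above (helper name hypothetical; meta and the reader would come from the external backup; imports: io, time, github.com/hashicorp/raft):

    // recoverFromBackup feeds an external snapshot to the leader of a fresh
    // cluster and blocks until the restore and trailing no-op complete.
    func recoverFromBackup(r *raft.Raft, meta *raft.SnapshotMeta, backup io.Reader) error {
        return r.Restore(meta, backup, 30*time.Second)
    }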
-
-// State is used to return the current raft state.
-func (r *Raft) State() RaftState {
- return r.getState()
-}
-
-// LeaderCh is used to get a channel which delivers signals on
-// acquiring or losing leadership. It sends true if we become
-// the leader, and false if we lose it. The channel is not buffered,
-// and does not block on writes.
-func (r *Raft) LeaderCh() <-chan bool {
- return r.leaderCh
-}
-
-// String returns a string representation of this Raft node.
-func (r *Raft) String() string {
- return fmt.Sprintf("Node at %s [%v]", r.localAddr, r.getState())
-}
-
-// LastContact returns the time of last contact by a leader.
-// This only makes sense if we are currently a follower.
-func (r *Raft) LastContact() time.Time {
- r.lastContactLock.RLock()
- last := r.lastContact
- r.lastContactLock.RUnlock()
- return last
-}
-
-// Stats is used to return a map of various internal stats. This
-// should only be used for informative purposes or debugging.
-//
-// Keys are: "state", "term", "last_log_index", "last_log_term",
-// "commit_index", "applied_index", "fsm_pending",
-// "last_snapshot_index", "last_snapshot_term",
-// "latest_configuration", "last_contact", and "num_peers".
-//
-// The value of "state" is a numerical value representing a
-// RaftState const.
-//
-// The value of "latest_configuration" is a string which contains
-// the id of each server, its suffrage status, and its address.
-//
-// The value of "last_contact" is either "never" if there
-// has been no contact with a leader, "0" if the node is in the
-// leader state, or the time since last contact with a leader
-// formatted as a string.
-//
-// The value of "num_peers" is the number of other voting servers in the
-// cluster, not including this node. If this node isn't part of the
-// configuration then this will be "0".
-//
-// All other values are uint64s, formatted as strings.
-func (r *Raft) Stats() map[string]string {
- toString := func(v uint64) string {
- return strconv.FormatUint(v, 10)
- }
- lastLogIndex, lastLogTerm := r.getLastLog()
- lastSnapIndex, lastSnapTerm := r.getLastSnapshot()
- s := map[string]string{
- "state": r.getState().String(),
- "term": toString(r.getCurrentTerm()),
- "last_log_index": toString(lastLogIndex),
- "last_log_term": toString(lastLogTerm),
- "commit_index": toString(r.getCommitIndex()),
- "applied_index": toString(r.getLastApplied()),
- "fsm_pending": toString(uint64(len(r.fsmMutateCh))),
- "last_snapshot_index": toString(lastSnapIndex),
- "last_snapshot_term": toString(lastSnapTerm),
- "protocol_version": toString(uint64(r.protocolVersion)),
- "protocol_version_min": toString(uint64(ProtocolVersionMin)),
- "protocol_version_max": toString(uint64(ProtocolVersionMax)),
- "snapshot_version_min": toString(uint64(SnapshotVersionMin)),
- "snapshot_version_max": toString(uint64(SnapshotVersionMax)),
- }
-
- future := r.GetConfiguration()
- if err := future.Error(); err != nil {
- r.logger.Printf("[WARN] raft: could not get configuration for Stats: %v", err)
- } else {
- configuration := future.Configuration()
- s["latest_configuration_index"] = toString(future.Index())
- s["latest_configuration"] = fmt.Sprintf("%+v", configuration.Servers)
-
- // This is a legacy metric that we've seen people use in the wild.
- hasUs := false
- numPeers := 0
- for _, server := range configuration.Servers {
- if server.Suffrage == Voter {
- if server.ID == r.localID {
- hasUs = true
- } else {
- numPeers++
- }
- }
- }
- if !hasUs {
- numPeers = 0
- }
- s["num_peers"] = toString(uint64(numPeers))
- }
-
- last := r.LastContact()
- if last.IsZero() {
- s["last_contact"] = "never"
- } else if r.getState() == Leader {
- s["last_contact"] = "0"
- } else {
- s["last_contact"] = fmt.Sprintf("%v", time.Now().Sub(last))
- }
- return s
-}
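For example, a debugging hook might surface a few of these keys (hypothetical helper; imports: log, github.com/hashicorp/raft):

    // logRaftStats emits a one-line summary; every value is a string.
    func logRaftStats(r *raft.Raft) {
        s := r.Stats()
        log.Printf("raft state=%s term=%s commit=%s applied=%s last_contact=%s",
            s["state"], s["term"], s["commit_index"], s["applied_index"], s["last_contact"])
    }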
-
-// LastIndex returns the last index in stable storage,
-// either from the last log or from the last snapshot.
-func (r *Raft) LastIndex() uint64 {
- return r.getLastIndex()
-}
-
-// AppliedIndex returns the last index applied to the FSM. This is generally
-// lagging behind the last index, especially for indexes that are persisted but
-// have not yet been considered committed by the leader. NOTE - this reflects
-// the last index that was sent to the application's FSM over the apply channel
-// but DOES NOT mean that the application's FSM has yet consumed it and applied
-// it to its internal state. Thus, the application's state may lag behind this
-// index.
-func (r *Raft) AppliedIndex() uint64 {
- return r.getLastApplied()
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commands.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commands.go
deleted file mode 100644
index 5d89e7bc..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commands.go
+++ /dev/null
@@ -1,151 +0,0 @@
-package raft
-
-// RPCHeader is a common sub-structure used to pass along protocol version and
-// other information about the cluster. For older Raft implementations before
-// versioning was added this will default to a zero-valued structure when read
-// by newer Raft versions.
-type RPCHeader struct {
- // ProtocolVersion is the version of the protocol the sender is
- // speaking.
- ProtocolVersion ProtocolVersion
-}
-
-// WithRPCHeader is an interface that exposes the RPC header.
-type WithRPCHeader interface {
- GetRPCHeader() RPCHeader
-}
-
-// AppendEntriesRequest is the command used to append entries to the
-// replicated log.
-type AppendEntriesRequest struct {
- RPCHeader
-
- // Provide the current term and leader
- Term uint64
- Leader []byte
-
- // Provide the previous entries for integrity checking
- PrevLogEntry uint64
- PrevLogTerm uint64
-
- // New entries to commit
- Entries []*Log
-
- // Commit index on the leader
- LeaderCommitIndex uint64
-}
-
-// See WithRPCHeader.
-func (r *AppendEntriesRequest) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
-
-// AppendEntriesResponse is the response returned from an
-// AppendEntriesRequest.
-type AppendEntriesResponse struct {
- RPCHeader
-
- // Newer term if leader is out of date
- Term uint64
-
- // Last Log is a hint to help accelerate rebuilding slow nodes
- LastLog uint64
-
- // We may not succeed if we have a conflicting entry
- Success bool
-
- // There are scenarios where this request didn't succeed
- // but there's no need to wait/back-off the next attempt.
- NoRetryBackoff bool
-}
-
-// See WithRPCHeader.
-func (r *AppendEntriesResponse) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
-
-// RequestVoteRequest is the command used by a candidate to ask a Raft peer
-// for a vote in an election.
-type RequestVoteRequest struct {
- RPCHeader
-
- // Provide the term and our id
- Term uint64
- Candidate []byte
-
- // Used to ensure safety
- LastLogIndex uint64
- LastLogTerm uint64
-}
-
-// See WithRPCHeader.
-func (r *RequestVoteRequest) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
-
-// RequestVoteResponse is the response returned from a RequestVoteRequest.
-type RequestVoteResponse struct {
- RPCHeader
-
- // Newer term if leader is out of date.
- Term uint64
-
- // Peers is deprecated, but required by servers that only understand
- // protocol version 0. This is not populated in protocol version 2
- // and later.
- Peers []byte
-
- // Is the vote granted.
- Granted bool
-}
-
-// See WithRPCHeader.
-func (r *RequestVoteResponse) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
-
-// InstallSnapshotRequest is the command sent to a Raft peer to bootstrap its
-// log (and state machine) from a snapshot on another peer.
-type InstallSnapshotRequest struct {
- RPCHeader
- SnapshotVersion SnapshotVersion
-
- Term uint64
- Leader []byte
-
- // These are the last index/term included in the snapshot
- LastLogIndex uint64
- LastLogTerm uint64
-
- // Peer Set in the snapshot. This is deprecated in favor of Configuration
- // but remains here in case we receive an InstallSnapshot from a leader
- // that's running old code.
- Peers []byte
-
- // Cluster membership.
- Configuration []byte
- // Log index where 'Configuration' entry was originally written.
- ConfigurationIndex uint64
-
- // Size of the snapshot
- Size int64
-}
-
-// See WithRPCHeader.
-func (r *InstallSnapshotRequest) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
-
-// InstallSnapshotResponse is the response returned from an
-// InstallSnapshotRequest.
-type InstallSnapshotResponse struct {
- RPCHeader
-
- Term uint64
- Success bool
-}
-
-// See WithRPCHeader.
-func (r *InstallSnapshotResponse) GetRPCHeader() RPCHeader {
- return r.RPCHeader
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commitment.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commitment.go
deleted file mode 100644
index b5ba2634..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/commitment.go
+++ /dev/null
@@ -1,101 +0,0 @@
-package raft
-
-import (
- "sort"
- "sync"
-)
-
-// Commitment is used to advance the leader's commit index. The leader and
-// replication goroutines report in newly written entries with Match(), and
-// this notifies on commitCh when the commit index has advanced.
-type commitment struct {
- // protects matchIndexes and commitIndex
- sync.Mutex
- // notified when commitIndex increases
- commitCh chan struct{}
- // voter ID to log index: the server stores up through this log entry
- matchIndexes map[ServerID]uint64
- // a quorum stores up through this log entry. monotonically increases.
- commitIndex uint64
- // the first index of this leader's term: this needs to be replicated to a
- // majority of the cluster before this leader may mark anything committed
- // (per Raft's commitment rule)
- startIndex uint64
-}
-
-// newCommitment returns a commitment struct that notifies the provided
-// channel when log entries have been committed. A new commitment struct is
-// created each time this server becomes leader for a particular term.
-// 'configuration' is the servers in the cluster.
-// 'startIndex' is the first index created in this term (see
-// its description above).
-func newCommitment(commitCh chan struct{}, configuration Configuration, startIndex uint64) *commitment {
- matchIndexes := make(map[ServerID]uint64)
- for _, server := range configuration.Servers {
- if server.Suffrage == Voter {
- matchIndexes[server.ID] = 0
- }
- }
- return &commitment{
- commitCh: commitCh,
- matchIndexes: matchIndexes,
- commitIndex: 0,
- startIndex: startIndex,
- }
-}
-
-// Called when a new cluster membership configuration is created: it will be
-// used to determine commitment from now on. 'configuration' is the servers in
-// the cluster.
-func (c *commitment) setConfiguration(configuration Configuration) {
- c.Lock()
- defer c.Unlock()
- oldMatchIndexes := c.matchIndexes
- c.matchIndexes = make(map[ServerID]uint64)
- for _, server := range configuration.Servers {
- if server.Suffrage == Voter {
- c.matchIndexes[server.ID] = oldMatchIndexes[server.ID] // defaults to 0
- }
- }
- c.recalculate()
-}
-
-// Called by leader after commitCh is notified
-func (c *commitment) getCommitIndex() uint64 {
- c.Lock()
- defer c.Unlock()
- return c.commitIndex
-}
-
-// Match is called once a server completes writing entries to disk: either the
-// leader has written the new entry or a follower has replied to an
-// AppendEntries RPC. The given server's disk agrees with this server's log up
-// through the given index.
-func (c *commitment) match(server ServerID, matchIndex uint64) {
- c.Lock()
- defer c.Unlock()
- if prev, hasVote := c.matchIndexes[server]; hasVote && matchIndex > prev {
- c.matchIndexes[server] = matchIndex
- c.recalculate()
- }
-}
-
-// Internal helper to calculate new commitIndex from matchIndexes.
-// Must be called with lock held.
-func (c *commitment) recalculate() {
- if len(c.matchIndexes) == 0 {
- return
- }
-
- matched := make([]uint64, 0, len(c.matchIndexes))
- for _, idx := range c.matchIndexes {
- matched = append(matched, idx)
- }
- sort.Sort(uint64Slice(matched))
- quorumMatchIndex := matched[(len(matched)-1)/2]
-
- if quorumMatchIndex > c.commitIndex && quorumMatchIndex >= c.startIndex {
- c.commitIndex = quorumMatchIndex
- asyncNotifyCh(c.commitCh)
- }
-}
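To make the quorum rule concrete: with voters at match indexes 9, 5 and 7, the ascending slice is [5 7 9] and matched[(3-1)/2] picks 7, the highest index that a majority (two of three) has stored. A standalone re-creation of that computation:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        matched := []uint64{9, 5, 7}
        sort.Slice(matched, func(i, j int) bool { return matched[i] < matched[j] })
        // Position (len-1)/2 of the ascending slice is the largest index
        // persisted by a majority of the voters.
        fmt.Println(matched[(len(matched)-1)/2]) // prints 7
    }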
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/config.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/config.go
deleted file mode 100644
index c1ce03ac..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/config.go
+++ /dev/null
@@ -1,258 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
- "log"
- "time"
-)
-
-// These are the versions of the protocol (which includes RPC messages as
-// well as Raft-specific log entries) that this server can _understand_. Use
-// the ProtocolVersion member of the Config object to control the version of
-// the protocol to use when _speaking_ to other servers. Note that depending on
-// the protocol version being spoken, some otherwise understood RPC messages
-// may be refused. See dispositionRPC for details of this logic.
-//
-// There are notes about the upgrade path in the description of the versions
-// below. If you are starting a fresh cluster then there's no reason not to
-// jump right to the latest protocol version. If you need to interoperate with
-// older, version 0 Raft servers you'll need to drive the cluster through the
-// different versions in order.
-//
-// The version details are complicated, but here's a summary of what's required
-// to get from a version 0 cluster to version 3:
-//
-// 1. In version N of your app that starts using the new Raft library with
-// versioning, set ProtocolVersion to 1.
-// 2. Make version N+1 of your app require version N as a prerequisite (all
-// servers must be upgraded). For version N+1 of your app set ProtocolVersion
-// to 2.
-// 3. Similarly, make version N+2 of your app require version N+1 as a
-// prerequisite. For version N+2 of your app, set ProtocolVersion to 3.
-//
-// During this upgrade, older cluster members will still have Server IDs equal
-// to their network addresses. To upgrade an older member and give it an ID, it
-// needs to leave the cluster and re-enter:
-//
-// 1. Remove the server from the cluster with RemoveServer, using its network
-// address as its ServerID.
-// 2. Update the server's config to a better ID (restarting the server).
-// 3. Add the server back to the cluster with AddVoter, using its new ID.
-//
-// You can do this during the rolling upgrade from N+1 to N+2 of your app, or
-// as a rolling change at any time after the upgrade.
-//
-// Version History
-//
-// 0: Original Raft library before versioning was added. Servers running this
-// version of the Raft library use AddPeerDeprecated/RemovePeerDeprecated
-// for all configuration changes, and have no support for LogConfiguration.
-// 1: First versioned protocol, used to interoperate with old servers, and begin
-// the migration path to newer versions of the protocol. Under this version
-// all configuration changes are propagated using the now-deprecated
-// RemovePeerDeprecated Raft log entry. This means that server IDs are always
-// set to be the same as the server addresses (since the old log entry type
-// cannot transmit an ID), and only AddPeer/RemovePeer APIs are supported.
-// Servers running this version of the protocol can understand the new
-// LogConfiguration Raft log entry but will never generate one so they can
-// remain compatible with version 0 Raft servers in the cluster.
-// 2: Transitional protocol used when migrating an existing cluster to the new
-// server ID system. Server IDs are still set to be the same as server
-// addresses, but all configuration changes are propagated using the new
-// LogConfiguration Raft log entry type, which can carry full ID information.
-// This version supports the old AddPeer/RemovePeer APIs as well as the new
-// ID-based AddVoter/RemoveServer APIs which should be used when adding
-// version 3 servers to the cluster later. This version sheds all
-// interoperability with version 0 servers, but can interoperate with newer
-// Raft servers running with protocol version 1 since they can understand the
-// new LogConfiguration Raft log entry, and this version can still understand
-// their RemovePeerDeprecated Raft log entries. We need this protocol version
-// as an intermediate step between 1 and 3 so that servers will propagate the
-// ID information that will come from newly-added (or -rolled) servers using
-// protocol version 3, but since they are still using their address-based IDs
-// from the previous step they will still be able to track commitments and
-// their own voting status properly. If we skipped this step, servers would
-// be started with their new IDs, but they wouldn't see themselves in the old
-// address-based configuration, so none of the servers would think they had a
-// vote.
-// 3: Protocol adding full support for server IDs and new ID-based server APIs
-// (AddVoter, AddNonvoter, etc.), old AddPeer/RemovePeer APIs are no longer
-// supported. Version 2 servers should be swapped out by removing them from
-// the cluster one-by-one and re-adding them with updated configuration for
-// this protocol version, along with their server ID. The remove/add cycle
-// is required to populate their server ID. Note that removing must be done
-// by ID, which will be the old server's address.
-type ProtocolVersion int
-
-const (
- ProtocolVersionMin ProtocolVersion = 0
- ProtocolVersionMax = 3
-)
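Step 1 of the upgrade path above, as a sketch (the address is made up; DefaultConfig is defined further down in this file; imports: github.com/hashicorp/raft):

    // upgradeStep1Config pins the wire protocol at 1 so version N of the
    // app can still talk to unversioned (version 0) servers.
    func upgradeStep1Config() *raft.Config {
        cfg := raft.DefaultConfig()
        cfg.ProtocolVersion = 1
        // While ProtocolVersion < 3, LocalID must equal the transport address.
        cfg.LocalID = raft.ServerID("10.0.0.1:8300")
        return cfg
    }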
-
-// These are versions of snapshots that this server can _understand_. Currently,
-// it is always assumed that this server generates the latest version, though
-// this may be changed in the future to include a configurable version.
-//
-// Version History
-//
-// 0: Original Raft library before versioning was added. The peers portion of
-// these snapshots is encoded in the legacy format which requires decodePeers
-// to parse. This version of snapshots should only be produced by the
-// unversioned Raft library.
-// 1: New format which adds support for a full configuration structure and its
-// associated log index, with support for server IDs and non-voting server
-// modes. To ease upgrades, this also includes the legacy peers structure but
-// that will never be used by servers that understand version 1 snapshots.
-// Since the original Raft library didn't enforce any versioning, we must
-// include the legacy peers structure for this version, but we can deprecate
-// it in the next snapshot version.
-type SnapshotVersion int
-
-const (
- SnapshotVersionMin SnapshotVersion = 0
- SnapshotVersionMax = 1
-)
-
-// Config provides any necessary configuration for the Raft server.
-type Config struct {
- // ProtocolVersion allows a Raft server to inter-operate with older
- // Raft servers running an older version of the code. This is used to
- // version the wire protocol as well as Raft-specific log entries that
- // the server uses when _speaking_ to other servers. There is currently
- // no auto-negotiation of versions so all servers must be manually
- // configured with compatible versions. See ProtocolVersionMin and
- // ProtocolVersionMax for the versions of the protocol that this server
- // can _understand_.
- ProtocolVersion ProtocolVersion
-
- // HeartbeatTimeout specifies the time in follower state without
- // a leader before we attempt an election.
- HeartbeatTimeout time.Duration
-
- // ElectionTimeout specifies the time in candidate state without
- // a leader before we attempt an election.
- ElectionTimeout time.Duration
-
- // CommitTimeout controls the time without an Apply() operation
- // before we heartbeat to ensure a timely commit. Due to random
- // staggering, may be delayed as much as 2x this value.
- CommitTimeout time.Duration
-
- // MaxAppendEntries controls the maximum number of append entries
- // to send at once. We want to strike a balance between efficiency
- // and avoiding waste if the follower is going to reject because of
- // an inconsistent log.
- MaxAppendEntries int
-
- // If we are a member of a cluster, and RemovePeer is invoked for the
- // local node, then we forget all peers and transition into the follower state.
- // If ShutdownOnRemove is set, we additionally shut down Raft. Otherwise,
- // we can become a leader of a cluster containing only this node.
- ShutdownOnRemove bool
-
- // TrailingLogs controls how many logs we leave after a snapshot. This is
- // used so that we can quickly replay logs on a follower instead of being
- // forced to send an entire snapshot.
- TrailingLogs uint64
-
- // SnapshotInterval controls how often we check if we should perform a snapshot.
- // We randomly stagger between this value and 2x this value to prevent the entire
- // cluster from performing a snapshot at once.
- SnapshotInterval time.Duration
-
- // SnapshotThreshold controls how many outstanding logs there must be before
- // we perform a snapshot. This is to prevent excessive snapshots when we can
- // just replay a small set of logs.
- SnapshotThreshold uint64
-
- // LeaderLeaseTimeout is used to control how long the "lease" lasts
- // for being the leader without being able to contact a quorum
- // of nodes. If we reach this interval without contact, we will
- // step down as leader.
- LeaderLeaseTimeout time.Duration
-
- // StartAsLeader forces Raft to start in the leader state. This should
- // never be used except for testing purposes, as it can cause a split-brain.
- StartAsLeader bool
-
- // The unique ID for this server across all time. When running with
- // ProtocolVersion < 3, you must set this to be the same as the network
- // address of your transport.
- LocalID ServerID
-
- // NotifyCh is used to provide a channel that will be notified of leadership
- // changes. Raft will block writing to this channel, so it should either be
- // buffered or aggressively consumed.
- NotifyCh chan<- bool
-
- // LogOutput is used as a sink for logs, unless Logger is specified.
- // Defaults to os.Stderr.
- LogOutput io.Writer
-
- // Logger is a user-provided logger. If nil, a logger writing to LogOutput
- // is used.
- Logger *log.Logger
-}
-
-// DefaultConfig returns a Config with usable defaults.
-func DefaultConfig() *Config {
- return &Config{
- ProtocolVersion: ProtocolVersionMax,
- HeartbeatTimeout: 1000 * time.Millisecond,
- ElectionTimeout: 1000 * time.Millisecond,
- CommitTimeout: 50 * time.Millisecond,
- MaxAppendEntries: 64,
- ShutdownOnRemove: true,
- TrailingLogs: 10240,
- SnapshotInterval: 120 * time.Second,
- SnapshotThreshold: 8192,
- LeaderLeaseTimeout: 500 * time.Millisecond,
- }
-}
-
-// ValidateConfig is used to check that a configuration is sane.
-func ValidateConfig(config *Config) error {
- // We don't actually support running as 0 in the library any more, but
- // we do understand it.
- protocolMin := ProtocolVersionMin
- if protocolMin == 0 {
- protocolMin = 1
- }
- if config.ProtocolVersion < protocolMin ||
- config.ProtocolVersion > ProtocolVersionMax {
- return fmt.Errorf("Protocol version %d must be >= %d and <= %d",
- config.ProtocolVersion, protocolMin, ProtocolVersionMax)
- }
- if len(config.LocalID) == 0 {
- return fmt.Errorf("LocalID cannot be empty")
- }
- if config.HeartbeatTimeout < 5*time.Millisecond {
- return fmt.Errorf("Heartbeat timeout is too low")
- }
- if config.ElectionTimeout < 5*time.Millisecond {
- return fmt.Errorf("Election timeout is too low")
- }
- if config.CommitTimeout < time.Millisecond {
- return fmt.Errorf("Commit timeout is too low")
- }
- if config.MaxAppendEntries <= 0 {
- return fmt.Errorf("MaxAppendEntries must be positive")
- }
- if config.MaxAppendEntries > 1024 {
- return fmt.Errorf("MaxAppendEntries is too large")
- }
- if config.SnapshotInterval < 5*time.Millisecond {
- return fmt.Errorf("Snapshot interval is too low")
- }
- if config.LeaderLeaseTimeout < 5*time.Millisecond {
- return fmt.Errorf("Leader lease timeout is too low")
- }
- if config.LeaderLeaseTimeout > config.HeartbeatTimeout {
- return fmt.Errorf("Leader lease timeout cannot be larger than heartbeat timeout")
- }
- if config.ElectionTimeout < config.HeartbeatTimeout {
- return fmt.Errorf("Election timeout must be equal or greater than Heartbeat Timeout")
- }
- return nil
-}
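A sketch tying the two helpers together (the ID and threshold are illustrative; imports: github.com/hashicorp/raft):

    // buildConfig starts from the defaults, overrides what the deployment
    // needs, and validates before the node is constructed.
    func buildConfig() (*raft.Config, error) {
        cfg := raft.DefaultConfig()
        cfg.LocalID = raft.ServerID("node-1")
        cfg.SnapshotThreshold = 4096
        if err := raft.ValidateConfig(cfg); err != nil {
            return nil, err
        }
        return cfg, nil
    }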
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/configuration.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/configuration.go
deleted file mode 100644
index 74508c5e..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/configuration.go
+++ /dev/null
@@ -1,343 +0,0 @@
-package raft
-
-import "fmt"
-
-// ServerSuffrage determines whether a Server in a Configuration gets a vote.
-type ServerSuffrage int
-
-// Note: Don't renumber these, since the numbers are written into the log.
-const (
- // Voter is a server whose vote is counted in elections and whose match index
- // is used in advancing the leader's commit index.
- Voter ServerSuffrage = iota
- // Nonvoter is a server that receives log entries but is not considered for
- // elections or commitment purposes.
- Nonvoter
- // Staging is a server that acts like a nonvoter with one exception: once a
- // staging server receives enough log entries to be sufficiently caught up to
- // the leader's log, the leader will invoke a membership change to change
- // the Staging server to a Voter.
- Staging
-)
-
-func (s ServerSuffrage) String() string {
- switch s {
- case Voter:
- return "Voter"
- case Nonvoter:
- return "Nonvoter"
- case Staging:
- return "Staging"
- }
- return "ServerSuffrage"
-}
-
-// ServerID is a unique string identifying a server for all time.
-type ServerID string
-
-// ServerAddress is a network address for a server that a transport can contact.
-type ServerAddress string
-
-// Server tracks the information about a single server in a configuration.
-type Server struct {
- // Suffrage determines whether the server gets a vote.
- Suffrage ServerSuffrage
- // ID is a unique string identifying this server for all time.
- ID ServerID
- // Address is its network address that a transport can contact.
- Address ServerAddress
-}
-
-// Configuration tracks which servers are in the cluster, and whether they have
-// votes. This should include the local server, if it's a member of the cluster.
-// The servers are listed in no particular order, but each should only appear once.
-// These entries are appended to the log during membership changes.
-type Configuration struct {
- Servers []Server
-}
-
-// Clone makes a deep copy of a Configuration.
-func (c *Configuration) Clone() (copy Configuration) {
- copy.Servers = append(copy.Servers, c.Servers...)
- return
-}
-
-// ConfigurationChangeCommand is the different ways to change the cluster
-// configuration.
-type ConfigurationChangeCommand uint8
-
-const (
- // AddStaging makes a server Staging unless it's a Voter.
- AddStaging ConfigurationChangeCommand = iota
- // AddNonvoter makes a server Nonvoter unless it's Staging or a Voter.
- AddNonvoter
- // DemoteVoter makes a server Nonvoter unless it's absent.
- DemoteVoter
- // RemoveServer removes a server entirely from the cluster membership.
- RemoveServer
- // Promote is created automatically by a leader; it turns a Staging server
- // into a Voter.
- Promote
-)
-
-func (c ConfigurationChangeCommand) String() string {
- switch c {
- case AddStaging:
- return "AddStaging"
- case AddNonvoter:
- return "AddNonvoter"
- case DemoteVoter:
- return "DemoteVoter"
- case RemoveServer:
- return "RemoveServer"
- case Promote:
- return "Promote"
- }
- return "ConfigurationChangeCommand"
-}
-
-// configurationChangeRequest describes a change that a leader would like to
-// make to its current configuration. It's used only within a single server
-// (never serialized into the log), as part of `configurationChangeFuture`.
-type configurationChangeRequest struct {
- command ConfigurationChangeCommand
- serverID ServerID
- serverAddress ServerAddress // only present for AddStaging, AddNonvoter
- // prevIndex, if nonzero, is the index of the only configuration upon which
- // this change may be applied; if another configuration entry has been
- // added in the meantime, this request will fail.
- prevIndex uint64
-}
-
-// configurations is state tracked on every server about its Configurations.
-// Note that, per Diego's dissertation, there can be at most one uncommitted
-// configuration at a time (the next configuration may not be created until the
-// prior one has been committed).
-//
-// One downside to storing just two configurations is that if you try to take a
-// snapshot when your state machine hasn't yet applied the committedIndex, we
-// have no record of the configuration that would logically fit into that
-// snapshot. We disallow snapshots in that case now. An alternative approach,
-// which LogCabin uses, is to track every configuration change in the
-// log.
-type configurations struct {
- // committed is the latest configuration in the log/snapshot that has been
- // committed (the one with the largest index).
- committed Configuration
- // committedIndex is the log index where 'committed' was written.
- committedIndex uint64
- // latest is the latest configuration in the log/snapshot (may be committed
- // or uncommitted)
- latest Configuration
- // latestIndex is the log index where 'latest' was written.
- latestIndex uint64
-}
-
-// Clone makes a deep copy of a configurations object.
-func (c *configurations) Clone() (copy configurations) {
- copy.committed = c.committed.Clone()
- copy.committedIndex = c.committedIndex
- copy.latest = c.latest.Clone()
- copy.latestIndex = c.latestIndex
- return
-}
-
-// hasVote returns true if the server identified by 'id' is a Voter in the
-// provided Configuration.
-func hasVote(configuration Configuration, id ServerID) bool {
- for _, server := range configuration.Servers {
- if server.ID == id {
- return server.Suffrage == Voter
- }
- }
- return false
-}
-
-// checkConfiguration tests a cluster membership configuration for common
-// errors.
-func checkConfiguration(configuration Configuration) error {
- idSet := make(map[ServerID]bool)
- addressSet := make(map[ServerAddress]bool)
- var voters int
- for _, server := range configuration.Servers {
- if server.ID == "" {
- return fmt.Errorf("Empty ID in configuration: %v", configuration)
- }
- if server.Address == "" {
- return fmt.Errorf("Empty address in configuration: %v", server)
- }
- if idSet[server.ID] {
- return fmt.Errorf("Found duplicate ID in configuration: %v", server.ID)
- }
- idSet[server.ID] = true
- if addressSet[server.Address] {
- return fmt.Errorf("Found duplicate address in configuration: %v", server.Address)
- }
- addressSet[server.Address] = true
- if server.Suffrage == Voter {
- voters++
- }
- }
- if voters == 0 {
- return fmt.Errorf("Need at least one voter in configuration: %v", configuration)
- }
- return nil
-}
-
-// nextConfiguration generates a new Configuration from the current one and a
-// configuration change request. It's split from appendConfigurationEntry so
-// that it can be unit tested easily.
-func nextConfiguration(current Configuration, currentIndex uint64, change configurationChangeRequest) (Configuration, error) {
- if change.prevIndex > 0 && change.prevIndex != currentIndex {
- return Configuration{}, fmt.Errorf("Configuration changed since %v (latest is %v)", change.prevIndex, currentIndex)
- }
-
- configuration := current.Clone()
- switch change.command {
- case AddStaging:
- // TODO: barf on new address?
- newServer := Server{
- // TODO: This should add the server as Staging, to be automatically
- // promoted to Voter later. However, the promotion to Voter is not yet
- // implemented, and doing so is not trivial with the way the leader loop
- // coordinates with the replication goroutines today. So, for now, the
- // server will have a vote right away, and the Promote case below is
- // unused.
- Suffrage: Voter,
- ID: change.serverID,
- Address: change.serverAddress,
- }
- found := false
- for i, server := range configuration.Servers {
- if server.ID == change.serverID {
- if server.Suffrage == Voter {
- configuration.Servers[i].Address = change.serverAddress
- } else {
- configuration.Servers[i] = newServer
- }
- found = true
- break
- }
- }
- if !found {
- configuration.Servers = append(configuration.Servers, newServer)
- }
- case AddNonvoter:
- newServer := Server{
- Suffrage: Nonvoter,
- ID: change.serverID,
- Address: change.serverAddress,
- }
- found := false
- for i, server := range configuration.Servers {
- if server.ID == change.serverID {
- if server.Suffrage != Nonvoter {
- configuration.Servers[i].Address = change.serverAddress
- } else {
- configuration.Servers[i] = newServer
- }
- found = true
- break
- }
- }
- if !found {
- configuration.Servers = append(configuration.Servers, newServer)
- }
- case DemoteVoter:
- for i, server := range configuration.Servers {
- if server.ID == change.serverID {
- configuration.Servers[i].Suffrage = Nonvoter
- break
- }
- }
- case RemoveServer:
- for i, server := range configuration.Servers {
- if server.ID == change.serverID {
- configuration.Servers = append(configuration.Servers[:i], configuration.Servers[i+1:]...)
- break
- }
- }
- case Promote:
- for i, server := range configuration.Servers {
- if server.ID == change.serverID && server.Suffrage == Staging {
- configuration.Servers[i].Suffrage = Voter
- break
- }
- }
- }
-
- // Make sure we didn't do something bad like remove the last voter
- if err := checkConfiguration(configuration); err != nil {
- return Configuration{}, err
- }
-
- return configuration, nil
-}
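Since nextConfiguration is unexported, an in-package test is the natural way to exercise it; a sketch of the DemoteVoter path (test name hypothetical; import "testing" assumed):

    func TestNextConfigurationDemoteVoter(t *testing.T) {
        current := Configuration{Servers: []Server{
            {Suffrage: Voter, ID: "a", Address: "addr-a"},
            {Suffrage: Voter, ID: "b", Address: "addr-b"},
        }}
        next, err := nextConfiguration(current, 10, configurationChangeRequest{
            command:  DemoteVoter,
            serverID: "b",
        })
        if err != nil {
            t.Fatal(err)
        }
        // "a" still votes, so checkConfiguration passes; "b" keeps
        // receiving entries but no longer counts toward quorum.
        if next.Servers[1].Suffrage != Nonvoter {
            t.Fatalf("want b demoted to Nonvoter, got %v", next.Servers[1].Suffrage)
        }
    }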
-
-// encodePeers is used to serialize a Configuration into the old peers format.
-// This is here for backwards compatibility when operating with a mix of old
-// servers and should be removed once we deprecate support for protocol version 1.
-func encodePeers(configuration Configuration, trans Transport) []byte {
- // Gather up all the voters, other suffrage types are not supported by
- // this data format.
- var encPeers [][]byte
- for _, server := range configuration.Servers {
- if server.Suffrage == Voter {
- encPeers = append(encPeers, trans.EncodePeer(server.Address))
- }
- }
-
- // Encode the entire array.
- buf, err := encodeMsgPack(encPeers)
- if err != nil {
- panic(fmt.Errorf("failed to encode peers: %v", err))
- }
-
- return buf.Bytes()
-}
-
-// decodePeers is used to deserialize an old list of peers into a Configuration.
-// This is here for backwards compatibility with old log entries and snapshots;
-// it should be removed eventually.
-func decodePeers(buf []byte, trans Transport) Configuration {
- // Decode the buffer first.
- var encPeers [][]byte
- if err := decodeMsgPack(buf, &encPeers); err != nil {
- panic(fmt.Errorf("failed to decode peers: %v", err))
- }
-
- // Deserialize each peer.
- var servers []Server
- for _, enc := range encPeers {
- p := trans.DecodePeer(enc)
- servers = append(servers, Server{
- Suffrage: Voter,
- ID: ServerID(p),
- Address: ServerAddress(p),
- })
- }
-
- return Configuration{
- Servers: servers,
- }
-}
-
-// encodeConfiguration serializes a Configuration using MsgPack, or panics on
-// errors.
-func encodeConfiguration(configuration Configuration) []byte {
- buf, err := encodeMsgPack(configuration)
- if err != nil {
- panic(fmt.Errorf("failed to encode configuration: %v", err))
- }
- return buf.Bytes()
-}
-
-// decodeConfiguration deserializes a Configuration using MsgPack, or panics on
-// errors.
-func decodeConfiguration(buf []byte) Configuration {
- var configuration Configuration
- if err := decodeMsgPack(buf, &configuration); err != nil {
- panic(fmt.Errorf("failed to decode configuration: %v", err))
- }
- return configuration
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/discard_snapshot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/discard_snapshot.go
deleted file mode 100644
index 5e93a9fe..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/discard_snapshot.go
+++ /dev/null
@@ -1,49 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
-)
-
-// DiscardSnapshotStore is used to successfully snapshot while
-// always discarding the snapshot. This is useful for when the
-// log should be truncated but no snapshot should be retained.
-// It should never be used in production, and is only
-// suitable for testing.
-type DiscardSnapshotStore struct{}
-
-type DiscardSnapshotSink struct{}
-
-// NewDiscardSnapshotStore is used to create a new DiscardSnapshotStore.
-func NewDiscardSnapshotStore() *DiscardSnapshotStore {
- return &DiscardSnapshotStore{}
-}
-
-func (d *DiscardSnapshotStore) Create(version SnapshotVersion, index, term uint64,
- configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) {
- return &DiscardSnapshotSink{}, nil
-}
-
-func (d *DiscardSnapshotStore) List() ([]*SnapshotMeta, error) {
- return nil, nil
-}
-
-func (d *DiscardSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) {
- return nil, nil, fmt.Errorf("open is not supported")
-}
-
-func (d *DiscardSnapshotSink) Write(b []byte) (int, error) {
- return len(b), nil
-}
-
-func (d *DiscardSnapshotSink) Close() error {
- return nil
-}
-
-func (d *DiscardSnapshotSink) ID() string {
- return "discard"
-}
-
-func (d *DiscardSnapshotSink) Cancel() error {
- return nil
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/file_snapshot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/file_snapshot.go
deleted file mode 100644
index 17d08013..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/file_snapshot.go
+++ /dev/null
@@ -1,494 +0,0 @@
-package raft
-
-import (
- "bufio"
- "bytes"
- "encoding/json"
- "fmt"
- "hash"
- "hash/crc64"
- "io"
- "io/ioutil"
- "log"
- "os"
- "path/filepath"
- "sort"
- "strings"
- "time"
-)
-
-const (
- testPath = "permTest"
- snapPath = "snapshots"
- metaFilePath = "meta.json"
- stateFilePath = "state.bin"
- tmpSuffix = ".tmp"
-)
-
-// FileSnapshotStore implements the SnapshotStore interface and allows
-// snapshots to be made on the local disk.
-type FileSnapshotStore struct {
- path string
- retain int
- logger *log.Logger
-}
-
-type snapMetaSlice []*fileSnapshotMeta
-
-// FileSnapshotSink implements SnapshotSink with a file.
-type FileSnapshotSink struct {
- store *FileSnapshotStore
- logger *log.Logger
- dir string
- meta fileSnapshotMeta
-
- stateFile *os.File
- stateHash hash.Hash64
- buffered *bufio.Writer
-
- closed bool
-}
-
-// fileSnapshotMeta is stored on disk. We also put a CRC
-// on disk so that we can verify the snapshot.
-type fileSnapshotMeta struct {
- SnapshotMeta
- CRC []byte
-}
-
-// bufferedFile is returned when we open a snapshot. This way
-// reads are buffered and the file still gets closed.
-type bufferedFile struct {
- bh *bufio.Reader
- fh *os.File
-}
-
-func (b *bufferedFile) Read(p []byte) (n int, err error) {
- return b.bh.Read(p)
-}
-
-func (b *bufferedFile) Close() error {
- return b.fh.Close()
-}
-
-// NewFileSnapshotStoreWithLogger creates a new FileSnapshotStore based
-// on a base directory. The `retain` parameter controls how many
-// snapshots are retained. Must be at least 1.
-func NewFileSnapshotStoreWithLogger(base string, retain int, logger *log.Logger) (*FileSnapshotStore, error) {
- if retain < 1 {
- return nil, fmt.Errorf("must retain at least one snapshot")
- }
- if logger == nil {
- logger = log.New(os.Stderr, "", log.LstdFlags)
- }
-
- // Ensure our path exists
- path := filepath.Join(base, snapPath)
- if err := os.MkdirAll(path, 0755); err != nil && !os.IsExist(err) {
- return nil, fmt.Errorf("snapshot path not accessible: %v", err)
- }
-
- // Setup the store
- store := &FileSnapshotStore{
- path: path,
- retain: retain,
- logger: logger,
- }
-
- // Do a permissions test
- if err := store.testPermissions(); err != nil {
- return nil, fmt.Errorf("permissions test failed: %v", err)
- }
- return store, nil
-}
-
-// NewFileSnapshotStore creates a new FileSnapshotStore based
-// on a base directory. The `retain` parameter controls how many
-// snapshots are retained. Must be at least 1.
-func NewFileSnapshotStore(base string, retain int, logOutput io.Writer) (*FileSnapshotStore, error) {
- if logOutput == nil {
- logOutput = os.Stderr
- }
- return NewFileSnapshotStoreWithLogger(base, retain, log.New(logOutput, "", log.LstdFlags))
-}
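Typical construction, as a sketch (the directory is made up; passing nil for logOutput falls back to os.Stderr as shown above; imports: github.com/hashicorp/raft):

    // newSnapStore keeps the three most recent snapshots under the base dir.
    func newSnapStore() (*raft.FileSnapshotStore, error) {
        return raft.NewFileSnapshotStore("/var/lib/myapp/raft", 3, nil)
    }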
-
-// testPermissions tries to touch a file in our path to see if it works.
-func (f *FileSnapshotStore) testPermissions() error {
- path := filepath.Join(f.path, testPath)
- fh, err := os.Create(path)
- if err != nil {
- return err
- }
-
- if err = fh.Close(); err != nil {
- return err
- }
-
- if err = os.Remove(path); err != nil {
- return err
- }
- return nil
-}
-
-// snapshotName generates a name for the snapshot.
-func snapshotName(term, index uint64) string {
- now := time.Now()
- msec := now.UnixNano() / int64(time.Millisecond)
- return fmt.Sprintf("%d-%d-%d", term, index, msec)
-}
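For example, snapshotName(3, 1500) invoked at Unix millisecond 1500000000123 yields the directory name "3-1500-1500000000123": term, then index, then the millisecond timestamp.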
-
-// Create is used to start a new snapshot
-func (f *FileSnapshotStore) Create(version SnapshotVersion, index, term uint64,
- configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) {
- // We only support version 1 snapshots at this time.
- if version != 1 {
- return nil, fmt.Errorf("unsupported snapshot version %d", version)
- }
-
- // Create a new path
- name := snapshotName(term, index)
- path := filepath.Join(f.path, name+tmpSuffix)
- f.logger.Printf("[INFO] snapshot: Creating new snapshot at %s", path)
-
- // Make the directory
- if err := os.MkdirAll(path, 0755); err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to make snapshot directory: %v", err)
- return nil, err
- }
-
- // Create the sink
- sink := &FileSnapshotSink{
- store: f,
- logger: f.logger,
- dir: path,
- meta: fileSnapshotMeta{
- SnapshotMeta: SnapshotMeta{
- Version: version,
- ID: name,
- Index: index,
- Term: term,
- Peers: encodePeers(configuration, trans),
- Configuration: configuration,
- ConfigurationIndex: configurationIndex,
- },
- CRC: nil,
- },
- }
-
- // Write out the meta data
- if err := sink.writeMeta(); err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to write metadata: %v", err)
- return nil, err
- }
-
- // Open the state file
- statePath := filepath.Join(path, stateFilePath)
- fh, err := os.Create(statePath)
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to create state file: %v", err)
- return nil, err
- }
- sink.stateFile = fh
-
- // Create a CRC64 hash
- sink.stateHash = crc64.New(crc64.MakeTable(crc64.ECMA))
-
- // Wrap both the hash and file in a MultiWriter with buffering
- multi := io.MultiWriter(sink.stateFile, sink.stateHash)
- sink.buffered = bufio.NewWriter(multi)
-
- // Done
- return sink, nil
-}
-
-// List returns available snapshots in the store.
-func (f *FileSnapshotStore) List() ([]*SnapshotMeta, error) {
- // Get the eligible snapshots
- snapshots, err := f.getSnapshots()
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to get snapshots: %v", err)
- return nil, err
- }
-
- var snapMeta []*SnapshotMeta
- for _, meta := range snapshots {
- snapMeta = append(snapMeta, &meta.SnapshotMeta)
- if len(snapMeta) == f.retain {
- break
- }
- }
- return snapMeta, nil
-}
-
-// getSnapshots returns all the known snapshots.
-func (f *FileSnapshotStore) getSnapshots() ([]*fileSnapshotMeta, error) {
- // Get the eligible snapshots
- snapshots, err := ioutil.ReadDir(f.path)
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to scan snapshot dir: %v", err)
- return nil, err
- }
-
- // Populate the metadata
- var snapMeta []*fileSnapshotMeta
- for _, snap := range snapshots {
- // Ignore any files
- if !snap.IsDir() {
- continue
- }
-
- // Ignore any temporary snapshots
- dirName := snap.Name()
- if strings.HasSuffix(dirName, tmpSuffix) {
- f.logger.Printf("[WARN] snapshot: Found temporary snapshot: %v", dirName)
- continue
- }
-
- // Try to read the meta data
- meta, err := f.readMeta(dirName)
- if err != nil {
- f.logger.Printf("[WARN] snapshot: Failed to read metadata for %v: %v", dirName, err)
- continue
- }
-
- // Make sure we can understand this version.
- if meta.Version < SnapshotVersionMin || meta.Version > SnapshotVersionMax {
- f.logger.Printf("[WARN] snapshot: Snapshot version for %v not supported: %d", dirName, meta.Version)
- continue
- }
-
- // Append the metadata; List enforces the retain count.
- snapMeta = append(snapMeta, meta)
- }
-
- // Sort the snapshots in reverse order so we get new -> old
- sort.Sort(sort.Reverse(snapMetaSlice(snapMeta)))
-
- return snapMeta, nil
-}
-
-// readMeta is used to read the meta data for a given named backup
-func (f *FileSnapshotStore) readMeta(name string) (*fileSnapshotMeta, error) {
- // Open the meta file
- metaPath := filepath.Join(f.path, name, metaFilePath)
- fh, err := os.Open(metaPath)
- if err != nil {
- return nil, err
- }
- defer fh.Close()
-
- // Buffer the file IO
- buffered := bufio.NewReader(fh)
-
- // Read in the JSON
- meta := &fileSnapshotMeta{}
- dec := json.NewDecoder(buffered)
- if err := dec.Decode(meta); err != nil {
- return nil, err
- }
- return meta, nil
-}
-
-// Open takes a snapshot ID and returns a ReadCloser for that snapshot.
-func (f *FileSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) {
- // Get the metadata
- meta, err := f.readMeta(id)
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to get meta data to open snapshot: %v", err)
- return nil, nil, err
- }
-
- // Open the state file
- statePath := filepath.Join(f.path, id, stateFilePath)
- fh, err := os.Open(statePath)
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to open state file: %v", err)
- return nil, nil, err
- }
-
- // Create a CRC64 hash
- stateHash := crc64.New(crc64.MakeTable(crc64.ECMA))
-
- // Compute the hash
- _, err = io.Copy(stateHash, fh)
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to read state file: %v", err)
- fh.Close()
- return nil, nil, err
- }
-
- // Verify the hash
- computed := stateHash.Sum(nil)
- if !bytes.Equal(meta.CRC, computed) {
- f.logger.Printf("[ERR] snapshot: CRC checksum failed (stored: %v computed: %v)",
- meta.CRC, computed)
- fh.Close()
- return nil, nil, fmt.Errorf("CRC mismatch")
- }
-
- // Seek to the start
- if _, err := fh.Seek(0, 0); err != nil {
- f.logger.Printf("[ERR] snapshot: State file seek failed: %v", err)
- fh.Close()
- return nil, nil, err
- }
-
- // Return a buffered file
- buffered := &bufferedFile{
- bh: bufio.NewReader(fh),
- fh: fh,
- }
-
- return &meta.SnapshotMeta, buffered, nil
-}
-
-// ReapSnapshots reaps any snapshots beyond the retain count.
-func (f *FileSnapshotStore) ReapSnapshots() error {
- snapshots, err := f.getSnapshots()
- if err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to get snapshots: %v", err)
- return err
- }
-
- for i := f.retain; i < len(snapshots); i++ {
- path := filepath.Join(f.path, snapshots[i].ID)
- f.logger.Printf("[INFO] snapshot: reaping snapshot %v", path)
- if err := os.RemoveAll(path); err != nil {
- f.logger.Printf("[ERR] snapshot: Failed to reap snapshot %v: %v", path, err)
- return err
- }
- }
- return nil
-}
-
-// ID returns the ID of the snapshot; it can be used with Open()
-// after the snapshot is finalized.
-func (s *FileSnapshotSink) ID() string {
- return s.meta.ID
-}
-
-// Write is used to append to the state file. We write to the
-// buffered IO object to reduce the number of context switches.
-func (s *FileSnapshotSink) Write(b []byte) (int, error) {
- return s.buffered.Write(b)
-}
-
-// Close is used to indicate a successful end.
-func (s *FileSnapshotSink) Close() error {
- // Make sure close is idempotent
- if s.closed {
- return nil
- }
- s.closed = true
-
- // Close the open handles
- if err := s.finalize(); err != nil {
- s.logger.Printf("[ERR] snapshot: Failed to finalize snapshot: %v", err)
- return err
- }
-
- // Write out the meta data
- if err := s.writeMeta(); err != nil {
- s.logger.Printf("[ERR] snapshot: Failed to write metadata: %v", err)
- return err
- }
-
- // Move the directory into place
- newPath := strings.TrimSuffix(s.dir, tmpSuffix)
- if err := os.Rename(s.dir, newPath); err != nil {
- s.logger.Printf("[ERR] snapshot: Failed to move snapshot into place: %v", err)
- return err
- }
-
- // Reap any old snapshots
- if err := s.store.ReapSnapshots(); err != nil {
- return err
- }
-
- return nil
-}
-
-// Cancel is used to indicate an unsuccessful end.
-func (s *FileSnapshotSink) Cancel() error {
- // Make sure close is idempotent
- if s.closed {
- return nil
- }
- s.closed = true
-
- // Close the open handles
- if err := s.finalize(); err != nil {
- s.logger.Printf("[ERR] snapshot: Failed to finalize snapshot: %v", err)
- return err
- }
-
- // Attempt to remove all artifacts
- return os.RemoveAll(s.dir)
-}
-
-// finalize is used to close all of our resources.
-func (s *FileSnapshotSink) finalize() error {
- // Flush any remaining data
- if err := s.buffered.Flush(); err != nil {
- return err
- }
-
- // Get the file size
- stat, statErr := s.stateFile.Stat()
-
- // Close the file
- if err := s.stateFile.Close(); err != nil {
- return err
- }
-
- // Set the file size, check after we close
- if statErr != nil {
- return statErr
- }
- s.meta.Size = stat.Size()
-
- // Set the CRC
- s.meta.CRC = s.stateHash.Sum(nil)
- return nil
-}
-
-// writeMeta is used to write out the metadata we have.
-func (s *FileSnapshotSink) writeMeta() error {
- // Open the meta file
- metaPath := filepath.Join(s.dir, metaFilePath)
- fh, err := os.Create(metaPath)
- if err != nil {
- return err
- }
- defer fh.Close()
-
- // Buffer the file IO
- buffered := bufio.NewWriter(fh)
- defer buffered.Flush()
-
- // Write out as JSON
- enc := json.NewEncoder(buffered)
- if err := enc.Encode(&s.meta); err != nil {
- return err
- }
- return nil
-}
-
-// Implement the sort interface for []*fileSnapshotMeta.
-func (s snapMetaSlice) Len() int {
- return len(s)
-}
-
-func (s snapMetaSlice) Less(i, j int) bool {
- if s[i].Term != s[j].Term {
- return s[i].Term < s[j].Term
- }
- if s[i].Index != s[j].Index {
- return s[i].Index < s[j].Index
- }
- return s[i].ID < s[j].ID
-}
-
-func (s snapMetaSlice) Swap(i, j int) {
- s[i], s[j] = s[j], s[i]
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/fsm.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/fsm.go
deleted file mode 100644
index c89986c0..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/fsm.go
+++ /dev/null
@@ -1,136 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
- "time"
-
- "github.com/armon/go-metrics"
-)
-
-// FSM provides an interface that can be implemented by
-// clients to make use of the replicated log.
-type FSM interface {
- // Apply is invoked once a log entry is committed.
- // It returns a value which will be made available in the
- // ApplyFuture returned by Raft.Apply method if that
- // method was called on the same Raft node as the FSM.
- Apply(*Log) interface{}
-
- // Snapshot is used to support log compaction. This call should
- // return an FSMSnapshot which can be used to save a point-in-time
- // snapshot of the FSM. Apply and Snapshot are not called in multiple
- // threads, but Apply will be called concurrently with Persist. This means
- // the FSM should be implemented in a fashion that allows for concurrent
- // updates while a snapshot is happening.
- Snapshot() (FSMSnapshot, error)
-
- // Restore is used to restore an FSM from a snapshot. It is not called
- // concurrently with any other command. The FSM must discard all previous
- // state.
- Restore(io.ReadCloser) error
-}
-
-// FSMSnapshot is returned by an FSM in response to a Snapshot call.
-// It must be safe to invoke FSMSnapshot methods with concurrent
-// calls to Apply.
-type FSMSnapshot interface {
- // Persist should dump all necessary state to the WriteCloser 'sink',
- // and call sink.Close() when finished or call sink.Cancel() on error.
- Persist(sink SnapshotSink) error
-
- // Release is invoked when we are finished with the snapshot.
- Release()
-}
-
-// runFSM is a long-running goroutine responsible for applying logs
-// to the FSM. This is done asynchronously from other operations since
-// we don't want the FSM to block our internal operations.
-func (r *Raft) runFSM() {
- var lastIndex, lastTerm uint64
-
- commit := func(req *commitTuple) {
- // Apply the log if a command
- var resp interface{}
- if req.log.Type == LogCommand {
- start := time.Now()
- resp = r.fsm.Apply(req.log)
- metrics.MeasureSince([]string{"raft", "fsm", "apply"}, start)
- }
-
- // Update the indexes
- lastIndex = req.log.Index
- lastTerm = req.log.Term
-
- // Invoke the future if given
- if req.future != nil {
- req.future.response = resp
- req.future.respond(nil)
- }
- }
-
- restore := func(req *restoreFuture) {
- // Open the snapshot
- meta, source, err := r.snapshots.Open(req.ID)
- if err != nil {
- req.respond(fmt.Errorf("failed to open snapshot %v: %v", req.ID, err))
- return
- }
-
- // Attempt to restore
- start := time.Now()
- if err := r.fsm.Restore(source); err != nil {
- req.respond(fmt.Errorf("failed to restore snapshot %v: %v", req.ID, err))
- source.Close()
- return
- }
- source.Close()
- metrics.MeasureSince([]string{"raft", "fsm", "restore"}, start)
-
- // Update the last index and term
- lastIndex = meta.Index
- lastTerm = meta.Term
- req.respond(nil)
- }
-
- snapshot := func(req *reqSnapshotFuture) {
- // Is there something to snapshot?
- if lastIndex == 0 {
- req.respond(ErrNothingNewToSnapshot)
- return
- }
-
- // Start a snapshot
- start := time.Now()
- snap, err := r.fsm.Snapshot()
- metrics.MeasureSince([]string{"raft", "fsm", "snapshot"}, start)
-
- // Respond to the request
- req.index = lastIndex
- req.term = lastTerm
- req.snapshot = snap
- req.respond(err)
- }
-
- for {
- select {
- case ptr := <-r.fsmMutateCh:
- switch req := ptr.(type) {
- case *commitTuple:
- commit(req)
-
- case *restoreFuture:
- restore(req)
-
- default:
- panic(fmt.Errorf("bad type passed to fsmMutateCh: %#v", ptr))
- }
-
- case req := <-r.fsmSnapshotCh:
- snapshot(req)
-
- case <-r.shutdownCh:
- return
- }
- }
-}
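
Editor's note: the FSM interface above is the library's main extension point. Below is a minimal sketch of an implementation that satisfies the documented contract: state is copied under a lock in `Snapshot` so `Apply` can run concurrently with `Persist`, and `Restore` discards all prior state. The word-count FSM and all names are illustrative, assuming the vendored import path.

```go
package example

import (
	"encoding/json"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// wordCountFSM counts occurrences of each applied command. Illustrative only.
type wordCountFSM struct {
	mu     sync.Mutex
	counts map[string]int
}

// Apply is called once a log entry is committed; the return value is
// surfaced through ApplyFuture.Response on the node that called Apply.
func (f *wordCountFSM) Apply(l *raft.Log) interface{} {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.counts[string(l.Data)]++
	return f.counts[string(l.Data)]
}

// Snapshot copies state under the lock so Apply can keep running while
// Persist writes the copy out, per the documented contract.
func (f *wordCountFSM) Snapshot() (raft.FSMSnapshot, error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	cp := make(map[string]int, len(f.counts))
	for k, v := range f.counts {
		cp[k] = v
	}
	return &wordCountSnapshot{counts: cp}, nil
}

// Restore discards all previous state, as the interface requires.
func (f *wordCountFSM) Restore(rc io.ReadCloser) error {
	defer rc.Close()
	counts := make(map[string]int)
	if err := json.NewDecoder(rc).Decode(&counts); err != nil {
		return err
	}
	f.mu.Lock()
	f.counts = counts
	f.mu.Unlock()
	return nil
}

type wordCountSnapshot struct{ counts map[string]int }

// Persist dumps state to the sink, closing on success and canceling on error.
func (s *wordCountSnapshot) Persist(sink raft.SnapshotSink) error {
	if err := json.NewEncoder(sink).Encode(s.counts); err != nil {
		sink.Cancel()
		return err
	}
	return sink.Close()
}

func (s *wordCountSnapshot) Release() {}
```
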
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/future.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/future.go
deleted file mode 100644
index fac59a5c..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/future.go
+++ /dev/null
@@ -1,289 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
- "sync"
- "time"
-)
-
-// Future is used to represent an action that may occur in the future.
-type Future interface {
- // Error blocks until the future arrives and then
- // returns the error status of the future.
- // This may be called any number of times - all
- // calls will return the same value.
- // Note that it is not OK to call this method
- // twice concurrently on the same Future instance.
- Error() error
-}
-
-// IndexFuture is used for future actions that can result in a raft log entry
-// being created.
-type IndexFuture interface {
- Future
-
- // Index holds the index of the newly applied log entry.
- // This must not be called until after the Error method has returned.
- Index() uint64
-}
-
-// ApplyFuture is used for Apply and can return the FSM response.
-type ApplyFuture interface {
- IndexFuture
-
- // Response returns the FSM response as returned
- // by the FSM.Apply method. This must not be called
- // until after the Error method has returned.
- Response() interface{}
-}
-
-// ConfigurationFuture is used for GetConfiguration and can return the
-// latest configuration in use by Raft.
-type ConfigurationFuture interface {
- IndexFuture
-
- // Configuration contains the latest configuration. This must
- // not be called until after the Error method has returned.
- Configuration() Configuration
-}
-
-// SnapshotFuture is used for waiting on a user-triggered snapshot to complete.
-type SnapshotFuture interface {
- Future
-
- // Open is a function you can call to access the underlying snapshot and
- // its metadata. This must not be called until after the Error method
- // has returned.
- Open() (*SnapshotMeta, io.ReadCloser, error)
-}
-
-// errorFuture is used to return a static error.
-type errorFuture struct {
- err error
-}
-
-func (e errorFuture) Error() error {
- return e.err
-}
-
-func (e errorFuture) Response() interface{} {
- return nil
-}
-
-func (e errorFuture) Index() uint64 {
- return 0
-}
-
-// deferError can be embedded to allow a future
-// to provide an error in the future.
-type deferError struct {
- err error
- errCh chan error
- responded bool
-}
-
-func (d *deferError) init() {
- d.errCh = make(chan error, 1)
-}
-
-func (d *deferError) Error() error {
- if d.err != nil {
- // Note that when we've received a nil error, this
- // won't trigger, but the channel is closed after
- // send so we'll still return nil below.
- return d.err
- }
- if d.errCh == nil {
- panic("waiting for response on nil channel")
- }
- d.err = <-d.errCh
- return d.err
-}
-
-func (d *deferError) respond(err error) {
- if d.errCh == nil {
- return
- }
- if d.responded {
- return
- }
- d.errCh <- err
- close(d.errCh)
- d.responded = true
-}
-
-// There are several types of requests that cause a configuration entry to
-// be appended to the log. These are encoded here for leaderLoop() to process.
-// This is internal to a single server.
-type configurationChangeFuture struct {
- logFuture
- req configurationChangeRequest
-}
-
-// bootstrapFuture is used to attempt a live bootstrap of the cluster. See the
-// Raft object's BootstrapCluster member function for more details.
-type bootstrapFuture struct {
- deferError
-
- // configuration is the proposed bootstrap configuration to apply.
- configuration Configuration
-}
-
-// logFuture is used to apply a log entry and waits until
-// the log is considered committed.
-type logFuture struct {
- deferError
- log Log
- response interface{}
- dispatch time.Time
-}
-
-func (l *logFuture) Response() interface{} {
- return l.response
-}
-
-func (l *logFuture) Index() uint64 {
- return l.log.Index
-}
-
-type shutdownFuture struct {
- raft *Raft
-}
-
-func (s *shutdownFuture) Error() error {
- if s.raft == nil {
- return nil
- }
- s.raft.waitShutdown()
- if closeable, ok := s.raft.trans.(WithClose); ok {
- closeable.Close()
- }
- return nil
-}
-
-// userSnapshotFuture is used for waiting on a user-triggered snapshot to
-// complete.
-type userSnapshotFuture struct {
- deferError
-
- // opener is a function used to open the snapshot. This is filled in
- // once the future returns with no error.
- opener func() (*SnapshotMeta, io.ReadCloser, error)
-}
-
-// Open is a function you can call to access the underlying snapshot and its
-// metadata.
-func (u *userSnapshotFuture) Open() (*SnapshotMeta, io.ReadCloser, error) {
- if u.opener == nil {
- return nil, nil, fmt.Errorf("no snapshot available")
- } else {
- // Invalidate the opener so it can't get called multiple times,
- // which isn't generally safe.
- defer func() {
- u.opener = nil
- }()
- return u.opener()
- }
-}
-
-// userRestoreFuture is used for waiting on a user-triggered restore of an
-// external snapshot to complete.
-type userRestoreFuture struct {
- deferError
-
- // meta is the metadata that belongs with the snapshot.
- meta *SnapshotMeta
-
- // reader is the interface to read the snapshot contents from.
- reader io.Reader
-}
-
-// reqSnapshotFuture is used for requesting a snapshot start.
-// It is only used internally.
-type reqSnapshotFuture struct {
- deferError
-
- // snapshot details provided by the FSM runner before responding
- index uint64
- term uint64
- snapshot FSMSnapshot
-}
-
-// restoreFuture is used for requesting an FSM to perform a
-// snapshot restore. Used internally only.
-type restoreFuture struct {
- deferError
- ID string
-}
-
-// verifyFuture is used to verify the current node is still
-// the leader. This is to prevent a stale read.
-type verifyFuture struct {
- deferError
- notifyCh chan *verifyFuture
- quorumSize int
- votes int
- voteLock sync.Mutex
-}
-
-// configurationsFuture is used to retrieve the current configurations. This is
-// used to allow safe access to this information outside of the main thread.
-type configurationsFuture struct {
- deferError
- configurations configurations
-}
-
-// Configuration returns the latest configuration in use by Raft.
-func (c *configurationsFuture) Configuration() Configuration {
- return c.configurations.latest
-}
-
-// Index returns the index of the latest configuration in use by Raft.
-func (c *configurationsFuture) Index() uint64 {
- return c.configurations.latestIndex
-}
-
-// vote is used to respond to a verifyFuture.
-// This may block when responding on the notifyCh.
-func (v *verifyFuture) vote(leader bool) {
- v.voteLock.Lock()
- defer v.voteLock.Unlock()
-
- // Guard against having notified already
- if v.notifyCh == nil {
- return
- }
-
- if leader {
- v.votes++
- if v.votes >= v.quorumSize {
- v.notifyCh <- v
- v.notifyCh = nil
- }
- } else {
- v.notifyCh <- v
- v.notifyCh = nil
- }
-}
-
-// appendFuture is used for waiting on a pipelined append
-// entries RPC.
-type appendFuture struct {
- deferError
- start time.Time
- args *AppendEntriesRequest
- resp *AppendEntriesResponse
-}
-
-func (a *appendFuture) Start() time.Time {
- return a.start
-}
-
-func (a *appendFuture) Request() *AppendEntriesRequest {
- return a.args
-}
-
-func (a *appendFuture) Response() *AppendEntriesResponse {
- return a.resp
-}
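
Editor's note: a short sketch of the Future contract from the caller's side, assuming `r` is an already-configured, leading `*raft.Raft` node (the helper name is illustrative). It shows the ordering rule stated in the interface docs: `Error` first, then `Index`/`Response`.

```go
package example

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/raft"
)

// applyAndWait shows the Future contract: Error blocks until the entry
// commits (or fails), and Index/Response may only be read afterwards.
// Assumes r is an already-configured, leading *raft.Raft node.
func applyAndWait(r *raft.Raft, cmd []byte) {
	f := r.Apply(cmd, 500*time.Millisecond) // returns an ApplyFuture

	// Error blocks until the log entry is committed and applied (or the
	// enqueue times out); it may be called repeatedly, but not concurrently.
	if err := f.Error(); err != nil {
		log.Printf("apply failed: %v", err)
		return
	}

	// Only now are Index and Response valid, per the interface docs.
	fmt.Printf("committed at index %d, FSM returned %v\n", f.Index(), f.Response())
}
```
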
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_snapshot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_snapshot.go
deleted file mode 100644
index 3aa92b3e..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_snapshot.go
+++ /dev/null
@@ -1,106 +0,0 @@
-package raft
-
-import (
- "bytes"
- "fmt"
- "io"
- "io/ioutil"
- "sync"
-)
-
-// InmemSnapshotStore implements the SnapshotStore interface and
-// retains only the most recent snapshot
-type InmemSnapshotStore struct {
- latest *InmemSnapshotSink
- hasSnapshot bool
- sync.RWMutex
-}
-
-// InmemSnapshotSink implements SnapshotSink in memory
-type InmemSnapshotSink struct {
- meta SnapshotMeta
- contents *bytes.Buffer
-}
-
-// NewInmemSnapshotStore creates a blank new InmemSnapshotStore
-func NewInmemSnapshotStore() *InmemSnapshotStore {
- return &InmemSnapshotStore{
- latest: &InmemSnapshotSink{
- contents: &bytes.Buffer{},
- },
- }
-}
-
-// Create replaces the stored snapshot with a new one using the given args
-func (m *InmemSnapshotStore) Create(version SnapshotVersion, index, term uint64,
- configuration Configuration, configurationIndex uint64, trans Transport) (SnapshotSink, error) {
- // We only support version 1 snapshots at this time.
- if version != 1 {
- return nil, fmt.Errorf("unsupported snapshot version %d", version)
- }
-
- name := snapshotName(term, index)
-
- m.Lock()
- defer m.Unlock()
-
- sink := &InmemSnapshotSink{
- meta: SnapshotMeta{
- Version: version,
- ID: name,
- Index: index,
- Term: term,
- Peers: encodePeers(configuration, trans),
- Configuration: configuration,
- ConfigurationIndex: configurationIndex,
- },
- contents: &bytes.Buffer{},
- }
- m.hasSnapshot = true
- m.latest = sink
-
- return sink, nil
-}
-
-// List returns the latest snapshot taken
-func (m *InmemSnapshotStore) List() ([]*SnapshotMeta, error) {
- m.RLock()
- defer m.RUnlock()
-
- if !m.hasSnapshot {
- return []*SnapshotMeta{}, nil
- }
- return []*SnapshotMeta{&m.latest.meta}, nil
-}
-
-// Open wraps an io.ReadCloser around the snapshot contents
-func (m *InmemSnapshotStore) Open(id string) (*SnapshotMeta, io.ReadCloser, error) {
- m.RLock()
- defer m.RUnlock()
-
- if m.latest.meta.ID != id {
- return nil, nil, fmt.Errorf("[ERR] snapshot: failed to open snapshot id: %s", id)
- }
-
- return &m.latest.meta, ioutil.NopCloser(m.latest.contents), nil
-}
-
-// Write appends the given bytes to the snapshot contents
-func (s *InmemSnapshotSink) Write(p []byte) (n int, err error) {
- written, err := io.Copy(s.contents, bytes.NewReader(p))
- s.meta.Size += written
- return int(written), err
-}
-
-// Close is a no-op; Size is kept up to date by Write
-func (s *InmemSnapshotSink) Close() error {
- return nil
-}
-
-func (s *InmemSnapshotSink) ID() string {
- return s.meta.ID
-}
-
-func (s *InmemSnapshotSink) Cancel() error {
- return nil
-}
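
Editor's note: a minimal round trip through the in-memory snapshot store, write then read back. The empty `Configuration` and throwaway transport exist only to satisfy `Create`'s signature; the function name is illustrative.

```go
package example

import (
	"fmt"
	"io/ioutil"

	"github.com/hashicorp/raft"
)

// snapshotRoundTrip writes a snapshot into the in-memory store and reads
// it back. Only version 1 snapshots are supported by this store.
func snapshotRoundTrip() error {
	store := raft.NewInmemSnapshotStore()
	_, trans := raft.NewInmemTransport("") // empty address: one is generated

	sink, err := store.Create(1, 10, 3, raft.Configuration{}, 2, trans)
	if err != nil {
		return err
	}
	if _, err := sink.Write([]byte("state bytes")); err != nil {
		sink.Cancel()
		return err
	}
	if err := sink.Close(); err != nil {
		return err
	}

	metas, _ := store.List() // at most one entry: only the latest is retained
	meta, rc, err := store.Open(metas[0].ID)
	if err != nil {
		return err
	}
	defer rc.Close()
	data, _ := ioutil.ReadAll(rc)
	fmt.Printf("snapshot %s (%d bytes): %s\n", meta.ID, meta.Size, data)
	return nil
}
```
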
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_store.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_store.go
deleted file mode 100644
index e5d579e1..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_store.go
+++ /dev/null
@@ -1,125 +0,0 @@
-package raft
-
-import (
- "sync"
-)
-
-// InmemStore implements the LogStore and StableStore interfaces.
-// It should NOT EVER be used for production. It is used only for
-// unit tests. Use the MDBStore implementation instead.
-type InmemStore struct {
- l sync.RWMutex
- lowIndex uint64
- highIndex uint64
- logs map[uint64]*Log
- kv map[string][]byte
- kvInt map[string]uint64
-}
-
-// NewInmemStore returns a new in-memory backend. Do not ever
-// use for production. Only for testing.
-func NewInmemStore() *InmemStore {
- i := &InmemStore{
- logs: make(map[uint64]*Log),
- kv: make(map[string][]byte),
- kvInt: make(map[string]uint64),
- }
- return i
-}
-
-// FirstIndex implements the LogStore interface.
-func (i *InmemStore) FirstIndex() (uint64, error) {
- i.l.RLock()
- defer i.l.RUnlock()
- return i.lowIndex, nil
-}
-
-// LastIndex implements the LogStore interface.
-func (i *InmemStore) LastIndex() (uint64, error) {
- i.l.RLock()
- defer i.l.RUnlock()
- return i.highIndex, nil
-}
-
-// GetLog implements the LogStore interface.
-func (i *InmemStore) GetLog(index uint64, log *Log) error {
- i.l.RLock()
- defer i.l.RUnlock()
- l, ok := i.logs[index]
- if !ok {
- return ErrLogNotFound
- }
- *log = *l
- return nil
-}
-
-// StoreLog implements the LogStore interface.
-func (i *InmemStore) StoreLog(log *Log) error {
- return i.StoreLogs([]*Log{log})
-}
-
-// StoreLogs implements the LogStore interface.
-func (i *InmemStore) StoreLogs(logs []*Log) error {
- i.l.Lock()
- defer i.l.Unlock()
- for _, l := range logs {
- i.logs[l.Index] = l
- if i.lowIndex == 0 {
- i.lowIndex = l.Index
- }
- if l.Index > i.highIndex {
- i.highIndex = l.Index
- }
- }
- return nil
-}
-
-// DeleteRange implements the LogStore interface.
-func (i *InmemStore) DeleteRange(min, max uint64) error {
- i.l.Lock()
- defer i.l.Unlock()
- for j := min; j <= max; j++ {
- delete(i.logs, j)
- }
- if min <= i.lowIndex {
- i.lowIndex = max + 1
- }
- if max >= i.highIndex {
- i.highIndex = min - 1
- }
- if i.lowIndex > i.highIndex {
- i.lowIndex = 0
- i.highIndex = 0
- }
- return nil
-}
-
-// Set implements the StableStore interface.
-func (i *InmemStore) Set(key []byte, val []byte) error {
- i.l.Lock()
- defer i.l.Unlock()
- i.kv[string(key)] = val
- return nil
-}
-
-// Get implements the StableStore interface.
-func (i *InmemStore) Get(key []byte) ([]byte, error) {
- i.l.RLock()
- defer i.l.RUnlock()
- return i.kv[string(key)], nil
-}
-
-// SetUint64 implements the StableStore interface.
-func (i *InmemStore) SetUint64(key []byte, val uint64) error {
- i.l.Lock()
- defer i.l.Unlock()
- i.kvInt[string(key)] = val
- return nil
-}
-
-// GetUint64 implements the StableStore interface.
-func (i *InmemStore) GetUint64(key []byte) (uint64, error) {
- i.l.RLock()
- defer i.l.RUnlock()
- return i.kvInt[string(key)], nil
-}
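
Editor's note: a short demo of the LogStore half of `InmemStore`, store a few entries, read one back, then truncate a prefix with `DeleteRange`. Testing-only, per the type's own warning; the function name is illustrative.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// storeDemo exercises the LogStore half of InmemStore.
func storeDemo() error {
	s := raft.NewInmemStore()

	for i := uint64(1); i <= 3; i++ {
		entry := &raft.Log{Index: i, Term: 1, Type: raft.LogCommand, Data: []byte{byte(i)}}
		if err := s.StoreLog(entry); err != nil {
			return err
		}
	}

	var out raft.Log
	if err := s.GetLog(2, &out); err != nil {
		return err
	}
	fmt.Println("entry 2 term:", out.Term)

	// Drop entries 1..2; FirstIndex then reports 3.
	if err := s.DeleteRange(1, 2); err != nil {
		return err
	}
	first, _ := s.FirstIndex()
	last, _ := s.LastIndex()
	fmt.Printf("log now spans %d..%d\n", first, last)
	return nil
}
```
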
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_transport.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_transport.go
deleted file mode 100644
index 3693cd5a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/inmem_transport.go
+++ /dev/null
@@ -1,322 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
- "sync"
- "time"
-)
-
-// NewInmemAddr returns a new in-memory addr with
-// a randomly generated UUID as the ID.
-func NewInmemAddr() ServerAddress {
- return ServerAddress(generateUUID())
-}
-
-// inmemPipeline is used to pipeline requests for the in-mem transport.
-type inmemPipeline struct {
- trans *InmemTransport
- peer *InmemTransport
- peerAddr ServerAddress
-
- doneCh chan AppendFuture
- inprogressCh chan *inmemPipelineInflight
-
- shutdown bool
- shutdownCh chan struct{}
- shutdownLock sync.Mutex
-}
-
-type inmemPipelineInflight struct {
- future *appendFuture
- respCh <-chan RPCResponse
-}
-
-// InmemTransport implements the Transport interface, to allow Raft to be
-// tested in-memory without going over a network.
-type InmemTransport struct {
- sync.RWMutex
- consumerCh chan RPC
- localAddr ServerAddress
- peers map[ServerAddress]*InmemTransport
- pipelines []*inmemPipeline
- timeout time.Duration
-}
-
-// NewInmemTransport is used to initialize a new transport
-// and generates a random local address if none is specified
-func NewInmemTransport(addr ServerAddress) (ServerAddress, *InmemTransport) {
- if string(addr) == "" {
- addr = NewInmemAddr()
- }
- trans := &InmemTransport{
- consumerCh: make(chan RPC, 16),
- localAddr: addr,
- peers: make(map[ServerAddress]*InmemTransport),
- timeout: 50 * time.Millisecond,
- }
- return addr, trans
-}
-
-// SetHeartbeatHandler is used to set an optional fast path for
-// heartbeats; it is not supported by this transport.
-func (i *InmemTransport) SetHeartbeatHandler(cb func(RPC)) {
-}
-
-// Consumer implements the Transport interface.
-func (i *InmemTransport) Consumer() <-chan RPC {
- return i.consumerCh
-}
-
-// LocalAddr implements the Transport interface.
-func (i *InmemTransport) LocalAddr() ServerAddress {
- return i.localAddr
-}
-
-// AppendEntriesPipeline returns an interface that can be used to pipeline
-// AppendEntries requests.
-func (i *InmemTransport) AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error) {
- i.RLock()
- peer, ok := i.peers[target]
- i.RUnlock()
- if !ok {
- return nil, fmt.Errorf("failed to connect to peer: %v", target)
- }
- pipeline := newInmemPipeline(i, peer, target)
- i.Lock()
- i.pipelines = append(i.pipelines, pipeline)
- i.Unlock()
- return pipeline, nil
-}
-
-// AppendEntries implements the Transport interface.
-func (i *InmemTransport) AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error {
- rpcResp, err := i.makeRPC(target, args, nil, i.timeout)
- if err != nil {
- return err
- }
-
- // Copy the result back
- out := rpcResp.Response.(*AppendEntriesResponse)
- *resp = *out
- return nil
-}
-
-// RequestVote implements the Transport interface.
-func (i *InmemTransport) RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error {
- rpcResp, err := i.makeRPC(target, args, nil, i.timeout)
- if err != nil {
- return err
- }
-
- // Copy the result back
- out := rpcResp.Response.(*RequestVoteResponse)
- *resp = *out
- return nil
-}
-
-// InstallSnapshot implements the Transport interface.
-func (i *InmemTransport) InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error {
- rpcResp, err := i.makeRPC(target, args, data, 10*i.timeout)
- if err != nil {
- return err
- }
-
- // Copy the result back
- out := rpcResp.Response.(*InstallSnapshotResponse)
- *resp = *out
- return nil
-}
-
-func (i *InmemTransport) makeRPC(target ServerAddress, args interface{}, r io.Reader, timeout time.Duration) (rpcResp RPCResponse, err error) {
- i.RLock()
- peer, ok := i.peers[target]
- i.RUnlock()
-
- if !ok {
- err = fmt.Errorf("failed to connect to peer: %v", target)
- return
- }
-
- // Send the RPC over
- respCh := make(chan RPCResponse)
- peer.consumerCh <- RPC{
- Command: args,
- Reader: r,
- RespChan: respCh,
- }
-
- // Wait for a response
- select {
- case rpcResp = <-respCh:
- if rpcResp.Error != nil {
- err = rpcResp.Error
- }
- case <-time.After(timeout):
- err = fmt.Errorf("command timed out")
- }
- return
-}
-
-// EncodePeer implements the Transport interface.
-func (i *InmemTransport) EncodePeer(p ServerAddress) []byte {
- return []byte(p)
-}
-
-// DecodePeer implements the Transport interface.
-func (i *InmemTransport) DecodePeer(buf []byte) ServerAddress {
- return ServerAddress(buf)
-}
-
-// Connect is used to connect this transport to another transport for
-// a given peer name. This allows for local routing.
-func (i *InmemTransport) Connect(peer ServerAddress, t Transport) {
- trans := t.(*InmemTransport)
- i.Lock()
- defer i.Unlock()
- i.peers[peer] = trans
-}
-
-// Disconnect is used to remove the ability to route to a given peer.
-func (i *InmemTransport) Disconnect(peer ServerAddress) {
- i.Lock()
- defer i.Unlock()
- delete(i.peers, peer)
-
- // Disconnect any pipelines
- n := len(i.pipelines)
- for idx := 0; idx < n; idx++ {
- if i.pipelines[idx].peerAddr == peer {
- i.pipelines[idx].Close()
- i.pipelines[idx], i.pipelines[n-1] = i.pipelines[n-1], nil
- idx--
- n--
- }
- }
- i.pipelines = i.pipelines[:n]
-}
-
-// DisconnectAll is used to remove all routes to peers.
-func (i *InmemTransport) DisconnectAll() {
- i.Lock()
- defer i.Unlock()
- i.peers = make(map[ServerAddress]*InmemTransport)
-
- // Handle pipelines
- for _, pipeline := range i.pipelines {
- pipeline.Close()
- }
- i.pipelines = nil
-}
-
-// Close is used to permanently disable the transport
-func (i *InmemTransport) Close() error {
- i.DisconnectAll()
- return nil
-}
-
-func newInmemPipeline(trans *InmemTransport, peer *InmemTransport, addr ServerAddress) *inmemPipeline {
- i := &inmemPipeline{
- trans: trans,
- peer: peer,
- peerAddr: addr,
- doneCh: make(chan AppendFuture, 16),
- inprogressCh: make(chan *inmemPipelineInflight, 16),
- shutdownCh: make(chan struct{}),
- }
- go i.decodeResponses()
- return i
-}
-
-func (i *inmemPipeline) decodeResponses() {
- timeout := i.trans.timeout
- for {
- select {
- case inp := <-i.inprogressCh:
- var timeoutCh <-chan time.Time
- if timeout > 0 {
- timeoutCh = time.After(timeout)
- }
-
- select {
- case rpcResp := <-inp.respCh:
- // Copy the result back
- *inp.future.resp = *rpcResp.Response.(*AppendEntriesResponse)
- inp.future.respond(rpcResp.Error)
-
- select {
- case i.doneCh <- inp.future:
- case <-i.shutdownCh:
- return
- }
-
- case <-timeoutCh:
- inp.future.respond(fmt.Errorf("command timed out"))
- select {
- case i.doneCh <- inp.future:
- case <-i.shutdownCh:
- return
- }
-
- case <-i.shutdownCh:
- return
- }
- case <-i.shutdownCh:
- return
- }
- }
-}
-
-func (i *inmemPipeline) AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error) {
- // Create a new future
- future := &appendFuture{
- start: time.Now(),
- args: args,
- resp: resp,
- }
- future.init()
-
- // Handle a timeout
- var timeout <-chan time.Time
- if i.trans.timeout > 0 {
- timeout = time.After(i.trans.timeout)
- }
-
- // Send the RPC over
- respCh := make(chan RPCResponse, 1)
- rpc := RPC{
- Command: args,
- RespChan: respCh,
- }
- select {
- case i.peer.consumerCh <- rpc:
- case <-timeout:
- return nil, fmt.Errorf("command enqueue timeout")
- case <-i.shutdownCh:
- return nil, ErrPipelineShutdown
- }
-
- // Send to be decoded
- select {
- case i.inprogressCh <- &inmemPipelineInflight{future, respCh}:
- return future, nil
- case <-i.shutdownCh:
- return nil, ErrPipelineShutdown
- }
-}
-
-func (i *inmemPipeline) Consumer() <-chan AppendFuture {
- return i.doneCh
-}
-
-func (i *inmemPipeline) Close() error {
- i.shutdownLock.Lock()
- defer i.shutdownLock.Unlock()
- if i.shutdown {
- return nil
- }
-
- i.shutdown = true
- close(i.shutdownCh)
- return nil
-}
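
Editor's note: a minimal sketch of wiring two in-memory transports together and answering one RPC via the `RespChan` shown above. It assumes the `Term`/`Granted` fields on the vote messages as in upstream raft; this is an illustration of the Connect/Consumer flow, not a real election.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// wireTwoNodes connects two in-memory transports and answers a single
// RequestVote RPC inline.
func wireTwoNodes() error {
	_, a := raft.NewInmemTransport("") // empty address: a random one is generated
	addrB, b := raft.NewInmemTransport("")

	// Routing is explicit: a must be told how to reach b.
	a.Connect(addrB, b)

	// Serve exactly one RPC on b by replying on the RPC's response channel.
	go func() {
		rpc := <-b.Consumer()
		rpc.RespChan <- raft.RPCResponse{Response: &raft.RequestVoteResponse{Granted: true}}
	}()

	var resp raft.RequestVoteResponse
	if err := a.RequestVote(addrB, &raft.RequestVoteRequest{Term: 1}, &resp); err != nil {
		return err
	}
	fmt.Println("vote granted:", resp.Granted)
	return nil
}
```
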
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log.go
deleted file mode 100644
index 4ade38ec..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log.go
+++ /dev/null
@@ -1,72 +0,0 @@
-package raft
-
-// LogType describes various types of log entries.
-type LogType uint8
-
-const (
- // LogCommand is applied to a user FSM.
- LogCommand LogType = iota
-
- // LogNoop is used to assert leadership.
- LogNoop
-
-	// LogAddPeerDeprecated is used to add a new peer. This should only be
- // older protocol versions designed to be compatible with unversioned
- // Raft servers. See comments in config.go for details.
- LogAddPeerDeprecated
-
-	// LogRemovePeerDeprecated is used to remove an existing peer. This should only be
- // used with older protocol versions designed to be compatible with
- // unversioned Raft servers. See comments in config.go for details.
- LogRemovePeerDeprecated
-
- // LogBarrier is used to ensure all preceding operations have been
- // applied to the FSM. It is similar to LogNoop, but instead of returning
- // once committed, it only returns once the FSM manager acks it. Otherwise
- // it is possible there are operations committed but not yet applied to
- // the FSM.
- LogBarrier
-
- // LogConfiguration establishes a membership change configuration. It is
- // created when a server is added, removed, promoted, etc. Only used
- // when protocol version 1 or greater is in use.
- LogConfiguration
-)
-
-// Log entries are replicated to all members of the Raft cluster
-// and form the heart of the replicated state machine.
-type Log struct {
- // Index holds the index of the log entry.
- Index uint64
-
- // Term holds the election term of the log entry.
- Term uint64
-
- // Type holds the type of the log entry.
- Type LogType
-
- // Data holds the log entry's type-specific data.
- Data []byte
-}
-
-// LogStore is used to provide an interface for storing
-// and retrieving logs in a durable fashion.
-type LogStore interface {
- // FirstIndex returns the first index written. 0 for no entries.
- FirstIndex() (uint64, error)
-
- // LastIndex returns the last index written. 0 for no entries.
- LastIndex() (uint64, error)
-
- // GetLog gets a log entry at a given index.
- GetLog(index uint64, log *Log) error
-
- // StoreLog stores a log entry.
- StoreLog(log *Log) error
-
- // StoreLogs stores multiple log entries.
- StoreLogs(logs []*Log) error
-
- // DeleteRange deletes a range of log entries. The range is inclusive.
- DeleteRange(min, max uint64) error
-}
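
Editor's note: a small helper exercising the LogStore contract documented above. `copyLogs` is an illustrative function, not part of the library.

```go
package example

import "github.com/hashicorp/raft"

// copyLogs copies the inclusive index range [min, max] from one LogStore
// to another, exercising the interface contract above.
func copyLogs(src, dst raft.LogStore, min, max uint64) error {
	batch := make([]*raft.Log, 0, max-min+1)
	for idx := min; idx <= max; idx++ {
		var l raft.Log
		if err := src.GetLog(idx, &l); err != nil {
			return err // raft.ErrLogNotFound when the index is absent
		}
		cp := l
		batch = append(batch, &cp)
	}
	return dst.StoreLogs(batch)
}
```
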
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log_cache.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log_cache.go
deleted file mode 100644
index 952e98c2..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/log_cache.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package raft
-
-import (
- "fmt"
- "sync"
-)
-
-// LogCache wraps any LogStore implementation to provide an
-// in-memory ring buffer. This is used to cache access to
-// the recently written entries. For implementations that do not
-// cache themselves, this can provide a substantial boost by
-// avoiding disk I/O on recent entries.
-type LogCache struct {
- store LogStore
-
- cache []*Log
- l sync.RWMutex
-}
-
-// NewLogCache is used to create a new LogCache with the
-// given capacity and backend store.
-func NewLogCache(capacity int, store LogStore) (*LogCache, error) {
- if capacity <= 0 {
- return nil, fmt.Errorf("capacity must be positive")
- }
- c := &LogCache{
- store: store,
- cache: make([]*Log, capacity),
- }
- return c, nil
-}
-
-func (c *LogCache) GetLog(idx uint64, log *Log) error {
- // Check the buffer for an entry
- c.l.RLock()
- cached := c.cache[idx%uint64(len(c.cache))]
- c.l.RUnlock()
-
- // Check if entry is valid
- if cached != nil && cached.Index == idx {
- *log = *cached
- return nil
- }
-
- // Forward request on cache miss
- return c.store.GetLog(idx, log)
-}
-
-func (c *LogCache) StoreLog(log *Log) error {
- return c.StoreLogs([]*Log{log})
-}
-
-func (c *LogCache) StoreLogs(logs []*Log) error {
- // Insert the logs into the ring buffer
- c.l.Lock()
- for _, l := range logs {
- c.cache[l.Index%uint64(len(c.cache))] = l
- }
- c.l.Unlock()
-
- return c.store.StoreLogs(logs)
-}
-
-func (c *LogCache) FirstIndex() (uint64, error) {
- return c.store.FirstIndex()
-}
-
-func (c *LogCache) LastIndex() (uint64, error) {
- return c.store.LastIndex()
-}
-
-func (c *LogCache) DeleteRange(min, max uint64) error {
- // Invalidate the cache on deletes
- c.l.Lock()
- c.cache = make([]*Log, len(c.cache))
- c.l.Unlock()
-
- return c.store.DeleteRange(min, max)
-}
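
Editor's note: composing the cache with a backing store is a one-liner; a sketch using the testing `InmemStore` as the backend (the helper name is illustrative).

```go
package example

import "github.com/hashicorp/raft"

// newCachedStore layers a 128-entry LogCache over the testing InmemStore.
// Reads of recent indexes are served from the ring buffer; a slot is only
// trusted when the cached entry's Index matches the requested one, so
// evicted or stale slots fall through to the backing store.
func newCachedStore() (raft.LogStore, error) {
	return raft.NewLogCache(128, raft.NewInmemStore())
}
```
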
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/membership.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/membership.md
deleted file mode 100644
index df1f83e2..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/membership.md
+++ /dev/null
@@ -1,83 +0,0 @@
-Simon (@superfell) and I (@ongardie) talked through reworking this library's cluster membership changes last Friday. We don't see a way to split this into independent patches, so we're taking the next best approach: submitting the plan here for review, then working on an enormous PR. Your feedback would be appreciated. (@superfell is out this week, however, so don't expect him to respond quickly.)
-
-These are the main goals:
- - Bringing things in line with the description in my PhD dissertation;
- - Catching up new servers prior to granting them a vote, as well as allowing permanent non-voting members; and
- - Eliminating the `peers.json` file, to avoid issues of consistency between that and the log/snapshot.
-
-## Data-centric view
-
-We propose to re-define a *configuration* as a set of servers, where each server includes an address (as it does today) and a mode that is either:
- - *Voter*: a server whose vote is counted in elections and whose match index is used in advancing the leader's commit index.
- - *Nonvoter*: a server that receives log entries but is not considered for elections or commitment purposes.
- - *Staging*: a server that acts like a nonvoter with one exception: once a staging server receives enough log entries to catch up sufficiently to the leader's log, the leader will invoke a membership change to change the staging server to a voter.
-
-All changes to the configuration will be done by writing a new configuration to the log. The new configuration will be in effect as soon as it is appended to the log (not when it is committed like a normal state machine command). Note that, per my dissertation, there can be at most one uncommitted configuration at a time (the next configuration may not be created until the prior one has been committed). It's not strictly necessary to follow these same rules for the nonvoter/staging servers, but we think it's best to treat all changes uniformly.
-
-Each server will track two configurations:
- 1. its *committed configuration*: the latest configuration in the log/snapshot that has been committed, along with its index.
- 2. its *latest configuration*: the latest configuration in the log/snapshot (may be committed or uncommitted), along with its index.
-
-When there's no membership change happening, these two will be the same. The latest configuration is almost always the one used, except:
- - When followers truncate the suffix of their logs, they may need to fall back to the committed configuration.
- - When snapshotting, the committed configuration is written, to correspond with the committed log prefix that is being snapshotted.
-
-
-## Application API
-
-We propose the following operations for clients to manipulate the cluster configuration:
- - AddVoter: server becomes staging unless voter,
- - AddNonvoter: server becomes nonvoter unless staging or voter,
- - DemoteVoter: server becomes nonvoter unless absent,
- - RemovePeer: server removed from configuration,
- - GetConfiguration: waits for latest config to commit, returns committed config.
-
-This diagram, of which I'm quite proud, shows the possible transitions:
-```
-+-----------------------------------------------------------------------------+
-| |
-| Start -> +--------+ |
-| ,------<------------| | |
-| / | absent | |
-| / RemovePeer--> | | <---RemovePeer |
-| / | +--------+ \ |
-| / | | \ |
-| AddNonvoter | AddVoter \ |
-| | ,->---' `--<-. | \ |
-| v / \ v \ |
-| +----------+ +----------+ +----------+ |
-| | | ---AddVoter--> | | -log caught up --> | | |
-| | nonvoter | | staging | | voter | |
-| | | <-DemoteVoter- | | ,- | | |
-| +----------+ \ +----------+ / +----------+ |
-| \ / |
-| `--------------<---------------' |
-| |
-+-----------------------------------------------------------------------------+
-```
-
-While these operations aren't quite symmetric, we think they're a good set to capture
-the possible intent of the user. For example, if I want to make sure a server doesn't have a vote, but the server isn't part of the configuration at all, it probably shouldn't be added as a nonvoting server.
-
-Each of these application-level operations will be interpreted by the leader and, if it has an effect, will cause the leader to write a new configuration entry to its log. Which particular application-level operation caused the log entry to be written need not be part of the log entry.
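
Editor's note: the transition rules listed and diagrammed above can be summarized as a small table. The following Go sketch is purely illustrative, a standalone rendering of the proposal's semantics, not the library's eventual implementation or types.

```go
package example

// Suffrage mirrors the three modes from the data-centric view above;
// a standalone illustration, not the library's type.
type Suffrage int

const (
	Voter Suffrage = iota
	Nonvoter
	Staging
)

// nextMode sketches the transition table drawn above: given a server's
// current mode (nil if absent) and one of the proposed operations, it
// returns the mode the new configuration entry would record, and whether
// the server remains present at all.
func nextMode(current *Suffrage, op string) (mode Suffrage, present bool) {
	switch op {
	case "AddVoter":
		if current != nil && *current == Voter {
			return Voter, true // already a voter: no change
		}
		return Staging, true // catch up first; promoted once caught up
	case "AddNonvoter":
		if current != nil && (*current == Staging || *current == Voter) {
			return *current, true // never demotes an existing server
		}
		return Nonvoter, true
	case "DemoteVoter":
		if current == nil {
			return 0, false // absent servers stay absent
		}
		return Nonvoter, true
	case "RemovePeer":
		return 0, false
	}
	if current == nil {
		return 0, false
	}
	return *current, true
}
```
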
-
-## Code implications
-
-This is a non-exhaustive list, but we came up with a few things:
-- Remove the PeerStore: the `peers.json` file introduces the possibility of getting out of sync with the log and snapshot, and it's hard to maintain this atomically as the log changes. It's not clear whether it's meant to track the committed or latest configuration, either.
-- Servers will have to search their snapshot and log to find the committed configuration and the latest configuration on startup.
-- Bootstrap will no longer use `peers.json` but should initialize the log or snapshot with an application-provided configuration entry.
-- Snapshots should store the index of their configuration along with the configuration itself. In my experience with LogCabin, the original log index of the configuration is very useful to include in debug log messages.
-- As noted in hashicorp/raft#84, configuration change requests should come in via a separate channel, and one may not proceed until the last has been committed.
-- As to deciding when a log is sufficiently caught up, implementing a sophisticated algorithm *is* something that can be done in a separate PR. An easy and decent placeholder is: once the staging server has reached 95% of the leader's commit index, promote it.
-
-## Feedback
-
-Again, we're looking for feedback here before we start working on this. Here are some questions to think about:
- - Does this seem like where we want things to go?
- - Is there anything here that should be left out?
- - Is there anything else we're forgetting about?
- - Is there a good way to break this up?
- - What do we need to worry about in terms of backwards compatibility?
- - What implication will this have on current tests?
- - What's the best way to test this code, in particular the small changes that will be sprinkled all over the library?
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/net_transport.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/net_transport.go
deleted file mode 100644
index 7c55ac53..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/net_transport.go
+++ /dev/null
@@ -1,622 +0,0 @@
-package raft
-
-import (
- "bufio"
- "errors"
- "fmt"
- "io"
- "log"
- "net"
- "os"
- "sync"
- "time"
-
- "github.com/hashicorp/go-msgpack/codec"
-)
-
-const (
- rpcAppendEntries uint8 = iota
- rpcRequestVote
- rpcInstallSnapshot
-
- // DefaultTimeoutScale is the default TimeoutScale in a NetworkTransport.
- DefaultTimeoutScale = 256 * 1024 // 256KB
-
- // rpcMaxPipeline controls the maximum number of outstanding
- // AppendEntries RPC calls.
- rpcMaxPipeline = 128
-)
-
-var (
- // ErrTransportShutdown is returned when operations on a transport are
- // invoked after it's been terminated.
- ErrTransportShutdown = errors.New("transport shutdown")
-
- // ErrPipelineShutdown is returned when the pipeline is closed.
- ErrPipelineShutdown = errors.New("append pipeline closed")
-)
-
-/*
-
-NetworkTransport provides a network based transport that can be
-used to communicate with Raft on remote machines. It requires
-an underlying stream layer to provide a stream abstraction, which can
-be simple TCP, TLS, etc.
-
-This transport is very simple and lightweight. Each RPC request is
-framed by sending a byte that indicates the message type, followed
-by the MsgPack encoded request.
-
-The response is an error string followed by the response object;
-both are encoded using MsgPack.
-
-InstallSnapshot is special, in that after the RPC request we stream
-the entire state. That socket is not re-used as the connection state
-is not known if there is an error.
-
-*/
-type NetworkTransport struct {
- connPool map[ServerAddress][]*netConn
- connPoolLock sync.Mutex
-
- consumeCh chan RPC
-
- heartbeatFn func(RPC)
- heartbeatFnLock sync.Mutex
-
- logger *log.Logger
-
- maxPool int
-
- shutdown bool
- shutdownCh chan struct{}
- shutdownLock sync.Mutex
-
- stream StreamLayer
-
- timeout time.Duration
- TimeoutScale int
-}
-
-// StreamLayer is used with the NetworkTransport to provide
-// the low level stream abstraction.
-type StreamLayer interface {
- net.Listener
-
- // Dial is used to create a new outgoing connection
- Dial(address ServerAddress, timeout time.Duration) (net.Conn, error)
-}
-
-type netConn struct {
- target ServerAddress
- conn net.Conn
- r *bufio.Reader
- w *bufio.Writer
- dec *codec.Decoder
- enc *codec.Encoder
-}
-
-func (n *netConn) Release() error {
- return n.conn.Close()
-}
-
-type netPipeline struct {
- conn *netConn
- trans *NetworkTransport
-
- doneCh chan AppendFuture
- inprogressCh chan *appendFuture
-
- shutdown bool
- shutdownCh chan struct{}
- shutdownLock sync.Mutex
-}
-
-// NewNetworkTransport creates a new network transport with the given dialer
-// and listener. The maxPool controls how many connections we will pool. The
-// timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply
-// the timeout by (SnapshotSize / TimeoutScale).
-func NewNetworkTransport(
- stream StreamLayer,
- maxPool int,
- timeout time.Duration,
- logOutput io.Writer,
-) *NetworkTransport {
- if logOutput == nil {
- logOutput = os.Stderr
- }
- return NewNetworkTransportWithLogger(stream, maxPool, timeout, log.New(logOutput, "", log.LstdFlags))
-}
-
-// NewNetworkTransportWithLogger creates a new network transport with the given dialer
-// and listener. The maxPool controls how many connections we will pool. The
-// timeout is used to apply I/O deadlines. For InstallSnapshot, we multiply
-// the timeout by (SnapshotSize / TimeoutScale).
-func NewNetworkTransportWithLogger(
- stream StreamLayer,
- maxPool int,
- timeout time.Duration,
- logger *log.Logger,
-) *NetworkTransport {
- if logger == nil {
- logger = log.New(os.Stderr, "", log.LstdFlags)
- }
- trans := &NetworkTransport{
- connPool: make(map[ServerAddress][]*netConn),
- consumeCh: make(chan RPC),
- logger: logger,
- maxPool: maxPool,
- shutdownCh: make(chan struct{}),
- stream: stream,
- timeout: timeout,
- TimeoutScale: DefaultTimeoutScale,
- }
- go trans.listen()
- return trans
-}
-
-// SetHeartbeatHandler is used to set up a heartbeat handler
-// as a fast path. This is to avoid head-of-line blocking from
-// disk IO.
-func (n *NetworkTransport) SetHeartbeatHandler(cb func(rpc RPC)) {
- n.heartbeatFnLock.Lock()
- defer n.heartbeatFnLock.Unlock()
- n.heartbeatFn = cb
-}
-
-// Close is used to stop the network transport.
-func (n *NetworkTransport) Close() error {
- n.shutdownLock.Lock()
- defer n.shutdownLock.Unlock()
-
- if !n.shutdown {
- close(n.shutdownCh)
- n.stream.Close()
- n.shutdown = true
- }
- return nil
-}
-
-// Consumer implements the Transport interface.
-func (n *NetworkTransport) Consumer() <-chan RPC {
- return n.consumeCh
-}
-
-// LocalAddr implements the Transport interface.
-func (n *NetworkTransport) LocalAddr() ServerAddress {
- return ServerAddress(n.stream.Addr().String())
-}
-
-// IsShutdown is used to check if the transport is shutdown.
-func (n *NetworkTransport) IsShutdown() bool {
- select {
- case <-n.shutdownCh:
- return true
- default:
- return false
- }
-}
-
-// getPooledConn is used to grab a pooled connection.
-func (n *NetworkTransport) getPooledConn(target ServerAddress) *netConn {
- n.connPoolLock.Lock()
- defer n.connPoolLock.Unlock()
-
- conns, ok := n.connPool[target]
- if !ok || len(conns) == 0 {
- return nil
- }
-
- var conn *netConn
- num := len(conns)
- conn, conns[num-1] = conns[num-1], nil
- n.connPool[target] = conns[:num-1]
- return conn
-}
-
-// getConn is used to get a connection from the pool.
-func (n *NetworkTransport) getConn(target ServerAddress) (*netConn, error) {
- // Check for a pooled conn
- if conn := n.getPooledConn(target); conn != nil {
- return conn, nil
- }
-
- // Dial a new connection
- conn, err := n.stream.Dial(target, n.timeout)
- if err != nil {
- return nil, err
- }
-
- // Wrap the conn
- netConn := &netConn{
- target: target,
- conn: conn,
- r: bufio.NewReader(conn),
- w: bufio.NewWriter(conn),
- }
-
- // Setup encoder/decoders
- netConn.dec = codec.NewDecoder(netConn.r, &codec.MsgpackHandle{})
- netConn.enc = codec.NewEncoder(netConn.w, &codec.MsgpackHandle{})
-
- // Done
- return netConn, nil
-}
-
-// returnConn returns a connection back to the pool.
-func (n *NetworkTransport) returnConn(conn *netConn) {
- n.connPoolLock.Lock()
- defer n.connPoolLock.Unlock()
-
- key := conn.target
-	conns := n.connPool[key]
-
- if !n.IsShutdown() && len(conns) < n.maxPool {
- n.connPool[key] = append(conns, conn)
- } else {
- conn.Release()
- }
-}
-
-// AppendEntriesPipeline returns an interface that can be used to pipeline
-// AppendEntries requests.
-func (n *NetworkTransport) AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error) {
- // Get a connection
- conn, err := n.getConn(target)
- if err != nil {
- return nil, err
- }
-
- // Create the pipeline
- return newNetPipeline(n, conn), nil
-}
-
-// AppendEntries implements the Transport interface.
-func (n *NetworkTransport) AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error {
- return n.genericRPC(target, rpcAppendEntries, args, resp)
-}
-
-// RequestVote implements the Transport interface.
-func (n *NetworkTransport) RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error {
- return n.genericRPC(target, rpcRequestVote, args, resp)
-}
-
-// genericRPC handles a simple request/response RPC.
-func (n *NetworkTransport) genericRPC(target ServerAddress, rpcType uint8, args interface{}, resp interface{}) error {
- // Get a conn
- conn, err := n.getConn(target)
- if err != nil {
- return err
- }
-
- // Set a deadline
- if n.timeout > 0 {
- conn.conn.SetDeadline(time.Now().Add(n.timeout))
- }
-
- // Send the RPC
- if err = sendRPC(conn, rpcType, args); err != nil {
- return err
- }
-
- // Decode the response
- canReturn, err := decodeResponse(conn, resp)
- if canReturn {
- n.returnConn(conn)
- }
- return err
-}
-
-// InstallSnapshot implements the Transport interface.
-func (n *NetworkTransport) InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error {
- // Get a conn, always close for InstallSnapshot
- conn, err := n.getConn(target)
- if err != nil {
- return err
- }
- defer conn.Release()
-
- // Set a deadline, scaled by request size
- if n.timeout > 0 {
- timeout := n.timeout * time.Duration(args.Size/int64(n.TimeoutScale))
- if timeout < n.timeout {
- timeout = n.timeout
- }
- conn.conn.SetDeadline(time.Now().Add(timeout))
- }
-
- // Send the RPC
- if err = sendRPC(conn, rpcInstallSnapshot, args); err != nil {
- return err
- }
-
- // Stream the state
- if _, err = io.Copy(conn.w, data); err != nil {
- return err
- }
-
- // Flush
- if err = conn.w.Flush(); err != nil {
- return err
- }
-
- // Decode the response, do not return conn
- _, err = decodeResponse(conn, resp)
- return err
-}
-
-// EncodePeer implements the Transport interface.
-func (n *NetworkTransport) EncodePeer(p ServerAddress) []byte {
- return []byte(p)
-}
-
-// DecodePeer implements the Transport interface.
-func (n *NetworkTransport) DecodePeer(buf []byte) ServerAddress {
- return ServerAddress(buf)
-}
-
-// listen is used to handle incoming connections.
-func (n *NetworkTransport) listen() {
- for {
- // Accept incoming connections
- conn, err := n.stream.Accept()
- if err != nil {
- if n.IsShutdown() {
- return
- }
- n.logger.Printf("[ERR] raft-net: Failed to accept connection: %v", err)
- continue
- }
- n.logger.Printf("[DEBUG] raft-net: %v accepted connection from: %v", n.LocalAddr(), conn.RemoteAddr())
-
-		// Handle the connection in a dedicated routine
- go n.handleConn(conn)
- }
-}
-
-// handleConn is used to handle an inbound connection for its lifespan.
-func (n *NetworkTransport) handleConn(conn net.Conn) {
- defer conn.Close()
- r := bufio.NewReader(conn)
- w := bufio.NewWriter(conn)
- dec := codec.NewDecoder(r, &codec.MsgpackHandle{})
- enc := codec.NewEncoder(w, &codec.MsgpackHandle{})
-
- for {
- if err := n.handleCommand(r, dec, enc); err != nil {
- if err != io.EOF {
- n.logger.Printf("[ERR] raft-net: Failed to decode incoming command: %v", err)
- }
- return
- }
- if err := w.Flush(); err != nil {
- n.logger.Printf("[ERR] raft-net: Failed to flush response: %v", err)
- return
- }
- }
-}
-
-// handleCommand is used to decode and dispatch a single command.
-func (n *NetworkTransport) handleCommand(r *bufio.Reader, dec *codec.Decoder, enc *codec.Encoder) error {
- // Get the rpc type
- rpcType, err := r.ReadByte()
- if err != nil {
- return err
- }
-
- // Create the RPC object
- respCh := make(chan RPCResponse, 1)
- rpc := RPC{
- RespChan: respCh,
- }
-
- // Decode the command
- isHeartbeat := false
- switch rpcType {
- case rpcAppendEntries:
- var req AppendEntriesRequest
- if err := dec.Decode(&req); err != nil {
- return err
- }
- rpc.Command = &req
-
- // Check if this is a heartbeat
- if req.Term != 0 && req.Leader != nil &&
- req.PrevLogEntry == 0 && req.PrevLogTerm == 0 &&
- len(req.Entries) == 0 && req.LeaderCommitIndex == 0 {
- isHeartbeat = true
- }
-
- case rpcRequestVote:
- var req RequestVoteRequest
- if err := dec.Decode(&req); err != nil {
- return err
- }
- rpc.Command = &req
-
- case rpcInstallSnapshot:
- var req InstallSnapshotRequest
- if err := dec.Decode(&req); err != nil {
- return err
- }
- rpc.Command = &req
- rpc.Reader = io.LimitReader(r, req.Size)
-
- default:
- return fmt.Errorf("unknown rpc type %d", rpcType)
- }
-
- // Check for heartbeat fast-path
- if isHeartbeat {
- n.heartbeatFnLock.Lock()
- fn := n.heartbeatFn
- n.heartbeatFnLock.Unlock()
- if fn != nil {
- fn(rpc)
- goto RESP
- }
- }
-
- // Dispatch the RPC
- select {
- case n.consumeCh <- rpc:
- case <-n.shutdownCh:
- return ErrTransportShutdown
- }
-
- // Wait for response
-RESP:
- select {
- case resp := <-respCh:
- // Send the error first
- respErr := ""
- if resp.Error != nil {
- respErr = resp.Error.Error()
- }
- if err := enc.Encode(respErr); err != nil {
- return err
- }
-
- // Send the response
- if err := enc.Encode(resp.Response); err != nil {
- return err
- }
- case <-n.shutdownCh:
- return ErrTransportShutdown
- }
- return nil
-}
-
-// decodeResponse is used to decode an RPC response and reports whether
-// the connection can be reused.
-func decodeResponse(conn *netConn, resp interface{}) (bool, error) {
- // Decode the error if any
- var rpcError string
- if err := conn.dec.Decode(&rpcError); err != nil {
- conn.Release()
- return false, err
- }
-
- // Decode the response
- if err := conn.dec.Decode(resp); err != nil {
- conn.Release()
- return false, err
- }
-
- // Format an error if any
- if rpcError != "" {
- return true, fmt.Errorf(rpcError)
- }
- return true, nil
-}
-
-// sendRPC is used to encode and send the RPC.
-func sendRPC(conn *netConn, rpcType uint8, args interface{}) error {
- // Write the request type
- if err := conn.w.WriteByte(rpcType); err != nil {
- conn.Release()
- return err
- }
-
- // Send the request
- if err := conn.enc.Encode(args); err != nil {
- conn.Release()
- return err
- }
-
- // Flush
- if err := conn.w.Flush(); err != nil {
- conn.Release()
- return err
- }
- return nil
-}
-
-// newNetPipeline is used to construct a netPipeline from a given
-// transport and connection.
-func newNetPipeline(trans *NetworkTransport, conn *netConn) *netPipeline {
- n := &netPipeline{
- conn: conn,
- trans: trans,
- doneCh: make(chan AppendFuture, rpcMaxPipeline),
- inprogressCh: make(chan *appendFuture, rpcMaxPipeline),
- shutdownCh: make(chan struct{}),
- }
- go n.decodeResponses()
- return n
-}
-
-// decodeResponses is a long running routine that decodes the responses
-// sent on the connection.
-func (n *netPipeline) decodeResponses() {
- timeout := n.trans.timeout
- for {
- select {
- case future := <-n.inprogressCh:
- if timeout > 0 {
- n.conn.conn.SetReadDeadline(time.Now().Add(timeout))
- }
-
- _, err := decodeResponse(n.conn, future.resp)
- future.respond(err)
- select {
- case n.doneCh <- future:
- case <-n.shutdownCh:
- return
- }
- case <-n.shutdownCh:
- return
- }
- }
-}
-
-// AppendEntries is used to pipeline a new append entries request.
-func (n *netPipeline) AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error) {
- // Create a new future
- future := &appendFuture{
- start: time.Now(),
- args: args,
- resp: resp,
- }
- future.init()
-
- // Add a send timeout
- if timeout := n.trans.timeout; timeout > 0 {
- n.conn.conn.SetWriteDeadline(time.Now().Add(timeout))
- }
-
- // Send the RPC
- if err := sendRPC(n.conn, rpcAppendEntries, future.args); err != nil {
- return nil, err
- }
-
- // Hand-off for decoding, this can also cause back-pressure
- // to prevent too many inflight requests
- select {
- case n.inprogressCh <- future:
- return future, nil
- case <-n.shutdownCh:
- return nil, ErrPipelineShutdown
- }
-}
-
-// Consumer returns a channel that can be used to consume complete futures.
-func (n *netPipeline) Consumer() <-chan AppendFuture {
- return n.doneCh
-}
-
-// Close is used to shut down the pipeline connection.
-func (n *netPipeline) Close() error {
- n.shutdownLock.Lock()
- defer n.shutdownLock.Unlock()
- if n.shutdown {
- return nil
- }
-
- // Release the connection
- n.conn.Release()
-
- n.shutdown = true
- close(n.shutdownCh)
- return nil
-}
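
Editor's note: the request framing described in the NetworkTransport comment (one type byte, then the MsgPack-encoded body) can be reproduced standalone. A sketch for illustration; the library does this inside `sendRPC`, and `frameRPC` is a made-up name.

```go
package example

import (
	"bufio"
	"bytes"

	"github.com/hashicorp/go-msgpack/codec"
)

// frameRPC shows the wire layout of a request: one type byte followed by
// the MsgPack-encoded request body.
func frameRPC(rpcType uint8, args interface{}) ([]byte, error) {
	var buf bytes.Buffer
	w := bufio.NewWriter(&buf)

	// Byte 0 selects the handler: rpcAppendEntries, rpcRequestVote, ...
	if err := w.WriteByte(rpcType); err != nil {
		return nil, err
	}
	// The body is MsgPack, matching the decoder in handleCommand.
	enc := codec.NewEncoder(w, &codec.MsgpackHandle{})
	if err := enc.Encode(args); err != nil {
		return nil, err
	}
	if err := w.Flush(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```
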
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/observer.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/observer.go
deleted file mode 100644
index 22500fa8..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/observer.go
+++ /dev/null
@@ -1,115 +0,0 @@
-package raft
-
-import (
- "sync/atomic"
-)
-
-// Observation is sent along the given channel to observers when an event occurs.
-type Observation struct {
- // Raft holds the Raft instance generating the observation.
- Raft *Raft
- // Data holds observation-specific data. Possible types are
- // *RequestVoteRequest and RaftState.
- Data interface{}
-}
-
-// nextObserverID is used to provide a unique ID for each observer to aid in
-// deregistration.
-var nextObserverID uint64
-
-// FilterFn is a function that can be registered in order to filter observations.
-// The function reports whether the observation should be included - if
-// it returns false, the observation will be filtered out.
-type FilterFn func(o *Observation) bool
-
-// Observer describes what to do with a given observation.
-type Observer struct {
- // channel receives observations.
- channel chan Observation
-
- // blocking, if true, will cause Raft to block when sending an observation
- // to this observer. This should generally be set to false.
- blocking bool
-
- // filter will be called to determine if an observation should be sent to
- // the channel.
- filter FilterFn
-
- // id is the ID of this observer in the Raft map.
- id uint64
-
- // numObserved and numDropped are performance counters for this observer.
- numObserved uint64
- numDropped uint64
-}
-
-// NewObserver creates a new observer that can be registered
-// to make observations on a Raft instance. Observations
-// will be sent on the given channel if they satisfy the
-// given filter.
-//
-// If blocking is true, the observer will block when it can't
-// send on the channel, otherwise it may discard events.
-func NewObserver(channel chan Observation, blocking bool, filter FilterFn) *Observer {
- return &Observer{
- channel: channel,
- blocking: blocking,
- filter: filter,
- id: atomic.AddUint64(&nextObserverID, 1),
- }
-}
-
-// GetNumObserved returns the number of observations.
-func (or *Observer) GetNumObserved() uint64 {
- return atomic.LoadUint64(&or.numObserved)
-}
-
-// GetNumDropped returns the number of dropped observations due to blocking.
-func (or *Observer) GetNumDropped() uint64 {
- return atomic.LoadUint64(&or.numDropped)
-}
-
-// RegisterObserver registers a new observer.
-func (r *Raft) RegisterObserver(or *Observer) {
- r.observersLock.Lock()
- defer r.observersLock.Unlock()
- r.observers[or.id] = or
-}
-
-// DeregisterObserver deregisters an observer.
-func (r *Raft) DeregisterObserver(or *Observer) {
- r.observersLock.Lock()
- defer r.observersLock.Unlock()
- delete(r.observers, or.id)
-}
-
-// observe sends an observation to every observer.
-func (r *Raft) observe(o interface{}) {
- // In general observers should not block. But in any case this isn't
- // disastrous as we only hold a read lock, which merely prevents
- // registration / deregistration of observers.
- r.observersLock.RLock()
- defer r.observersLock.RUnlock()
- for _, or := range r.observers {
- // It's wasteful to do this in the loop, but for the common case
- // where there are no observers we won't create any objects.
- ob := Observation{Raft: r, Data: o}
- if or.filter != nil && !or.filter(&ob) {
- continue
- }
- if or.channel == nil {
- continue
- }
- if or.blocking {
- or.channel <- ob
- atomic.AddUint64(&or.numObserved, 1)
- } else {
- select {
- case or.channel <- ob:
- atomic.AddUint64(&or.numObserved, 1)
- default:
- atomic.AddUint64(&or.numDropped, 1)
- }
- }
- }
-}
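
Editor's note: a sketch of registering a filtered, non-blocking observer, assuming `r` is a configured `*raft.Raft` node (the helper name is illustrative).

```go
package example

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// watchVotes registers a non-blocking observer that only receives
// RequestVote observations.
func watchVotes(r *raft.Raft) *raft.Observer {
	ch := make(chan raft.Observation, 16)
	onlyVotes := func(o *raft.Observation) bool {
		_, ok := o.Data.(*raft.RequestVoteRequest)
		return ok
	}

	// blocking=false: Raft drops events (counting them) rather than stall.
	obs := raft.NewObserver(ch, false, onlyVotes)
	r.RegisterObserver(obs)

	go func() {
		for o := range ch {
			fmt.Printf("observed vote request: %+v\n", o.Data)
		}
	}()
	return obs // caller should eventually r.DeregisterObserver(obs)
}
```
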
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/peersjson.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/peersjson.go
deleted file mode 100644
index c55fdbb4..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/peersjson.go
+++ /dev/null
@@ -1,46 +0,0 @@
-package raft
-
-import (
- "bytes"
- "encoding/json"
- "io/ioutil"
-)
-
-// ReadPeersJSON consumes a legacy peers.json file in the format of the old JSON
-// peer store and creates a new-style configuration structure. This can be used
-// to migrate this data or perform manual recovery when running protocol versions
-// that can interoperate with older, unversioned Raft servers. This should not be
-// used once server IDs are in use, because the old peers.json file didn't have
-// support for these, nor non-voter suffrage types.
-func ReadPeersJSON(path string) (Configuration, error) {
- // Read in the file.
- buf, err := ioutil.ReadFile(path)
- if err != nil {
- return Configuration{}, err
- }
-
- // Parse it as JSON.
- var peers []string
- dec := json.NewDecoder(bytes.NewReader(buf))
- if err := dec.Decode(&peers); err != nil {
- return Configuration{}, err
- }
-
- // Map it into the new-style configuration structure. We can only specify
- // voter roles here, and the ID has to be the same as the address.
- var configuration Configuration
- for _, peer := range peers {
- server := Server{
- Suffrage: Voter,
- ID: ServerID(peer),
- Address: ServerAddress(peer),
- }
- configuration.Servers = append(configuration.Servers, server)
- }
-
- // We should only ingest valid configurations.
- if err := checkConfiguration(configuration); err != nil {
- return Configuration{}, err
- }
- return configuration, nil
-}
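
The legacy peers.json that ReadPeersJSON consumed was simply a JSON array of addresses. A minimal recovery sketch; the file path and addresses are illustrative, not taken from this repository:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/raft"
)

func main() {
	// A legacy peers.json might contain:
	//   ["10.0.1.1:8300","10.0.1.2:8300","10.0.1.3:8300"]
	// Each address becomes a Voter whose ID equals its address, and the
	// resulting configuration is validated before being returned.
	config, err := raft.ReadPeersJSON("/var/lib/app/raft/peers.json")
	if err != nil {
		log.Fatalf("failed to read peers.json: %v", err)
	}
	for _, s := range config.Servers {
		fmt.Printf("%v %v %v\n", s.ID, s.Address, s.Suffrage)
	}
}
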
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/raft.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/raft.go
deleted file mode 100644
index aa8fe820..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/raft.go
+++ /dev/null
@@ -1,1456 +0,0 @@
-package raft
-
-import (
- "bytes"
- "container/list"
- "fmt"
- "io"
- "time"
-
- "github.com/armon/go-metrics"
-)
-
-const (
- minCheckInterval = 10 * time.Millisecond
-)
-
-var (
- keyCurrentTerm = []byte("CurrentTerm")
- keyLastVoteTerm = []byte("LastVoteTerm")
- keyLastVoteCand = []byte("LastVoteCand")
-)
-
-// getRPCHeader returns an initialized RPCHeader struct for the given
-// Raft instance. This structure is sent along with RPC requests and
-// responses.
-func (r *Raft) getRPCHeader() RPCHeader {
- return RPCHeader{
- ProtocolVersion: r.conf.ProtocolVersion,
- }
-}
-
-// checkRPCHeader houses logic about whether this instance of Raft can process
-// the given RPC message.
-func (r *Raft) checkRPCHeader(rpc RPC) error {
- // Get the header off the RPC message.
- wh, ok := rpc.Command.(WithRPCHeader)
- if !ok {
- return fmt.Errorf("RPC does not have a header")
- }
- header := wh.GetRPCHeader()
-
- // First check is to just make sure the code can understand the
- // protocol at all.
- if header.ProtocolVersion < ProtocolVersionMin ||
- header.ProtocolVersion > ProtocolVersionMax {
- return ErrUnsupportedProtocol
- }
-
- // Second check is whether we should support this message, given the
- // current protocol we are configured to run. This will drop support
- // for protocol version 0 starting at protocol version 2, which is
- // currently what we want, and in general support one version back. We
- // may need to revisit this policy depending on how future protocol
- // changes evolve.
- if header.ProtocolVersion < r.conf.ProtocolVersion-1 {
- return ErrUnsupportedProtocol
- }
-
- return nil
-}
-
-// getSnapshotVersion returns the snapshot version that should be used when
-// creating snapshots, given the protocol version in use.
-func getSnapshotVersion(protocolVersion ProtocolVersion) SnapshotVersion {
- // Right now we only have two versions and they are backwards compatible
- // so we don't need to look at the protocol version.
- return 1
-}
-
-// commitTuple is used to send an index that was committed,
-// with an optional associated future that should be invoked.
-type commitTuple struct {
- log *Log
- future *logFuture
-}
-
-// leaderState is state that is used while we are a leader.
-type leaderState struct {
- commitCh chan struct{}
- commitment *commitment
- inflight *list.List // list of logFuture in log index order
- replState map[ServerID]*followerReplication
- notify map[*verifyFuture]struct{}
- stepDown chan struct{}
-}
-
-// setLeader is used to modify the current leader of the cluster
-func (r *Raft) setLeader(leader ServerAddress) {
- r.leaderLock.Lock()
- r.leader = leader
- r.leaderLock.Unlock()
-}
-
-// requestConfigChange is a helper for the above functions that make
-// configuration change requests. 'req' describes the change. For timeout,
-// see AddVoter.
-func (r *Raft) requestConfigChange(req configurationChangeRequest, timeout time.Duration) IndexFuture {
- var timer <-chan time.Time
- if timeout > 0 {
- timer = time.After(timeout)
- }
- future := &configurationChangeFuture{
- req: req,
- }
- future.init()
- select {
- case <-timer:
- return errorFuture{ErrEnqueueTimeout}
- case r.configurationChangeCh <- future:
- return future
- case <-r.shutdownCh:
- return errorFuture{ErrRaftShutdown}
- }
-}
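
The timer handling above leans on a useful Go property: receiving from a nil channel blocks forever, so leaving timer nil when timeout <= 0 silently disables the timeout case. The same enqueue-with-timeout pattern in isolation; the names are illustrative and not part of this package:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errEnqueueTimeout = errors.New("timed out enqueuing operation")

// enqueue offers req on work, giving up after timeout (0 means wait forever).
func enqueue(work chan<- string, req string, timeout time.Duration) error {
	var timer <-chan time.Time // stays nil when timeout <= 0: that case never fires
	if timeout > 0 {
		timer = time.After(timeout)
	}
	select {
	case work <- req:
		return nil
	case <-timer:
		return errEnqueueTimeout
	}
}

func main() {
	work := make(chan string) // unbuffered and never drained: forces a timeout
	fmt.Println(enqueue(work, "add-voter", 50*time.Millisecond))
}
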
-
-// run is a long running goroutine that runs the Raft FSM.
-func (r *Raft) run() {
- for {
- // Check if we are doing a shutdown
- select {
- case <-r.shutdownCh:
- // Clear the leader to prevent forwarding
- r.setLeader("")
- return
- default:
- }
-
- // Enter into a sub-FSM
- switch r.getState() {
- case Follower:
- r.runFollower()
- case Candidate:
- r.runCandidate()
- case Leader:
- r.runLeader()
- }
- }
-}
-
-// runFollower runs the FSM for a follower.
-func (r *Raft) runFollower() {
- didWarn := false
- r.logger.Printf("[INFO] raft: %v entering Follower state (Leader: %q)", r, r.Leader())
- metrics.IncrCounter([]string{"raft", "state", "follower"}, 1)
- heartbeatTimer := randomTimeout(r.conf.HeartbeatTimeout)
- for {
- select {
- case rpc := <-r.rpcCh:
- r.processRPC(rpc)
-
- case c := <-r.configurationChangeCh:
- // Reject any operations since we are not the leader
- c.respond(ErrNotLeader)
-
- case a := <-r.applyCh:
- // Reject any operations since we are not the leader
- a.respond(ErrNotLeader)
-
- case v := <-r.verifyCh:
- // Reject any operations since we are not the leader
- v.respond(ErrNotLeader)
-
- case r := <-r.userRestoreCh:
- // Reject any restores since we are not the leader
- r.respond(ErrNotLeader)
-
- case c := <-r.configurationsCh:
- c.configurations = r.configurations.Clone()
- c.respond(nil)
-
- case b := <-r.bootstrapCh:
- b.respond(r.liveBootstrap(b.configuration))
-
- case <-heartbeatTimer:
- // Restart the heartbeat timer
- heartbeatTimer = randomTimeout(r.conf.HeartbeatTimeout)
-
- // Check if we have had a successful contact
- lastContact := r.LastContact()
- if time.Since(lastContact) < r.conf.HeartbeatTimeout {
- continue
- }
-
- // Heartbeat failed! Transition to the candidate state
- lastLeader := r.Leader()
- r.setLeader("")
-
- if r.configurations.latestIndex == 0 {
- if !didWarn {
- r.logger.Printf("[WARN] raft: no known peers, aborting election")
- didWarn = true
- }
- } else if r.configurations.latestIndex == r.configurations.committedIndex &&
- !hasVote(r.configurations.latest, r.localID) {
- if !didWarn {
- r.logger.Printf("[WARN] raft: not part of stable configuration, aborting election")
- didWarn = true
- }
- } else {
- r.logger.Printf(`[WARN] raft: Heartbeat timeout from %q reached, starting election`, lastLeader)
- metrics.IncrCounter([]string{"raft", "transition", "heartbeat_timeout"}, 1)
- r.setState(Candidate)
- return
- }
-
- case <-r.shutdownCh:
- return
- }
- }
-}
-
-// liveBootstrap attempts to seed an initial configuration for the cluster. See
-// the Raft object's member BootstrapCluster for more details. This must only be
-// called on the main thread, and only makes sense in the follower state.
-func (r *Raft) liveBootstrap(configuration Configuration) error {
- // Use the pre-init API to make the static updates.
- err := BootstrapCluster(&r.conf, r.logs, r.stable, r.snapshots,
- r.trans, configuration)
- if err != nil {
- return err
- }
-
- // Make the configuration live.
- var entry Log
- if err := r.logs.GetLog(1, &entry); err != nil {
- panic(err)
- }
- r.setCurrentTerm(1)
- r.setLastLog(entry.Index, entry.Term)
- r.processConfigurationLogEntry(&entry)
- return nil
-}
-
-// runCandidate runs the FSM for a candidate.
-func (r *Raft) runCandidate() {
- r.logger.Printf("[INFO] raft: %v entering Candidate state in term %v",
- r, r.getCurrentTerm()+1)
- metrics.IncrCounter([]string{"raft", "state", "candidate"}, 1)
-
- // Start vote for us, and set a timeout
- voteCh := r.electSelf()
- electionTimer := randomTimeout(r.conf.ElectionTimeout)
-
- // Tally the votes, need a simple majority
- grantedVotes := 0
- votesNeeded := r.quorumSize()
- r.logger.Printf("[DEBUG] raft: Votes needed: %d", votesNeeded)
-
- for r.getState() == Candidate {
- select {
- case rpc := <-r.rpcCh:
- r.processRPC(rpc)
-
- case vote := <-voteCh:
- // Check if the term is greater than ours, bail
- if vote.Term > r.getCurrentTerm() {
- r.logger.Printf("[DEBUG] raft: Newer term discovered, fallback to follower")
- r.setState(Follower)
- r.setCurrentTerm(vote.Term)
- return
- }
-
- // Check if the vote is granted
- if vote.Granted {
- grantedVotes++
- r.logger.Printf("[DEBUG] raft: Vote granted from %s in term %v. Tally: %d",
- vote.voterID, vote.Term, grantedVotes)
- }
-
- // Check if we've become the leader
- if grantedVotes >= votesNeeded {
- r.logger.Printf("[INFO] raft: Election won. Tally: %d", grantedVotes)
- r.setState(Leader)
- r.setLeader(r.localAddr)
- return
- }
-
- case c := <-r.configurationChangeCh:
- // Reject any operations since we are not the leader
- c.respond(ErrNotLeader)
-
- case a := <-r.applyCh:
- // Reject any operations since we are not the leader
- a.respond(ErrNotLeader)
-
- case v := <-r.verifyCh:
- // Reject any operations since we are not the leader
- v.respond(ErrNotLeader)
-
- case r := <-r.userRestoreCh:
- // Reject any restores since we are not the leader
- r.respond(ErrNotLeader)
-
- case c := <-r.configurationsCh:
- c.configurations = r.configurations.Clone()
- c.respond(nil)
-
- case b := <-r.bootstrapCh:
- b.respond(ErrCantBootstrap)
-
- case <-electionTimer:
- // Election failed! Restart the election. We simply return,
- // which will kick us back into runCandidate
- r.logger.Printf("[WARN] raft: Election timeout reached, restarting election")
- return
-
- case <-r.shutdownCh:
- return
- }
- }
-}
-
-// runLeader runs the FSM for a leader. Do the setup here and drop into
-// the leaderLoop for the hot loop.
-func (r *Raft) runLeader() {
- r.logger.Printf("[INFO] raft: %v entering Leader state", r)
- metrics.IncrCounter([]string{"raft", "state", "leader"}, 1)
-
- // Notify that we are the leader
- asyncNotifyBool(r.leaderCh, true)
-
- // Push to the notify channel if given
- if notify := r.conf.NotifyCh; notify != nil {
- select {
- case notify <- true:
- case <-r.shutdownCh:
- }
- }
-
- // Setup leader state
- r.leaderState.commitCh = make(chan struct{}, 1)
- r.leaderState.commitment = newCommitment(r.leaderState.commitCh,
- r.configurations.latest,
- r.getLastIndex()+1 /* first index that may be committed in this term */)
- r.leaderState.inflight = list.New()
- r.leaderState.replState = make(map[ServerID]*followerReplication)
- r.leaderState.notify = make(map[*verifyFuture]struct{})
- r.leaderState.stepDown = make(chan struct{}, 1)
-
- // Cleanup state on step down
- defer func() {
- // Since we were the leader previously, we update our
- // last contact time when we step down, so that we are not
- // reporting a last contact time from before we were the
- // leader. Otherwise, to a client it would seem our data
- // is extremely stale.
- r.setLastContact()
-
- // Stop replication
- for _, p := range r.leaderState.replState {
- close(p.stopCh)
- }
-
- // Respond to all inflight operations
- for e := r.leaderState.inflight.Front(); e != nil; e = e.Next() {
- e.Value.(*logFuture).respond(ErrLeadershipLost)
- }
-
- // Respond to any pending verify requests
- for future := range r.leaderState.notify {
- future.respond(ErrLeadershipLost)
- }
-
- // Clear all the state
- r.leaderState.commitCh = nil
- r.leaderState.commitment = nil
- r.leaderState.inflight = nil
- r.leaderState.replState = nil
- r.leaderState.notify = nil
- r.leaderState.stepDown = nil
-
- // If we are stepping down for some reason, no known leader.
- // We may have stepped down due to an RPC call, which would
- // provide the leader, so we cannot always blank this out.
- r.leaderLock.Lock()
- if r.leader == r.localAddr {
- r.leader = ""
- }
- r.leaderLock.Unlock()
-
- // Notify that we are not the leader
- asyncNotifyBool(r.leaderCh, false)
-
- // Push to the notify channel if given
- if notify := r.conf.NotifyCh; notify != nil {
- select {
- case notify <- false:
- case <-r.shutdownCh:
- // On shutdown, make a best effort but do not block
- select {
- case notify <- false:
- default:
- }
- }
- }
- }()
-
- // Start a replication routine for each peer
- r.startStopReplication()
-
- // Dispatch a no-op log entry first. This gets this leader up to the latest
- // possible commit index, even in the absence of client commands. This used
- // to append a configuration entry instead of a noop. However, that permits
- // an unbounded number of uncommitted configurations in the log. We now
- // maintain that there exists at most one uncommitted configuration entry in
- // any log, so we have to do proper no-ops here.
- noop := &logFuture{
- log: Log{
- Type: LogNoop,
- },
- }
- r.dispatchLogs([]*logFuture{noop})
-
- // Sit in the leader loop until we step down
- r.leaderLoop()
-}
-
-// startStopReplication will set up state and start asynchronous replication to
-// new peers, and stop replication to removed peers. Before removing a peer,
-// it'll instruct the replication routines to try to replicate to the current
-// index. This must only be called from the main thread.
-func (r *Raft) startStopReplication() {
- inConfig := make(map[ServerID]bool, len(r.configurations.latest.Servers))
- lastIdx := r.getLastIndex()
-
- // Start replication goroutines that need starting
- for _, server := range r.configurations.latest.Servers {
- if server.ID == r.localID {
- continue
- }
- inConfig[server.ID] = true
- if _, ok := r.leaderState.replState[server.ID]; !ok {
- r.logger.Printf("[INFO] raft: Added peer %v, starting replication", server.ID)
- s := &followerReplication{
- peer: server,
- commitment: r.leaderState.commitment,
- stopCh: make(chan uint64, 1),
- triggerCh: make(chan struct{}, 1),
- currentTerm: r.getCurrentTerm(),
- nextIndex: lastIdx + 1,
- lastContact: time.Now(),
- notifyCh: make(chan struct{}, 1),
- stepDown: r.leaderState.stepDown,
- }
- r.leaderState.replState[server.ID] = s
- r.goFunc(func() { r.replicate(s) })
- asyncNotifyCh(s.triggerCh)
- }
- }
-
- // Stop replication goroutines that need stopping
- for serverID, repl := range r.leaderState.replState {
- if inConfig[serverID] {
- continue
- }
- // Replicate up to lastIdx and stop
- r.logger.Printf("[INFO] raft: Removed peer %v, stopping replication after %v", serverID, lastIdx)
- repl.stopCh <- lastIdx
- close(repl.stopCh)
- delete(r.leaderState.replState, serverID)
- }
-}
-
-// configurationChangeChIfStable returns r.configurationChangeCh if it's safe
-// to process requests from it, or nil otherwise. This must only be called
-// from the main thread.
-//
-// Note that if the conditions here were to change outside of leaderLoop to take
-// this from nil to non-nil, we would need leaderLoop to be kicked.
-func (r *Raft) configurationChangeChIfStable() chan *configurationChangeFuture {
- // Have to wait until:
- // 1. The latest configuration is committed, and
- // 2. This leader has committed some entry (the noop) in this term
- // https://groups.google.com/forum/#!msg/raft-dev/t4xj6dJTP6E/d2D9LrWRza8J
- if r.configurations.latestIndex == r.configurations.committedIndex &&
- r.getCommitIndex() >= r.leaderState.commitment.startIndex {
- return r.configurationChangeCh
- }
- return nil
-}
-
-// leaderLoop is the hot loop for a leader. It is invoked
-// after all the various leader setup is done.
-func (r *Raft) leaderLoop() {
- // stepDown is used to track if there is an inflight log that
- // would cause us to lose leadership (specifically a RemovePeer of
- // ourselves). If this is the case, we must not allow any logs to
- // be processed in parallel, otherwise we are basing commit on
- // only a single peer (ourself) and replicating to an undefined set
- // of peers.
- stepDown := false
-
- lease := time.After(r.conf.LeaderLeaseTimeout)
- for r.getState() == Leader {
- select {
- case rpc := <-r.rpcCh:
- r.processRPC(rpc)
-
- case <-r.leaderState.stepDown:
- r.setState(Follower)
-
- case <-r.leaderState.commitCh:
- // Process the newly committed entries
- oldCommitIndex := r.getCommitIndex()
- commitIndex := r.leaderState.commitment.getCommitIndex()
- r.setCommitIndex(commitIndex)
-
- if r.configurations.latestIndex > oldCommitIndex &&
- r.configurations.latestIndex <= commitIndex {
- r.configurations.committed = r.configurations.latest
- r.configurations.committedIndex = r.configurations.latestIndex
- if !hasVote(r.configurations.committed, r.localID) {
- stepDown = true
- }
- }
-
- for {
- e := r.leaderState.inflight.Front()
- if e == nil {
- break
- }
- commitLog := e.Value.(*logFuture)
- idx := commitLog.log.Index
- if idx > commitIndex {
- break
- }
- // Measure the commit time
- metrics.MeasureSince([]string{"raft", "commitTime"}, commitLog.dispatch)
- r.processLogs(idx, commitLog)
- r.leaderState.inflight.Remove(e)
- }
-
- if stepDown {
- if r.conf.ShutdownOnRemove {
- r.logger.Printf("[INFO] raft: Removed ourself, shutting down")
- r.Shutdown()
- } else {
- r.logger.Printf("[INFO] raft: Removed ourself, transitioning to follower")
- r.setState(Follower)
- }
- }
-
- case v := <-r.verifyCh:
- if v.quorumSize == 0 {
- // Just dispatched, start the verification
- r.verifyLeader(v)
-
- } else if v.votes < v.quorumSize {
- // Early return, means there must be a new leader
- r.logger.Printf("[WARN] raft: New leader elected, stepping down")
- r.setState(Follower)
- delete(r.leaderState.notify, v)
- v.respond(ErrNotLeader)
-
- } else {
- // Quorum of members agree, we are still leader
- delete(r.leaderState.notify, v)
- v.respond(nil)
- }
-
- case future := <-r.userRestoreCh:
- err := r.restoreUserSnapshot(future.meta, future.reader)
- future.respond(err)
-
- case c := <-r.configurationsCh:
- c.configurations = r.configurations.Clone()
- c.respond(nil)
-
- case future := <-r.configurationChangeChIfStable():
- r.appendConfigurationEntry(future)
-
- case b := <-r.bootstrapCh:
- b.respond(ErrCantBootstrap)
-
- case newLog := <-r.applyCh:
- // Group commit, gather all the ready commits
- ready := []*logFuture{newLog}
- GROUP_COMMIT_LOOP:
- for i := 0; i < r.conf.MaxAppendEntries; i++ {
- select {
- case newLog := <-r.applyCh:
- ready = append(ready, newLog)
- default:
- // A bare break would only exit the select and leave the
- // loop spinning; the label breaks out of the batching loop.
- break GROUP_COMMIT_LOOP
- }
- }
-
- // Dispatch the logs
- if stepDown {
- // we're in the process of stepping down as leader, don't process anything new
- for i := range ready {
- ready[i].respond(ErrNotLeader)
- }
- } else {
- r.dispatchLogs(ready)
- }
-
- case <-lease:
- // Check if we've exceeded the lease, potentially stepping down
- maxDiff := r.checkLeaderLease()
-
- // Next check interval should adjust for the last node we've
- // contacted, without going negative
- checkInterval := r.conf.LeaderLeaseTimeout - maxDiff
- if checkInterval < minCheckInterval {
- checkInterval = minCheckInterval
- }
-
- // Renew the lease timer
- lease = time.After(checkInterval)
-
- case <-r.shutdownCh:
- return
- }
- }
-}
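
To make the lease-renewal arithmetic at the bottom of leaderLoop concrete: with a LeaderLeaseTimeout of 500ms, if the most stale follower that still counted toward quorum was heard from 450ms ago, the next check fires in 50ms, and the interval is floored at minCheckInterval so the loop never spins. A hypothetical illustration with made-up numbers:

package main

import (
	"fmt"
	"time"
)

const minCheckInterval = 10 * time.Millisecond // mirrors the package constant

func main() {
	leaseTimeout := 500 * time.Millisecond
	// maxDiff is the staleness of the slowest follower counted by checkLeaderLease.
	for _, maxDiff := range []time.Duration{450 * time.Millisecond, 495 * time.Millisecond} {
		checkInterval := leaseTimeout - maxDiff
		if checkInterval < minCheckInterval {
			checkInterval = minCheckInterval // floor avoids a hot loop
		}
		fmt.Println(maxDiff, "->", checkInterval) // 50ms, then the 10ms floor
	}
}
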
-
-// verifyLeader must be called from the main thread for safety.
-// Causes the followers to attempt an immediate heartbeat.
-func (r *Raft) verifyLeader(v *verifyFuture) {
- // Current leader always votes for self
- v.votes = 1
-
- // Set the quorum size, hot-path for single node
- v.quorumSize = r.quorumSize()
- if v.quorumSize == 1 {
- v.respond(nil)
- return
- }
-
- // Track this request
- v.notifyCh = r.verifyCh
- r.leaderState.notify[v] = struct{}{}
-
- // Trigger immediate heartbeats
- for _, repl := range r.leaderState.replState {
- repl.notifyLock.Lock()
- repl.notify = append(repl.notify, v)
- repl.notifyLock.Unlock()
- asyncNotifyCh(repl.notifyCh)
- }
-}
-
-// checkLeaderLease is used to check if we can contact a quorum of nodes
-// within the last leader lease interval. If not, we need to step down,
-// as we may have lost connectivity. Returns the maximum duration without
-// contact. This must only be called from the main thread.
-func (r *Raft) checkLeaderLease() time.Duration {
- // Track contacted nodes, we can always contact ourself
- contacted := 1
-
- // Check each follower
- var maxDiff time.Duration
- now := time.Now()
- for peer, f := range r.leaderState.replState {
- diff := now.Sub(f.LastContact())
- if diff <= r.conf.LeaderLeaseTimeout {
- contacted++
- if diff > maxDiff {
- maxDiff = diff
- }
- } else {
- // Warn only while the failure is recent (within 3x the lease);
- // beyond that, drop to debug to avoid spamming the logs.
- if diff <= 3*r.conf.LeaderLeaseTimeout {
- r.logger.Printf("[WARN] raft: Failed to contact %v in %v", peer, diff)
- } else {
- r.logger.Printf("[DEBUG] raft: Failed to contact %v in %v", peer, diff)
- }
- }
- metrics.AddSample([]string{"raft", "leader", "lastContact"}, float32(diff/time.Millisecond))
- }
-
- // Verify we can contact a quorum
- quorum := r.quorumSize()
- if contacted < quorum {
- r.logger.Printf("[WARN] raft: Failed to contact quorum of nodes, stepping down")
- r.setState(Follower)
- metrics.IncrCounter([]string{"raft", "transition", "leader_lease_timeout"}, 1)
- }
- return maxDiff
-}
-
-// quorumSize is used to return the quorum size. This must only be called on
-// the main thread.
-// TODO: revisit usage
-func (r *Raft) quorumSize() int {
- voters := 0
- for _, server := range r.configurations.latest.Servers {
- if server.Suffrage == Voter {
- voters++
- }
- }
- return voters/2 + 1
-}
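
Since only servers with Voter suffrage count, quorum is a strict majority of voters: 3 voters need 2, 4 need 3, 5 need 3. The integer math, as a quick illustration:

package main

import "fmt"

func main() {
	for _, voters := range []int{1, 2, 3, 4, 5} {
		fmt.Printf("%d voters -> quorum of %d\n", voters, voters/2+1)
	}
}
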
-
-// restoreUserSnapshot is used to manually consume an external snapshot, such
-// as if restoring from a backup. We will use the current Raft configuration,
-// not the one from the snapshot, so that we can restore into a new cluster. We
-// will also use the higher of the index of the snapshot, or the current index,
-// and then add 1 to that, so we force a new state with a hole in the Raft log,
-// so that the snapshot will be sent to followers and used for any new joiners.
-// This can only be run on the leader, and returns a future that can be used to
-// block until complete.
-func (r *Raft) restoreUserSnapshot(meta *SnapshotMeta, reader io.Reader) error {
- defer metrics.MeasureSince([]string{"raft", "restoreUserSnapshot"}, time.Now())
-
- // Sanity check the version.
- version := meta.Version
- if version < SnapshotVersionMin || version > SnapshotVersionMax {
- return fmt.Errorf("unsupported snapshot version %d", version)
- }
-
- // We don't support snapshots while there's a config change
- // outstanding since the snapshot doesn't have a means to
- // represent this state.
- committedIndex := r.configurations.committedIndex
- latestIndex := r.configurations.latestIndex
- if committedIndex != latestIndex {
- return fmt.Errorf("cannot restore snapshot now, wait until the configuration entry at %v has been applied (have applied %v)",
- latestIndex, committedIndex)
- }
-
- // Cancel any inflight requests.
- for {
- e := r.leaderState.inflight.Front()
- if e == nil {
- break
- }
- e.Value.(*logFuture).respond(ErrAbortedByRestore)
- r.leaderState.inflight.Remove(e)
- }
-
- // We will overwrite the snapshot metadata with the current term,
- // an index that's greater than the current index, or the last
- // index in the snapshot. It's important that we leave a hole in
- // the index so we know there's nothing in the Raft log there and
- // replication will fault and send the snapshot.
- term := r.getCurrentTerm()
- lastIndex := r.getLastIndex()
- if meta.Index > lastIndex {
- lastIndex = meta.Index
- }
- lastIndex++
-
- // Dump the snapshot. Note that we use the latest configuration,
- // not the one that came with the snapshot.
- sink, err := r.snapshots.Create(version, lastIndex, term,
- r.configurations.latest, r.configurations.latestIndex, r.trans)
- if err != nil {
- return fmt.Errorf("failed to create snapshot: %v", err)
- }
- n, err := io.Copy(sink, reader)
- if err != nil {
- sink.Cancel()
- return fmt.Errorf("failed to write snapshot: %v", err)
- }
- if n != meta.Size {
- sink.Cancel()
- return fmt.Errorf("failed to write snapshot, size didn't match (%d != %d)", n, meta.Size)
- }
- if err := sink.Close(); err != nil {
- return fmt.Errorf("failed to close snapshot: %v", err)
- }
- r.logger.Printf("[INFO] raft: Copied %d bytes to local snapshot", n)
-
- // Restore the snapshot into the FSM. If this fails we are in a
- // bad state so we panic to take ourselves out.
- fsm := &restoreFuture{ID: sink.ID()}
- fsm.init()
- select {
- case r.fsmMutateCh <- fsm:
- case <-r.shutdownCh:
- return ErrRaftShutdown
- }
- if err := fsm.Error(); err != nil {
- panic(fmt.Errorf("failed to restore snapshot: %v", err))
- }
-
- // We set the last log so it looks like we've stored the empty
- // index we burned. The last applied is set because we made the
- // FSM take the snapshot state, and we store the last snapshot
- // in the stable store since we created a snapshot as part of
- // this process.
- r.setLastLog(lastIndex, term)
- r.setLastApplied(lastIndex)
- r.setLastSnapshot(lastIndex, term)
-
- r.logger.Printf("[INFO] raft: Restored user snapshot (index %d)", lastIndex)
- return nil
-}
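
The index bookkeeping above is easiest to see with numbers: if the cluster's last index is 90 and the external snapshot claims index 120, the restore is written at index 121, and entry 121 is deliberately absent from the log so replication to followers faults and ships the snapshot. A hypothetical illustration of just that arithmetic:

package main

import "fmt"

func main() {
	currentLast := uint64(90) // what r.getLastIndex() would report
	metaIndex := uint64(120)  // index recorded in the external snapshot

	lastIndex := currentLast
	if metaIndex > lastIndex {
		lastIndex = metaIndex
	}
	lastIndex++ // 121: a deliberate hole in the log

	fmt.Println("snapshot stored at index", lastIndex)
}
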
-
-// appendConfigurationEntry changes the configuration and adds a new
-// configuration entry to the log. This must only be called from the
-// main thread.
-func (r *Raft) appendConfigurationEntry(future *configurationChangeFuture) {
- configuration, err := nextConfiguration(r.configurations.latest, r.configurations.latestIndex, future.req)
- if err != nil {
- future.respond(err)
- return
- }
-
- r.logger.Printf("[INFO] raft: Updating configuration with %s (%v, %v) to %+v",
- future.req.command, future.req.serverID, future.req.serverAddress, configuration.Servers)
-
- // In pre-ID compatibility mode we translate all configuration changes
- // in to an old remove peer message, which can handle all supported
- // cases for peer changes in the pre-ID world (adding and removing
- // voters). Both add peer and remove peer log entries are handled
- // similarly on old Raft servers, but remove peer does extra checks to
- // see if a leader needs to step down. Since they both assert the full
-// configuration, we can safely call remove peer for everything.
- if r.protocolVersion < 2 {
- future.log = Log{
- Type: LogRemovePeerDeprecated,
- Data: encodePeers(configuration, r.trans),
- }
- } else {
- future.log = Log{
- Type: LogConfiguration,
- Data: encodeConfiguration(configuration),
- }
- }
-
- r.dispatchLogs([]*logFuture{&future.logFuture})
- index := future.Index()
- r.configurations.latest = configuration
- r.configurations.latestIndex = index
- r.leaderState.commitment.setConfiguration(configuration)
- r.startStopReplication()
-}
-
-// dispatchLogs is called on the leader to push a log to disk, mark it
-// as inflight and begin replication of it.
-func (r *Raft) dispatchLogs(applyLogs []*logFuture) {
- now := time.Now()
- defer metrics.MeasureSince([]string{"raft", "leader", "dispatchLog"}, now)
-
- term := r.getCurrentTerm()
- lastIndex := r.getLastIndex()
- logs := make([]*Log, len(applyLogs))
-
- for idx, applyLog := range applyLogs {
- applyLog.dispatch = now
- lastIndex++
- applyLog.log.Index = lastIndex
- applyLog.log.Term = term
- logs[idx] = &applyLog.log
- r.leaderState.inflight.PushBack(applyLog)
- }
-
- // Write the log entry locally
- if err := r.logs.StoreLogs(logs); err != nil {
- r.logger.Printf("[ERR] raft: Failed to commit logs: %v", err)
- for _, applyLog := range applyLogs {
- applyLog.respond(err)
- }
- r.setState(Follower)
- return
- }
- r.leaderState.commitment.match(r.localID, lastIndex)
-
- // Update the last log since it's on disk now
- r.setLastLog(lastIndex, term)
-
- // Notify the replicators of the new log
- for _, f := range r.leaderState.replState {
- asyncNotifyCh(f.triggerCh)
- }
-}
-
-// processLogs is used to apply all the committed entries that haven't been
-// applied up to the given index limit.
-// This can be called from both leaders and followers.
-// Followers call this from AppendEntries, for n entries at a time, and always
-// pass future=nil.
-// Leaders call this once per inflight when entries are committed. They pass
-// the future from inflights.
-func (r *Raft) processLogs(index uint64, future *logFuture) {
- // Reject logs we've applied already
- lastApplied := r.getLastApplied()
- if index <= lastApplied {
- r.logger.Printf("[WARN] raft: Skipping application of old log: %d", index)
- return
- }
-
- // Apply all the preceding logs
- for idx := r.getLastApplied() + 1; idx <= index; idx++ {
- // Get the log, either from the future or from our log store
- if future != nil && future.log.Index == idx {
- r.processLog(&future.log, future)
-
- } else {
- l := new(Log)
- if err := r.logs.GetLog(idx, l); err != nil {
- r.logger.Printf("[ERR] raft: Failed to get log at %d: %v", idx, err)
- panic(err)
- }
- r.processLog(l, nil)
- }
-
- // Update the lastApplied index and term
- r.setLastApplied(idx)
- }
-}
-
-// processLog is invoked to process the application of a single committed log entry.
-func (r *Raft) processLog(l *Log, future *logFuture) {
- switch l.Type {
- case LogBarrier:
- // Barrier is handled by the FSM
- fallthrough
-
- case LogCommand:
- // Forward to the fsm handler
- select {
- case r.fsmMutateCh <- &commitTuple{l, future}:
- case <-r.shutdownCh:
- if future != nil {
- future.respond(ErrRaftShutdown)
- }
- }
-
- // Return so that the future is only responded to
- // by the FSM handler when the application is done
- return
-
- case LogConfiguration:
- case LogAddPeerDeprecated:
- case LogRemovePeerDeprecated:
- case LogNoop:
- // Ignore the no-op
-
- default:
- panic(fmt.Errorf("unrecognized log type: %#v", l))
- }
-
- // Invoke the future if given
- if future != nil {
- future.respond(nil)
- }
-}
-
-// processRPC is called to handle an incoming RPC request. This must only be
-// called from the main thread.
-func (r *Raft) processRPC(rpc RPC) {
- if err := r.checkRPCHeader(rpc); err != nil {
- rpc.Respond(nil, err)
- return
- }
-
- switch cmd := rpc.Command.(type) {
- case *AppendEntriesRequest:
- r.appendEntries(rpc, cmd)
- case *RequestVoteRequest:
- r.requestVote(rpc, cmd)
- case *InstallSnapshotRequest:
- r.installSnapshot(rpc, cmd)
- default:
- r.logger.Printf("[ERR] raft: Got unexpected command: %#v", rpc.Command)
- rpc.Respond(nil, fmt.Errorf("unexpected command"))
- }
-}
-
-// processHeartbeat is a special handler used just for heartbeat requests
-// so that they can be fast-pathed if a transport supports it. This must only
-// be called from the main thread.
-func (r *Raft) processHeartbeat(rpc RPC) {
- defer metrics.MeasureSince([]string{"raft", "rpc", "processHeartbeat"}, time.Now())
-
- // Check if we are shutdown, just ignore the RPC
- select {
- case <-r.shutdownCh:
- return
- default:
- }
-
- // Ensure we are only handling a heartbeat
- switch cmd := rpc.Command.(type) {
- case *AppendEntriesRequest:
- r.appendEntries(rpc, cmd)
- default:
- r.logger.Printf("[ERR] raft: Expected heartbeat, got command: %#v", rpc.Command)
- rpc.Respond(nil, fmt.Errorf("unexpected command"))
- }
-}
-
-// appendEntries is invoked when we get an append entries RPC call. This must
-// only be called from the main thread.
-func (r *Raft) appendEntries(rpc RPC, a *AppendEntriesRequest) {
- defer metrics.MeasureSince([]string{"raft", "rpc", "appendEntries"}, time.Now())
- // Setup a response
- resp := &AppendEntriesResponse{
- RPCHeader: r.getRPCHeader(),
- Term: r.getCurrentTerm(),
- LastLog: r.getLastIndex(),
- Success: false,
- NoRetryBackoff: false,
- }
- var rpcErr error
- defer func() {
- rpc.Respond(resp, rpcErr)
- }()
-
- // Ignore an older term
- if a.Term < r.getCurrentTerm() {
- return
- }
-
- // Increase the term if we see a newer one, also transition to follower
- // if we ever get an appendEntries call
- if a.Term > r.getCurrentTerm() || r.getState() != Follower {
- // Ensure transition to follower
- r.setState(Follower)
- r.setCurrentTerm(a.Term)
- resp.Term = a.Term
- }
-
- // Save the current leader
- r.setLeader(ServerAddress(r.trans.DecodePeer(a.Leader)))
-
- // Verify the last log entry
- if a.PrevLogEntry > 0 {
- lastIdx, lastTerm := r.getLastEntry()
-
- var prevLogTerm uint64
- if a.PrevLogEntry == lastIdx {
- prevLogTerm = lastTerm
-
- } else {
- var prevLog Log
- if err := r.logs.GetLog(a.PrevLogEntry, &prevLog); err != nil {
- r.logger.Printf("[WARN] raft: Failed to get previous log: %d %v (last: %d)",
- a.PrevLogEntry, err, lastIdx)
- resp.NoRetryBackoff = true
- return
- }
- prevLogTerm = prevLog.Term
- }
-
- if a.PrevLogTerm != prevLogTerm {
- r.logger.Printf("[WARN] raft: Previous log term mis-match: ours: %d remote: %d",
- prevLogTerm, a.PrevLogTerm)
- resp.NoRetryBackoff = true
- return
- }
- }
-
- // Process any new entries
- if len(a.Entries) > 0 {
- start := time.Now()
-
- // Delete any conflicting entries, skip any duplicates
- lastLogIdx, _ := r.getLastLog()
- var newEntries []*Log
- for i, entry := range a.Entries {
- if entry.Index > lastLogIdx {
- newEntries = a.Entries[i:]
- break
- }
- var storeEntry Log
- if err := r.logs.GetLog(entry.Index, &storeEntry); err != nil {
- r.logger.Printf("[WARN] raft: Failed to get log entry %d: %v",
- entry.Index, err)
- return
- }
- if entry.Term != storeEntry.Term {
- r.logger.Printf("[WARN] raft: Clearing log suffix from %d to %d", entry.Index, lastLogIdx)
- if err := r.logs.DeleteRange(entry.Index, lastLogIdx); err != nil {
- r.logger.Printf("[ERR] raft: Failed to clear log suffix: %v", err)
- return
- }
- if entry.Index <= r.configurations.latestIndex {
- r.configurations.latest = r.configurations.committed
- r.configurations.latestIndex = r.configurations.committedIndex
- }
- newEntries = a.Entries[i:]
- break
- }
- }
-
- if n := len(newEntries); n > 0 {
- // Append the new entries
- if err := r.logs.StoreLogs(newEntries); err != nil {
- r.logger.Printf("[ERR] raft: Failed to append to logs: %v", err)
- // TODO: leaving r.getLastLog() in the wrong
- // state if there was a truncation above
- return
- }
-
- // Handle any new configuration changes
- for _, newEntry := range newEntries {
- r.processConfigurationLogEntry(newEntry)
- }
-
- // Update the lastLog
- last := newEntries[n-1]
- r.setLastLog(last.Index, last.Term)
- }
-
- metrics.MeasureSince([]string{"raft", "rpc", "appendEntries", "storeLogs"}, start)
- }
-
- // Update the commit index
- if a.LeaderCommitIndex > 0 && a.LeaderCommitIndex > r.getCommitIndex() {
- start := time.Now()
- idx := min(a.LeaderCommitIndex, r.getLastIndex())
- r.setCommitIndex(idx)
- if r.configurations.latestIndex <= idx {
- r.configurations.committed = r.configurations.latest
- r.configurations.committedIndex = r.configurations.latestIndex
- }
- r.processLogs(idx, nil)
- metrics.MeasureSince([]string{"raft", "rpc", "appendEntries", "processLogs"}, start)
- }
-
- // Everything went well, set success
- resp.Success = true
- r.setLastContact()
- return
-}
-
-// processConfigurationLogEntry takes a log entry and updates the latest
-// configuration if the entry results in a new configuration. This must only be
-// called from the main thread, or from NewRaft() before any threads have begun.
-func (r *Raft) processConfigurationLogEntry(entry *Log) {
- if entry.Type == LogConfiguration {
- r.configurations.committed = r.configurations.latest
- r.configurations.committedIndex = r.configurations.latestIndex
- r.configurations.latest = decodeConfiguration(entry.Data)
- r.configurations.latestIndex = entry.Index
- } else if entry.Type == LogAddPeerDeprecated || entry.Type == LogRemovePeerDeprecated {
- r.configurations.committed = r.configurations.latest
- r.configurations.committedIndex = r.configurations.latestIndex
- r.configurations.latest = decodePeers(entry.Data, r.trans)
- r.configurations.latestIndex = entry.Index
- }
-}
-
-// requestVote is invoked when we get a RequestVote RPC call.
-func (r *Raft) requestVote(rpc RPC, req *RequestVoteRequest) {
- defer metrics.MeasureSince([]string{"raft", "rpc", "requestVote"}, time.Now())
- r.observe(*req)
-
- // Setup a response
- resp := &RequestVoteResponse{
- RPCHeader: r.getRPCHeader(),
- Term: r.getCurrentTerm(),
- Granted: false,
- }
- var rpcErr error
- defer func() {
- rpc.Respond(resp, rpcErr)
- }()
-
- // Version 0 servers will panic unless the peers list is present. It's only
- // used on them to produce a warning message.
- if r.protocolVersion < 2 {
- resp.Peers = encodePeers(r.configurations.latest, r.trans)
- }
-
- // Check if we have an existing leader [who's not the candidate]
- candidate := r.trans.DecodePeer(req.Candidate)
- if leader := r.Leader(); leader != "" && leader != candidate {
- r.logger.Printf("[WARN] raft: Rejecting vote request from %v since we have a leader: %v",
- candidate, leader)
- return
- }
-
- // Ignore an older term
- if req.Term < r.getCurrentTerm() {
- return
- }
-
- // Increase the term if we see a newer one
- if req.Term > r.getCurrentTerm() {
- // Ensure transition to follower
- r.setState(Follower)
- r.setCurrentTerm(req.Term)
- resp.Term = req.Term
- }
-
- // Check if we have voted yet
- lastVoteTerm, err := r.stable.GetUint64(keyLastVoteTerm)
- if err != nil && err.Error() != "not found" {
- r.logger.Printf("[ERR] raft: Failed to get last vote term: %v", err)
- return
- }
- lastVoteCandBytes, err := r.stable.Get(keyLastVoteCand)
- if err != nil && err.Error() != "not found" {
- r.logger.Printf("[ERR] raft: Failed to get last vote candidate: %v", err)
- return
- }
-
- // Check if we've voted in this election before
- if lastVoteTerm == req.Term && lastVoteCandBytes != nil {
- r.logger.Printf("[INFO] raft: Duplicate RequestVote for same term: %d", req.Term)
- if bytes.Equal(lastVoteCandBytes, req.Candidate) {
- r.logger.Printf("[WARN] raft: Duplicate RequestVote from candidate: %s", req.Candidate)
- resp.Granted = true
- }
- return
- }
-
- // Reject if their last log term is older than ours
- lastIdx, lastTerm := r.getLastEntry()
- if lastTerm > req.LastLogTerm {
- r.logger.Printf("[WARN] raft: Rejecting vote request from %v since our last term is greater (%d, %d)",
- candidate, lastTerm, req.LastLogTerm)
- return
- }
-
- if lastTerm == req.LastLogTerm && lastIdx > req.LastLogIndex {
- r.logger.Printf("[WARN] raft: Rejecting vote request from %v since our last index is greater (%d, %d)",
- candidate, lastIdx, req.LastLogIndex)
- return
- }
-
- // Persist a vote for safety
- if err := r.persistVote(req.Term, req.Candidate); err != nil {
- r.logger.Printf("[ERR] raft: Failed to persist vote: %v", err)
- return
- }
-
- resp.Granted = true
- r.setLastContact()
- return
-}
-
-// installSnapshot is invoked when we get an InstallSnapshot RPC call.
-// We must be in the follower state for this, since it means we are
-// too far behind a leader for log replay. This must only be called
-// from the main thread.
-func (r *Raft) installSnapshot(rpc RPC, req *InstallSnapshotRequest) {
- defer metrics.MeasureSince([]string{"raft", "rpc", "installSnapshot"}, time.Now())
- // Setup a response
- resp := &InstallSnapshotResponse{
- Term: r.getCurrentTerm(),
- Success: false,
- }
- var rpcErr error
- defer func() {
- rpc.Respond(resp, rpcErr)
- }()
-
- // Sanity check the version
- if req.SnapshotVersion < SnapshotVersionMin ||
- req.SnapshotVersion > SnapshotVersionMax {
- rpcErr = fmt.Errorf("unsupported snapshot version %d", req.SnapshotVersion)
- return
- }
-
- // Ignore an older term
- if req.Term < r.getCurrentTerm() {
- return
- }
-
- // Increase the term if we see a newer one
- if req.Term > r.getCurrentTerm() {
- // Ensure transition to follower
- r.setState(Follower)
- r.setCurrentTerm(req.Term)
- resp.Term = req.Term
- }
-
- // Save the current leader
- r.setLeader(ServerAddress(r.trans.DecodePeer(req.Leader)))
-
- // Create a new snapshot
- var reqConfiguration Configuration
- var reqConfigurationIndex uint64
- if req.SnapshotVersion > 0 {
- reqConfiguration = decodeConfiguration(req.Configuration)
- reqConfigurationIndex = req.ConfigurationIndex
- } else {
- reqConfiguration = decodePeers(req.Peers, r.trans)
- reqConfigurationIndex = req.LastLogIndex
- }
- version := getSnapshotVersion(r.protocolVersion)
- sink, err := r.snapshots.Create(version, req.LastLogIndex, req.LastLogTerm,
- reqConfiguration, reqConfigurationIndex, r.trans)
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to create snapshot to install: %v", err)
- rpcErr = fmt.Errorf("failed to create snapshot: %v", err)
- return
- }
-
- // Spill the remote snapshot to disk
- n, err := io.Copy(sink, rpc.Reader)
- if err != nil {
- sink.Cancel()
- r.logger.Printf("[ERR] raft: Failed to copy snapshot: %v", err)
- rpcErr = err
- return
- }
-
- // Check that we received it all
- if n != req.Size {
- sink.Cancel()
- r.logger.Printf("[ERR] raft: Failed to receive whole snapshot: %d / %d", n, req.Size)
- rpcErr = fmt.Errorf("short read")
- return
- }
-
- // Finalize the snapshot
- if err := sink.Close(); err != nil {
- r.logger.Printf("[ERR] raft: Failed to finalize snapshot: %v", err)
- rpcErr = err
- return
- }
- r.logger.Printf("[INFO] raft: Copied %d bytes to local snapshot", n)
-
- // Restore snapshot
- future := &restoreFuture{ID: sink.ID()}
- future.init()
- select {
- case r.fsmMutateCh <- future:
- case <-r.shutdownCh:
- future.respond(ErrRaftShutdown)
- return
- }
-
- // Wait for the restore to happen
- if err := future.Error(); err != nil {
- r.logger.Printf("[ERR] raft: Failed to restore snapshot: %v", err)
- rpcErr = err
- return
- }
-
- // Update the lastApplied so we don't replay old logs
- r.setLastApplied(req.LastLogIndex)
-
- // Update the last stable snapshot info
- r.setLastSnapshot(req.LastLogIndex, req.LastLogTerm)
-
- // Restore the peer set
- r.configurations.latest = reqConfiguration
- r.configurations.latestIndex = reqConfigurationIndex
- r.configurations.committed = reqConfiguration
- r.configurations.committedIndex = reqConfigurationIndex
-
- // Compact logs, continue even if this fails
- if err := r.compactLogs(req.LastLogIndex); err != nil {
- r.logger.Printf("[ERR] raft: Failed to compact logs: %v", err)
- }
-
- r.logger.Printf("[INFO] raft: Installed remote snapshot")
- resp.Success = true
- r.setLastContact()
- return
-}
-
-// setLastContact is used to set the last contact time to now
-func (r *Raft) setLastContact() {
- r.lastContactLock.Lock()
- r.lastContact = time.Now()
- r.lastContactLock.Unlock()
-}
-
-type voteResult struct {
- RequestVoteResponse
- voterID ServerID
-}
-
-// electSelf is used to send a RequestVote RPC to all peers, and vote for
-// ourself. This has the side effect of incrementing the current term. The
-// response channel returned is used to wait for all the responses (including a
-// vote for ourself). This must only be called from the main thread.
-func (r *Raft) electSelf() <-chan *voteResult {
- // Create a response channel
- respCh := make(chan *voteResult, len(r.configurations.latest.Servers))
-
- // Increment the term
- r.setCurrentTerm(r.getCurrentTerm() + 1)
-
- // Construct the request
- lastIdx, lastTerm := r.getLastEntry()
- req := &RequestVoteRequest{
- RPCHeader: r.getRPCHeader(),
- Term: r.getCurrentTerm(),
- Candidate: r.trans.EncodePeer(r.localAddr),
- LastLogIndex: lastIdx,
- LastLogTerm: lastTerm,
- }
-
- // Construct a function to ask for a vote
- askPeer := func(peer Server) {
- r.goFunc(func() {
- defer metrics.MeasureSince([]string{"raft", "candidate", "electSelf"}, time.Now())
- resp := &voteResult{voterID: peer.ID}
- err := r.trans.RequestVote(peer.Address, req, &resp.RequestVoteResponse)
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to make RequestVote RPC to %v: %v", peer, err)
- resp.Term = req.Term
- resp.Granted = false
- }
- respCh <- resp
- })
- }
-
- // For each peer, request a vote
- for _, server := range r.configurations.latest.Servers {
- if server.Suffrage == Voter {
- if server.ID == r.localID {
- // Persist a vote for ourselves
- if err := r.persistVote(req.Term, req.Candidate); err != nil {
- r.logger.Printf("[ERR] raft: Failed to persist vote : %v", err)
- return nil
- }
- // Include our own vote
- respCh <- &voteResult{
- RequestVoteResponse: RequestVoteResponse{
- RPCHeader: r.getRPCHeader(),
- Term: req.Term,
- Granted: true,
- },
- voterID: r.localID,
- }
- } else {
- askPeer(server)
- }
- }
- }
-
- return respCh
-}
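
electSelf's fan-out only works because respCh is buffered to the full server count, so the vote goroutines can always deliver their result and never leak, even if runCandidate stops draining after a majority. The same pattern in isolation, with illustrative names and simulated peers:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	peers := []string{"a", "b", "c", "d", "e"}
	// Buffer one slot per possible response so senders never block
	// once the receiver stops draining (e.g. after winning).
	respCh := make(chan bool, len(peers))
	for range peers {
		go func() {
			time.Sleep(time.Duration(rand.Intn(20)) * time.Millisecond)
			respCh <- rand.Intn(2) == 0 // vote granted or not
		}()
	}
	granted, needed := 0, len(peers)/2+1
	for i := 0; i < len(peers); i++ {
		if <-respCh {
			granted++
		}
		if granted >= needed {
			fmt.Println("won election with", granted, "votes")
			return
		}
	}
	fmt.Println("lost election with", granted, "votes")
}
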
-
-// persistVote is used to persist our vote for safety.
-func (r *Raft) persistVote(term uint64, candidate []byte) error {
- if err := r.stable.SetUint64(keyLastVoteTerm, term); err != nil {
- return err
- }
- if err := r.stable.Set(keyLastVoteCand, candidate); err != nil {
- return err
- }
- return nil
-}
-
-// setCurrentTerm is used to set the current term in a durable manner.
-func (r *Raft) setCurrentTerm(t uint64) {
- // Persist to disk first
- if err := r.stable.SetUint64(keyCurrentTerm, t); err != nil {
- panic(fmt.Errorf("failed to save current term: %v", err))
- }
- r.raftState.setCurrentTerm(t)
-}
-
-// setState is used to update the current state. Any state
-// transition causes the known leader to be cleared. This means
-// that leader should be set only after updating the state.
-func (r *Raft) setState(state RaftState) {
- r.setLeader("")
- oldState := r.raftState.getState()
- r.raftState.setState(state)
- if oldState != state {
- r.observe(state)
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/replication.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/replication.go
deleted file mode 100644
index 68392734..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/replication.go
+++ /dev/null
@@ -1,561 +0,0 @@
-package raft
-
-import (
- "errors"
- "fmt"
- "sync"
- "time"
-
- "github.com/armon/go-metrics"
-)
-
-const (
- maxFailureScale = 12
- failureWait = 10 * time.Millisecond
-)
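
The backoff helper these constants feed is defined elsewhere in the package (util.go is not part of this diff). A sketch consistent with how it is called below, roughly doubling failureWait per failed round and capping growth at maxFailureScale rounds; treat the exact scaling as an assumption, not the package's literal implementation:

package main

import (
	"fmt"
	"time"
)

// backoff scales base by a power of two that grows with round,
// capped at limit, so repeated failures wait longer but never unboundedly.
func backoff(base time.Duration, round, limit uint64) time.Duration {
	power := round
	if power > limit {
		power = limit
	}
	for power > 2 {
		base *= 2
		power--
	}
	return base
}

func main() {
	for f := uint64(1); f <= 13; f += 4 {
		fmt.Printf("failures=%2d wait=%v\n", f, backoff(10*time.Millisecond, f, 12))
	}
}
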
-
-var (
- // ErrLogNotFound indicates a given log entry is not available.
- ErrLogNotFound = errors.New("log not found")
-
- // ErrPipelineReplicationNotSupported can be returned by the transport to
- // signal that pipeline replication is not supported in general, and that
- // no error message should be produced.
- ErrPipelineReplicationNotSupported = errors.New("pipeline replication not supported")
-)
-
-// followerReplication is in charge of sending snapshots and log entries from
-// this leader during this particular term to a remote follower.
-type followerReplication struct {
- // peer contains the network address and ID of the remote follower.
- peer Server
-
- // commitment tracks the entries acknowledged by followers so that the
-// leader's commit index can advance. It is updated on successful
- // AppendEntries responses.
- commitment *commitment
-
- // stopCh is notified/closed when this leader steps down or the follower is
- // removed from the cluster. In the follower removed case, it carries a log
- // index; replication should be attempted with a best effort up through that
- // index, before exiting.
- stopCh chan uint64
- // triggerCh is notified every time new entries are appended to the log.
- triggerCh chan struct{}
-
- // currentTerm is the term of this leader, to be included in AppendEntries
- // requests.
- currentTerm uint64
- // nextIndex is the index of the next log entry to send to the follower,
- // which may fall past the end of the log.
- nextIndex uint64
-
- // lastContact is updated to the current time whenever any response is
- // received from the follower (successful or not). This is used to check
- // whether the leader should step down (Raft.checkLeaderLease()).
- lastContact time.Time
- // lastContactLock protects 'lastContact'.
- lastContactLock sync.RWMutex
-
- // failures counts the number of failed RPCs since the last success, which is
- // used to apply backoff.
- failures uint64
-
- // notifyCh is notified to send out a heartbeat, which is used to check that
- // this server is still leader.
- notifyCh chan struct{}
- // notify is a list of futures to be resolved upon receipt of an
- // acknowledgement, then cleared from this list.
- notify []*verifyFuture
- // notifyLock protects 'notify'.
- notifyLock sync.Mutex
-
- // stepDown is used to indicate to the leader that we
- // should step down based on information from a follower.
- stepDown chan struct{}
-
- // allowPipeline is used to determine when to pipeline the AppendEntries RPCs.
- // It is private to this replication goroutine.
- allowPipeline bool
-}
-
-// notifyAll is used to notify all the waiting verify futures
-// whether the follower believes we are still the leader.
-func (s *followerReplication) notifyAll(leader bool) {
- // Clear the waiting notifies while minimizing lock hold time
- s.notifyLock.Lock()
- n := s.notify
- s.notify = nil
- s.notifyLock.Unlock()
-
- // Submit our votes
- for _, v := range n {
- v.vote(leader)
- }
-}
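
notifyAll uses a common lock-scoping trick: swap the shared slice out under the mutex, then invoke the callbacks with no lock held, so a slow future cannot block goroutines appending new waiters. The pattern in isolation, with illustrative types:

package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	mu      sync.Mutex
	waiters []func(bool)
}

func (n *notifier) add(f func(bool)) {
	n.mu.Lock()
	n.waiters = append(n.waiters, f)
	n.mu.Unlock()
}

// notifyAll takes the slice under the lock but runs callbacks outside it.
func (n *notifier) notifyAll(ok bool) {
	n.mu.Lock()
	w := n.waiters
	n.waiters = nil
	n.mu.Unlock()
	for _, f := range w {
		f(ok)
	}
}

func main() {
	var n notifier
	n.add(func(ok bool) { fmt.Println("still leader?", ok) })
	n.notifyAll(true)
}
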
-
-// LastContact returns the time of last contact.
-func (s *followerReplication) LastContact() time.Time {
- s.lastContactLock.RLock()
- last := s.lastContact
- s.lastContactLock.RUnlock()
- return last
-}
-
-// setLastContact sets the last contact to the current time.
-func (s *followerReplication) setLastContact() {
- s.lastContactLock.Lock()
- s.lastContact = time.Now()
- s.lastContactLock.Unlock()
-}
-
-// replicate is a long running routine that replicates log entries to a single
-// follower.
-func (r *Raft) replicate(s *followerReplication) {
- // Start an async heartbeating routine
- stopHeartbeat := make(chan struct{})
- defer close(stopHeartbeat)
- r.goFunc(func() { r.heartbeat(s, stopHeartbeat) })
-
-RPC:
- shouldStop := false
- for !shouldStop {
- select {
- case maxIndex := <-s.stopCh:
- // Make a best effort to replicate up to this index
- if maxIndex > 0 {
- r.replicateTo(s, maxIndex)
- }
- return
- case <-s.triggerCh:
- lastLogIdx, _ := r.getLastLog()
- shouldStop = r.replicateTo(s, lastLogIdx)
- case <-randomTimeout(r.conf.CommitTimeout): // periodic tick so followers learn the latest commit index even when the log is idle
- lastLogIdx, _ := r.getLastLog()
- shouldStop = r.replicateTo(s, lastLogIdx)
- }
-
- // If things look healthy, switch to pipeline mode
- if !shouldStop && s.allowPipeline {
- goto PIPELINE
- }
- }
- return
-
-PIPELINE:
- // Disable until re-enabled
- s.allowPipeline = false
-
- // Replicates using a pipeline for high performance. This method
- // is not able to gracefully recover from errors, and so we fall back
- // to standard mode on failure.
- if err := r.pipelineReplicate(s); err != nil {
- if err != ErrPipelineReplicationNotSupported {
- r.logger.Printf("[ERR] raft: Failed to start pipeline replication to %s: %s", s.peer, err)
- }
- }
- goto RPC
-}
-
-// replicateTo is a helper to replicate(), used to replicate the logs up to a
-// given last index.
-// If the follower log is behind, we take care to bring them up to date.
-func (r *Raft) replicateTo(s *followerReplication, lastIndex uint64) (shouldStop bool) {
- // Create the base request
- var req AppendEntriesRequest
- var resp AppendEntriesResponse
- var start time.Time
-START:
- // Prevent an excessive retry rate on errors
- if s.failures > 0 {
- select {
- case <-time.After(backoff(failureWait, s.failures, maxFailureScale)):
- case <-r.shutdownCh:
- }
- }
-
- // Setup the request
- if err := r.setupAppendEntries(s, &req, s.nextIndex, lastIndex); err == ErrLogNotFound {
- goto SEND_SNAP
- } else if err != nil {
- return
- }
-
- // Make the RPC call
- start = time.Now()
- if err := r.trans.AppendEntries(s.peer.Address, &req, &resp); err != nil {
- r.logger.Printf("[ERR] raft: Failed to AppendEntries to %v: %v", s.peer, err)
- s.failures++
- return
- }
- appendStats(string(s.peer.ID), start, float32(len(req.Entries)))
-
- // Check for a newer term, stop running
- if resp.Term > req.Term {
- r.handleStaleTerm(s)
- return true
- }
-
- // Update the last contact
- s.setLastContact()
-
- // Update s based on success
- if resp.Success {
- // Update our replication state
- updateLastAppended(s, &req)
-
- // Clear any failures, allow pipelining
- s.failures = 0
- s.allowPipeline = true
- } else {
- s.nextIndex = max(min(s.nextIndex-1, resp.LastLog+1), 1)
- if resp.NoRetryBackoff {
- s.failures = 0
- } else {
- s.failures++
- }
- r.logger.Printf("[WARN] raft: AppendEntries to %v rejected, sending older logs (next: %d)", s.peer, s.nextIndex)
- }
-
-CHECK_MORE:
- // Poll the stop channel here in case we are looping and have been asked
- // to stop, or have stepped down as leader. Even for the best effort case
- // where we are asked to replicate to a given index and then shut down,
- // it's better to not loop in here to send lots of entries to a straggler
- // that's leaving the cluster anyway.
- select {
- case <-s.stopCh:
- return true
- default:
- }
-
- // Check if there are more logs to replicate
- if s.nextIndex <= lastIndex {
- goto START
- }
- return
-
- // SEND_SNAP is used when we fail to get a log, usually because the follower
- // is too far behind, and we must ship a snapshot down instead
-SEND_SNAP:
- if stop, err := r.sendLatestSnapshot(s); stop {
- return true
- } else if err != nil {
- r.logger.Printf("[ERR] raft: Failed to send snapshot to %v: %v", s.peer, err)
- return
- }
-
- // Check if there is more to replicate
- goto CHECK_MORE
-}
-
-// sendLatestSnapshot is used to send the latest snapshot we have
-// down to our follower.
-func (r *Raft) sendLatestSnapshot(s *followerReplication) (bool, error) {
- // Get the snapshots
- snapshots, err := r.snapshots.List()
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to list snapshots: %v", err)
- return false, err
- }
-
- // Check we have at least a single snapshot
- if len(snapshots) == 0 {
- return false, fmt.Errorf("no snapshots found")
- }
-
- // Open the most recent snapshot
- snapID := snapshots[0].ID
- meta, snapshot, err := r.snapshots.Open(snapID)
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to open snapshot %v: %v", snapID, err)
- return false, err
- }
- defer snapshot.Close()
-
- // Setup the request
- req := InstallSnapshotRequest{
- RPCHeader: r.getRPCHeader(),
- SnapshotVersion: meta.Version,
- Term: s.currentTerm,
- Leader: r.trans.EncodePeer(r.localAddr),
- LastLogIndex: meta.Index,
- LastLogTerm: meta.Term,
- Peers: meta.Peers,
- Size: meta.Size,
- Configuration: encodeConfiguration(meta.Configuration),
- ConfigurationIndex: meta.ConfigurationIndex,
- }
-
- // Make the call
- start := time.Now()
- var resp InstallSnapshotResponse
- if err := r.trans.InstallSnapshot(s.peer.Address, &req, &resp, snapshot); err != nil {
- r.logger.Printf("[ERR] raft: Failed to install snapshot %v: %v", snapID, err)
- s.failures++
- return false, err
- }
- metrics.MeasureSince([]string{"raft", "replication", "installSnapshot", string(s.peer.ID)}, start)
-
- // Check for a newer term, stop running
- if resp.Term > req.Term {
- r.handleStaleTerm(s)
- return true, nil
- }
-
- // Update the last contact
- s.setLastContact()
-
- // Check for success
- if resp.Success {
- // Update the indexes
- s.nextIndex = meta.Index + 1
- s.commitment.match(s.peer.ID, meta.Index)
-
- // Clear any failures
- s.failures = 0
-
- // Notify we are still leader
- s.notifyAll(true)
- } else {
- s.failures++
- r.logger.Printf("[WARN] raft: InstallSnapshot to %v rejected", s.peer)
- }
- return false, nil
-}
-
-// heartbeat is used to periodically invoke AppendEntries on a peer
-// to ensure they don't time out. This is done asynchronously from replicate(),
-// since that routine could potentially be blocked on disk IO.
-func (r *Raft) heartbeat(s *followerReplication, stopCh chan struct{}) {
- var failures uint64
- req := AppendEntriesRequest{
- RPCHeader: r.getRPCHeader(),
- Term: s.currentTerm,
- Leader: r.trans.EncodePeer(r.localAddr),
- }
- var resp AppendEntriesResponse
- for {
- // Wait for the next heartbeat interval or forced notify
- select {
- case <-s.notifyCh:
- case <-randomTimeout(r.conf.HeartbeatTimeout / 10):
- case <-stopCh:
- return
- }
-
- start := time.Now()
- if err := r.trans.AppendEntries(s.peer.Address, &req, &resp); err != nil {
- r.logger.Printf("[ERR] raft: Failed to heartbeat to %v: %v", s.peer.Address, err)
- failures++
- select {
- case <-time.After(backoff(failureWait, failures, maxFailureScale)):
- case <-stopCh:
- }
- } else {
- s.setLastContact()
- failures = 0
- metrics.MeasureSince([]string{"raft", "replication", "heartbeat", string(s.peer.ID)}, start)
- s.notifyAll(resp.Success)
- }
- }
-}
-
-// pipelineReplicate is used when we have synchronized our state with the follower,
-// and want to switch to a higher performance pipeline mode of replication.
-// We only pipeline AppendEntries commands, and if we ever hit an error, we fall
-// back to the standard replication which can handle more complex situations.
-func (r *Raft) pipelineReplicate(s *followerReplication) error {
- // Create a new pipeline
- pipeline, err := r.trans.AppendEntriesPipeline(s.peer.Address)
- if err != nil {
- return err
- }
- defer pipeline.Close()
-
- // Log start and stop of pipeline
- r.logger.Printf("[INFO] raft: pipelining replication to peer %v", s.peer)
- defer r.logger.Printf("[INFO] raft: aborting pipeline replication to peer %v", s.peer)
-
- // Create a shutdown and finish channel
- stopCh := make(chan struct{})
- finishCh := make(chan struct{})
-
- // Start a dedicated decoder
- r.goFunc(func() { r.pipelineDecode(s, pipeline, stopCh, finishCh) })
-
- // Start pipeline sends at the last good nextIndex
- nextIndex := s.nextIndex
-
- shouldStop := false
-SEND:
- for !shouldStop {
- select {
- case <-finishCh:
- break SEND
- case maxIndex := <-s.stopCh:
- // Make a best effort to replicate up to this index
- if maxIndex > 0 {
- r.pipelineSend(s, pipeline, &nextIndex, maxIndex)
- }
- break SEND
- case <-s.triggerCh:
- lastLogIdx, _ := r.getLastLog()
- shouldStop = r.pipelineSend(s, pipeline, &nextIndex, lastLogIdx)
- case <-randomTimeout(r.conf.CommitTimeout):
- lastLogIdx, _ := r.getLastLog()
- shouldStop = r.pipelineSend(s, pipeline, &nextIndex, lastLogIdx)
- }
- }
-
- // Stop our decoder, and wait for it to finish
- close(stopCh)
- select {
- case <-finishCh:
- case <-r.shutdownCh:
- }
- return nil
-}
-
-// pipelineSend is used to send data over a pipeline. It is a helper to
-// pipelineReplicate.
-func (r *Raft) pipelineSend(s *followerReplication, p AppendPipeline, nextIdx *uint64, lastIndex uint64) (shouldStop bool) {
- // Create a new append request
- req := new(AppendEntriesRequest)
- if err := r.setupAppendEntries(s, req, *nextIdx, lastIndex); err != nil {
- return true
- }
-
- // Pipeline the append entries
- if _, err := p.AppendEntries(req, new(AppendEntriesResponse)); err != nil {
- r.logger.Printf("[ERR] raft: Failed to pipeline AppendEntries to %v: %v", s.peer, err)
- return true
- }
-
- // Increase the next send log to avoid re-sending old logs
- if n := len(req.Entries); n > 0 {
- last := req.Entries[n-1]
- *nextIdx = last.Index + 1
- }
- return false
-}
-
-// pipelineDecode is used to decode the responses of pipelined requests.
-func (r *Raft) pipelineDecode(s *followerReplication, p AppendPipeline, stopCh, finishCh chan struct{}) {
- defer close(finishCh)
- respCh := p.Consumer()
- for {
- select {
- case ready := <-respCh:
- req, resp := ready.Request(), ready.Response()
- appendStats(string(s.peer.ID), ready.Start(), float32(len(req.Entries)))
-
- // Check for a newer term, stop running
- if resp.Term > req.Term {
- r.handleStaleTerm(s)
- return
- }
-
- // Update the last contact
- s.setLastContact()
-
- // Abort pipeline if not successful
- if !resp.Success {
- return
- }
-
- // Update our replication state
- updateLastAppended(s, req)
- case <-stopCh:
- return
- }
- }
-}
-
-// setupAppendEntries is used to setup an append entries request.
-func (r *Raft) setupAppendEntries(s *followerReplication, req *AppendEntriesRequest, nextIndex, lastIndex uint64) error {
- req.RPCHeader = r.getRPCHeader()
- req.Term = s.currentTerm
- req.Leader = r.trans.EncodePeer(r.localAddr)
- req.LeaderCommitIndex = r.getCommitIndex()
- if err := r.setPreviousLog(req, nextIndex); err != nil {
- return err
- }
- if err := r.setNewLogs(req, nextIndex, lastIndex); err != nil {
- return err
- }
- return nil
-}
-
-// setPreviousLog is used to setup the PrevLogEntry and PrevLogTerm for an
-// AppendEntriesRequest given the next index to replicate.
-func (r *Raft) setPreviousLog(req *AppendEntriesRequest, nextIndex uint64) error {
- // Guard for the first index, since there is no 0 log entry
- // Guard against the previous index being a snapshot as well
- lastSnapIdx, lastSnapTerm := r.getLastSnapshot()
- if nextIndex == 1 {
- req.PrevLogEntry = 0
- req.PrevLogTerm = 0
-
- } else if (nextIndex - 1) == lastSnapIdx {
- req.PrevLogEntry = lastSnapIdx
- req.PrevLogTerm = lastSnapTerm
-
- } else {
- var l Log
- if err := r.logs.GetLog(nextIndex-1, &l); err != nil {
- r.logger.Printf("[ERR] raft: Failed to get log at index %d: %v",
- nextIndex-1, err)
- return err
- }
-
-		// Set the previous index and term
- req.PrevLogEntry = l.Index
- req.PrevLogTerm = l.Term
- }
- return nil
-}
-
-// setNewLogs is used to setup the logs which should be appended for a request.
-func (r *Raft) setNewLogs(req *AppendEntriesRequest, nextIndex, lastIndex uint64) error {
- // Append up to MaxAppendEntries or up to the lastIndex
- req.Entries = make([]*Log, 0, r.conf.MaxAppendEntries)
- maxIndex := min(nextIndex+uint64(r.conf.MaxAppendEntries)-1, lastIndex)
- for i := nextIndex; i <= maxIndex; i++ {
- oldLog := new(Log)
- if err := r.logs.GetLog(i, oldLog); err != nil {
- r.logger.Printf("[ERR] raft: Failed to get log at index %d: %v", i, err)
- return err
- }
- req.Entries = append(req.Entries, oldLog)
- }
- return nil
-}
-
-// appendStats is used to emit stats about an AppendEntries invocation.
-func appendStats(peer string, start time.Time, logs float32) {
- metrics.MeasureSince([]string{"raft", "replication", "appendEntries", "rpc", peer}, start)
- metrics.IncrCounter([]string{"raft", "replication", "appendEntries", "logs", peer}, logs)
-}
-
-// handleStaleTerm is used when a follower indicates that we have a stale term.
-func (r *Raft) handleStaleTerm(s *followerReplication) {
- r.logger.Printf("[ERR] raft: peer %v has newer term, stopping replication", s.peer)
- s.notifyAll(false) // No longer leader
- asyncNotifyCh(s.stepDown)
-}
-
-// updateLastAppended is used to update follower replication state after a
-// successful AppendEntries RPC.
-// TODO: This isn't used during InstallSnapshot, but the code there is similar.
-func updateLastAppended(s *followerReplication, req *AppendEntriesRequest) {
- // Mark any inflight logs as committed
- if logs := req.Entries; len(logs) > 0 {
- last := logs[len(logs)-1]
- s.nextIndex = last.Index + 1
- s.commitment.match(s.peer.ID, last.Index)
- }
-
- // Notify still leader
- s.notifyAll(true)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/snapshot.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/snapshot.go
deleted file mode 100644
index 5287ebc4..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/snapshot.go
+++ /dev/null
@@ -1,239 +0,0 @@
-package raft
-
-import (
- "fmt"
- "io"
- "time"
-
- "github.com/armon/go-metrics"
-)
-
-// SnapshotMeta is for metadata of a snapshot.
-type SnapshotMeta struct {
- // Version is the version number of the snapshot metadata. This does not cover
-	// the application's data in the snapshot, which should be versioned
- // separately.
- Version SnapshotVersion
-
- // ID is opaque to the store, and is used for opening.
- ID string
-
- // Index and Term store when the snapshot was taken.
- Index uint64
- Term uint64
-
- // Peers is deprecated and used to support version 0 snapshots, but will
- // be populated in version 1 snapshots as well to help with upgrades.
- Peers []byte
-
- // Configuration and ConfigurationIndex are present in version 1
- // snapshots and later.
- Configuration Configuration
- ConfigurationIndex uint64
-
- // Size is the size of the snapshot in bytes.
- Size int64
-}
-
-// SnapshotStore interface is used to allow for flexible implementations
-// of snapshot storage and retrieval. For example, a client could implement
-// a shared state store such as S3, allowing new nodes to restore snapshots
-// without streaming from the leader.
-type SnapshotStore interface {
- // Create is used to begin a snapshot at a given index and term, and with
- // the given committed configuration. The version parameter controls
- // which snapshot version to create.
- Create(version SnapshotVersion, index, term uint64, configuration Configuration,
- configurationIndex uint64, trans Transport) (SnapshotSink, error)
-
- // List is used to list the available snapshots in the store.
-	// It should return them in descending order, with the highest index first.
- List() ([]*SnapshotMeta, error)
-
- // Open takes a snapshot ID and provides a ReadCloser. Once close is
- // called it is assumed the snapshot is no longer needed.
- Open(id string) (*SnapshotMeta, io.ReadCloser, error)
-}
-
-// SnapshotSink is returned by StartSnapshot. The FSM will Write state
-// to the sink and call Close on completion. On error, Cancel will be invoked.
-type SnapshotSink interface {
- io.WriteCloser
- ID() string
- Cancel() error
-}
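For orientation, the Write/Close/Cancel contract described above looks like this from the FSM side. This is a minimal sketch, not code from this tree; the `kvSnapshot` type and its pre-serialized `data` field are hypothetical:

```go
// Minimal sketch of an FSM snapshot persisting through a SnapshotSink.
// kvSnapshot and its data field are hypothetical.
func (s *kvSnapshot) Persist(sink raft.SnapshotSink) error {
	if _, err := sink.Write(s.data); err != nil {
		sink.Cancel() // abandon the partially written snapshot
		return err
	}
	return sink.Close() // finalize; the store records it under sink.ID()
}
```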
-
-// runSnapshots is a long running goroutine used to manage taking
-// new snapshots of the FSM. It runs in parallel to the FSM and
-// main goroutines, so that snapshots do not block normal operation.
-func (r *Raft) runSnapshots() {
- for {
- select {
- case <-randomTimeout(r.conf.SnapshotInterval):
- // Check if we should snapshot
- if !r.shouldSnapshot() {
- continue
- }
-
- // Trigger a snapshot
- if _, err := r.takeSnapshot(); err != nil {
- r.logger.Printf("[ERR] raft: Failed to take snapshot: %v", err)
- }
-
- case future := <-r.userSnapshotCh:
- // User-triggered, run immediately
- id, err := r.takeSnapshot()
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to take snapshot: %v", err)
- } else {
- future.opener = func() (*SnapshotMeta, io.ReadCloser, error) {
- return r.snapshots.Open(id)
- }
- }
- future.respond(err)
-
- case <-r.shutdownCh:
- return
- }
- }
-}
-
-// shouldSnapshot checks if we meet the conditions to take
-// a new snapshot.
-func (r *Raft) shouldSnapshot() bool {
- // Check the last snapshot index
- lastSnap, _ := r.getLastSnapshot()
-
- // Check the last log index
- lastIdx, err := r.logs.LastIndex()
- if err != nil {
- r.logger.Printf("[ERR] raft: Failed to get last log index: %v", err)
- return false
- }
-
- // Compare the delta to the threshold
- delta := lastIdx - lastSnap
- return delta >= r.conf.SnapshotThreshold
-}
-
-// takeSnapshot is used to take a new snapshot. This must only be called from
-// the snapshot thread, never the main thread. This returns the ID of the new
-// snapshot, along with an error.
-func (r *Raft) takeSnapshot() (string, error) {
- defer metrics.MeasureSince([]string{"raft", "snapshot", "takeSnapshot"}, time.Now())
-
- // Create a request for the FSM to perform a snapshot.
- snapReq := &reqSnapshotFuture{}
- snapReq.init()
-
- // Wait for dispatch or shutdown.
- select {
- case r.fsmSnapshotCh <- snapReq:
- case <-r.shutdownCh:
- return "", ErrRaftShutdown
- }
-
- // Wait until we get a response
- if err := snapReq.Error(); err != nil {
- if err != ErrNothingNewToSnapshot {
- err = fmt.Errorf("failed to start snapshot: %v", err)
- }
- return "", err
- }
- defer snapReq.snapshot.Release()
-
- // Make a request for the configurations and extract the committed info.
- // We have to use the future here to safely get this information since
- // it is owned by the main thread.
- configReq := &configurationsFuture{}
- configReq.init()
- select {
- case r.configurationsCh <- configReq:
- case <-r.shutdownCh:
- return "", ErrRaftShutdown
- }
- if err := configReq.Error(); err != nil {
- return "", err
- }
- committed := configReq.configurations.committed
- committedIndex := configReq.configurations.committedIndex
-
- // We don't support snapshots while there's a config change outstanding
- // since the snapshot doesn't have a means to represent this state. This
- // is a little weird because we need the FSM to apply an index that's
- // past the configuration change, even though the FSM itself doesn't see
- // the configuration changes. It should be ok in practice with normal
- // application traffic flowing through the FSM. If there's none of that
- // then it's not crucial that we snapshot, since there's not much going
- // on Raft-wise.
- if snapReq.index < committedIndex {
- return "", fmt.Errorf("cannot take snapshot now, wait until the configuration entry at %v has been applied (have applied %v)",
- committedIndex, snapReq.index)
- }
-
- // Create a new snapshot.
- r.logger.Printf("[INFO] raft: Starting snapshot up to %d", snapReq.index)
- start := time.Now()
- version := getSnapshotVersion(r.protocolVersion)
- sink, err := r.snapshots.Create(version, snapReq.index, snapReq.term, committed, committedIndex, r.trans)
- if err != nil {
- return "", fmt.Errorf("failed to create snapshot: %v", err)
- }
- metrics.MeasureSince([]string{"raft", "snapshot", "create"}, start)
-
- // Try to persist the snapshot.
- start = time.Now()
- if err := snapReq.snapshot.Persist(sink); err != nil {
- sink.Cancel()
- return "", fmt.Errorf("failed to persist snapshot: %v", err)
- }
- metrics.MeasureSince([]string{"raft", "snapshot", "persist"}, start)
-
- // Close and check for error.
- if err := sink.Close(); err != nil {
- return "", fmt.Errorf("failed to close snapshot: %v", err)
- }
-
- // Update the last stable snapshot info.
- r.setLastSnapshot(snapReq.index, snapReq.term)
-
- // Compact the logs.
- if err := r.compactLogs(snapReq.index); err != nil {
- return "", err
- }
-
- r.logger.Printf("[INFO] raft: Snapshot to %d complete", snapReq.index)
- return sink.ID(), nil
-}
-
-// compactLogs takes the last inclusive index of a snapshot
-// and trims the logs that are no longer needed.
-func (r *Raft) compactLogs(snapIdx uint64) error {
- defer metrics.MeasureSince([]string{"raft", "compactLogs"}, time.Now())
- // Determine log ranges to compact
- minLog, err := r.logs.FirstIndex()
- if err != nil {
- return fmt.Errorf("failed to get first log index: %v", err)
- }
-
- // Check if we have enough logs to truncate
- lastLogIdx, _ := r.getLastLog()
- if lastLogIdx <= r.conf.TrailingLogs {
- return nil
- }
-
- // Truncate up to the end of the snapshot, or `TrailingLogs`
-	// back from the head, whichever is further back. This ensures
- // at least `TrailingLogs` entries, but does not allow logs
- // after the snapshot to be removed.
- maxLog := min(snapIdx, lastLogIdx-r.conf.TrailingLogs)
-
- // Log this
- r.logger.Printf("[INFO] raft: Compacting logs from %d to %d", minLog, maxLog)
-
- // Compact the logs
- if err := r.logs.DeleteRange(minLog, maxLog); err != nil {
- return fmt.Errorf("log compaction failed: %v", err)
- }
- return nil
-}
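To make the truncation bound concrete, with assumed numbers: if lastLogIdx is 1000, TrailingLogs is 200, and the snapshot ends at index 950, then maxLog = min(950, 1000-200) = 800, so DeleteRange removes [minLog, 800] and the newest 200 entries stay on disk even though the snapshot already covers indexes up to 950.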
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/stable.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/stable.go
deleted file mode 100644
index ff59a8c5..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/stable.go
+++ /dev/null
@@ -1,15 +0,0 @@
-package raft
-
-// StableStore is used to provide stable storage
-// of key configurations to ensure safety.
-type StableStore interface {
- Set(key []byte, val []byte) error
-
- // Get returns the value for key, or an empty byte slice if key was not found.
- Get(key []byte) ([]byte, error)
-
- SetUint64(key []byte, val uint64) error
-
- // GetUint64 returns the uint64 value for key, or 0 if key was not found.
- GetUint64(key []byte) (uint64, error)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/state.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/state.go
deleted file mode 100644
index f6d658b8..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/state.go
+++ /dev/null
@@ -1,167 +0,0 @@
-package raft
-
-import (
- "sync"
- "sync/atomic"
-)
-
-// RaftState captures the state of a Raft node: Follower, Candidate, Leader,
-// or Shutdown.
-type RaftState uint32
-
-const (
- // Follower is the initial state of a Raft node.
- Follower RaftState = iota
-
- // Candidate is one of the valid states of a Raft node.
- Candidate
-
- // Leader is one of the valid states of a Raft node.
- Leader
-
- // Shutdown is the terminal state of a Raft node.
- Shutdown
-)
-
-func (s RaftState) String() string {
- switch s {
- case Follower:
- return "Follower"
- case Candidate:
- return "Candidate"
- case Leader:
- return "Leader"
- case Shutdown:
- return "Shutdown"
- default:
- return "Unknown"
- }
-}
-
-// raftState is used to maintain various state variables
-// and provides an interface to set/get the variables in a
-// thread safe manner.
-type raftState struct {
- // The current term, cache of StableStore
- currentTerm uint64
-
- // Highest committed log entry
- commitIndex uint64
-
- // Last applied log to the FSM
- lastApplied uint64
-
- // protects 4 next fields
- lastLock sync.Mutex
-
- // Cache the latest snapshot index/term
- lastSnapshotIndex uint64
- lastSnapshotTerm uint64
-
- // Cache the latest log from LogStore
- lastLogIndex uint64
- lastLogTerm uint64
-
- // Tracks running goroutines
- routinesGroup sync.WaitGroup
-
- // The current state
- state RaftState
-}
-
-func (r *raftState) getState() RaftState {
- stateAddr := (*uint32)(&r.state)
- return RaftState(atomic.LoadUint32(stateAddr))
-}
-
-func (r *raftState) setState(s RaftState) {
- stateAddr := (*uint32)(&r.state)
- atomic.StoreUint32(stateAddr, uint32(s))
-}
-
-func (r *raftState) getCurrentTerm() uint64 {
- return atomic.LoadUint64(&r.currentTerm)
-}
-
-func (r *raftState) setCurrentTerm(term uint64) {
- atomic.StoreUint64(&r.currentTerm, term)
-}
-
-func (r *raftState) getLastLog() (index, term uint64) {
- r.lastLock.Lock()
- index = r.lastLogIndex
- term = r.lastLogTerm
- r.lastLock.Unlock()
- return
-}
-
-func (r *raftState) setLastLog(index, term uint64) {
- r.lastLock.Lock()
- r.lastLogIndex = index
- r.lastLogTerm = term
- r.lastLock.Unlock()
-}
-
-func (r *raftState) getLastSnapshot() (index, term uint64) {
- r.lastLock.Lock()
- index = r.lastSnapshotIndex
- term = r.lastSnapshotTerm
- r.lastLock.Unlock()
- return
-}
-
-func (r *raftState) setLastSnapshot(index, term uint64) {
- r.lastLock.Lock()
- r.lastSnapshotIndex = index
- r.lastSnapshotTerm = term
- r.lastLock.Unlock()
-}
-
-func (r *raftState) getCommitIndex() uint64 {
- return atomic.LoadUint64(&r.commitIndex)
-}
-
-func (r *raftState) setCommitIndex(index uint64) {
- atomic.StoreUint64(&r.commitIndex, index)
-}
-
-func (r *raftState) getLastApplied() uint64 {
- return atomic.LoadUint64(&r.lastApplied)
-}
-
-func (r *raftState) setLastApplied(index uint64) {
- atomic.StoreUint64(&r.lastApplied, index)
-}
-
-// Start a goroutine and properly handle the race between a routine
-// starting and incrementing, and exiting and decrementing.
-func (r *raftState) goFunc(f func()) {
- r.routinesGroup.Add(1)
- go func() {
- defer r.routinesGroup.Done()
- f()
- }()
-}
-
-func (r *raftState) waitShutdown() {
- r.routinesGroup.Wait()
-}
-
-// getLastIndex returns the last index in stable storage.
-// Either from the last log or from the last snapshot.
-func (r *raftState) getLastIndex() uint64 {
- r.lastLock.Lock()
- defer r.lastLock.Unlock()
- return max(r.lastLogIndex, r.lastSnapshotIndex)
-}
-
-// getLastEntry returns the last index and term in stable storage.
-// Either from the last log or from the last snapshot.
-func (r *raftState) getLastEntry() (uint64, uint64) {
- r.lastLock.Lock()
- defer r.lastLock.Unlock()
- if r.lastLogIndex >= r.lastSnapshotIndex {
- return r.lastLogIndex, r.lastLogTerm
- }
- return r.lastSnapshotIndex, r.lastSnapshotTerm
-}
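The goFunc/waitShutdown pair above is a small pattern worth noting: Add is called before the goroutine starts, so a concurrent Wait can never miss a routine that is about to launch. A standalone sketch of the same pattern, with illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

type group struct{ wg sync.WaitGroup }

// goFunc registers with the WaitGroup *before* starting the goroutine,
// avoiding the race between Add and Wait.
func (g *group) goFunc(f func()) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		f()
	}()
}

func main() {
	var g group
	for i := 0; i < 3; i++ {
		i := i
		g.goFunc(func() { fmt.Println("worker", i) })
	}
	g.wg.Wait() // analogous to waitShutdown
}
```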
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/tcp_transport.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/tcp_transport.go
deleted file mode 100644
index 9281508a..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/tcp_transport.go
+++ /dev/null
@@ -1,105 +0,0 @@
-package raft
-
-import (
- "errors"
- "io"
- "log"
- "net"
- "time"
-)
-
-var (
- errNotAdvertisable = errors.New("local bind address is not advertisable")
- errNotTCP = errors.New("local address is not a TCP address")
-)
-
-// TCPStreamLayer implements StreamLayer interface for plain TCP.
-type TCPStreamLayer struct {
- advertise net.Addr
- listener *net.TCPListener
-}
-
-// NewTCPTransport returns a NetworkTransport that is built on top of
-// a TCP streaming transport layer.
-func NewTCPTransport(
- bindAddr string,
- advertise net.Addr,
- maxPool int,
- timeout time.Duration,
- logOutput io.Writer,
-) (*NetworkTransport, error) {
- return newTCPTransport(bindAddr, advertise, maxPool, timeout, func(stream StreamLayer) *NetworkTransport {
- return NewNetworkTransport(stream, maxPool, timeout, logOutput)
- })
-}
-
-// NewTCPTransportWithLogger returns a NetworkTransport that is built on top of
-// a TCP streaming transport layer, with log output going to the supplied Logger
-func NewTCPTransportWithLogger(
- bindAddr string,
- advertise net.Addr,
- maxPool int,
- timeout time.Duration,
- logger *log.Logger,
-) (*NetworkTransport, error) {
- return newTCPTransport(bindAddr, advertise, maxPool, timeout, func(stream StreamLayer) *NetworkTransport {
- return NewNetworkTransportWithLogger(stream, maxPool, timeout, logger)
- })
-}
-
-func newTCPTransport(bindAddr string,
- advertise net.Addr,
- maxPool int,
- timeout time.Duration,
- transportCreator func(stream StreamLayer) *NetworkTransport) (*NetworkTransport, error) {
- // Try to bind
- list, err := net.Listen("tcp", bindAddr)
- if err != nil {
- return nil, err
- }
-
- // Create stream
- stream := &TCPStreamLayer{
- advertise: advertise,
- listener: list.(*net.TCPListener),
- }
-
- // Verify that we have a usable advertise address
- addr, ok := stream.Addr().(*net.TCPAddr)
- if !ok {
- list.Close()
- return nil, errNotTCP
- }
- if addr.IP.IsUnspecified() {
- list.Close()
- return nil, errNotAdvertisable
- }
-
- // Create the network transport
- trans := transportCreator(stream)
- return trans, nil
-}
-
-// Dial implements the StreamLayer interface.
-func (t *TCPStreamLayer) Dial(address ServerAddress, timeout time.Duration) (net.Conn, error) {
- return net.DialTimeout("tcp", string(address), timeout)
-}
-
-// Accept implements the net.Listener interface.
-func (t *TCPStreamLayer) Accept() (c net.Conn, err error) {
- return t.listener.Accept()
-}
-
-// Close implements the net.Listener interface.
-func (t *TCPStreamLayer) Close() (err error) {
- return t.listener.Close()
-}
-
-// Addr implements the net.Listener interface.
-func (t *TCPStreamLayer) Addr() net.Addr {
- // Use an advertise addr if provided
- if t.advertise != nil {
- return t.advertise
- }
- return t.listener.Addr()
-}
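As a usage sketch for the constructor above (the addresses, pool size, and timeout are illustrative assumptions, not values from this diff):

```go
package main

import (
	"log"
	"net"
	"os"
	"time"

	"github.com/hashicorp/raft"
)

func main() {
	// The advertise address must not be unspecified, per the check above.
	advertise, err := net.ResolveTCPAddr("tcp", "10.0.0.5:8300")
	if err != nil {
		log.Fatal(err)
	}
	trans, err := raft.NewTCPTransport(
		"0.0.0.0:8300", // bind address
		advertise,      // advertised address
		3,              // connection pool size per peer
		10*time.Second, // I/O timeout
		os.Stderr,      // log output
	)
	if err != nil {
		log.Fatal(err)
	}
	defer trans.Close() // NetworkTransport supports WithClose
	log.Println("transport listening as", trans.LocalAddr())
}
```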
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/transport.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/transport.go
deleted file mode 100644
index 633f97a8..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/transport.go
+++ /dev/null
@@ -1,124 +0,0 @@
-package raft
-
-import (
- "io"
- "time"
-)
-
-// RPCResponse captures both a response and a potential error.
-type RPCResponse struct {
- Response interface{}
- Error error
-}
-
-// RPC has a command, and provides a response mechanism.
-type RPC struct {
- Command interface{}
- Reader io.Reader // Set only for InstallSnapshot
- RespChan chan<- RPCResponse
-}
-
-// Respond is used to respond with a response, error or both
-func (r *RPC) Respond(resp interface{}, err error) {
- r.RespChan <- RPCResponse{resp, err}
-}
-
-// Transport provides an interface for network transports
-// to allow Raft to communicate with other nodes.
-type Transport interface {
- // Consumer returns a channel that can be used to
- // consume and respond to RPC requests.
- Consumer() <-chan RPC
-
- // LocalAddr is used to return our local address to distinguish from our peers.
- LocalAddr() ServerAddress
-
- // AppendEntriesPipeline returns an interface that can be used to pipeline
- // AppendEntries requests.
- AppendEntriesPipeline(target ServerAddress) (AppendPipeline, error)
-
- // AppendEntries sends the appropriate RPC to the target node.
- AppendEntries(target ServerAddress, args *AppendEntriesRequest, resp *AppendEntriesResponse) error
-
- // RequestVote sends the appropriate RPC to the target node.
- RequestVote(target ServerAddress, args *RequestVoteRequest, resp *RequestVoteResponse) error
-
- // InstallSnapshot is used to push a snapshot down to a follower. The data is read from
- // the ReadCloser and streamed to the client.
- InstallSnapshot(target ServerAddress, args *InstallSnapshotRequest, resp *InstallSnapshotResponse, data io.Reader) error
-
- // EncodePeer is used to serialize a peer's address.
- EncodePeer(ServerAddress) []byte
-
- // DecodePeer is used to deserialize a peer's address.
- DecodePeer([]byte) ServerAddress
-
- // SetHeartbeatHandler is used to setup a heartbeat handler
- // as a fast-pass. This is to avoid head-of-line blocking from
- // disk IO. If a Transport does not support this, it can simply
- // ignore the call, and push the heartbeat onto the Consumer channel.
- SetHeartbeatHandler(cb func(rpc RPC))
-}
-
-// WithClose is an interface that a transport may provide which
-// allows a transport to be shut down cleanly when a Raft instance
-// shuts down.
-//
-// It is defined separately from Transport because, unfortunately, it wasn't
-// part of the original interface specification.
-type WithClose interface {
- // Close permanently closes a transport, stopping
- // any associated goroutines and freeing other resources.
- Close() error
-}
-
-// LoopbackTransport is an interface that provides a loopback transport suitable for testing
-// e.g. InmemTransport. It's there so we don't have to rewrite tests.
-type LoopbackTransport interface {
- Transport // Embedded transport reference
- WithPeers // Embedded peer management
- WithClose // with a close routine
-}
-
-// WithPeers is an interface that a transport may provide which allows for connection and
-// disconnection. Unless the transport is a loopback transport, the transport specified to
-// "Connect" is likely to be nil.
-type WithPeers interface {
- Connect(peer ServerAddress, t Transport) // Connect a peer
- Disconnect(peer ServerAddress) // Disconnect a given peer
- DisconnectAll() // Disconnect all peers, possibly to reconnect them later
-}
-
-// AppendPipeline is used for pipelining AppendEntries requests. It is used
-// to increase the replication throughput by masking latency and better
-// utilizing bandwidth.
-type AppendPipeline interface {
- // AppendEntries is used to add another request to the pipeline.
- // The send may block which is an effective form of back-pressure.
- AppendEntries(args *AppendEntriesRequest, resp *AppendEntriesResponse) (AppendFuture, error)
-
- // Consumer returns a channel that can be used to consume
- // response futures when they are ready.
- Consumer() <-chan AppendFuture
-
- // Close closes the pipeline and cancels all inflight RPCs
- Close() error
-}
-
-// AppendFuture is used to return information about a pipelined AppendEntries request.
-type AppendFuture interface {
- Future
-
- // Start returns the time that the append request was started.
- // It is always OK to call this method.
- Start() time.Time
-
- // Request holds the parameters of the AppendEntries call.
- // It is always OK to call this method.
- Request() *AppendEntriesRequest
-
- // Response holds the results of the AppendEntries call.
- // This method must only be called after the Error
- // method returns, and will only be valid on success.
- Response() *AppendEntriesResponse
-}
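Start, Request, and Response are exactly what the deleted pipelineDecode loop consumed earlier in this diff. A hedged sketch of such a consumer, assuming the embedded Future exposes an `Error() error` method (as the Response doc comment implies) and hypothetical pipeline/stop channels; assumes `import ("log"; "time"; "github.com/hashicorp/raft")`:

```go
func drainPipeline(p raft.AppendPipeline, stopCh <-chan struct{}) {
	respCh := p.Consumer()
	for {
		select {
		case f := <-respCh:
			// Error() must return before Response() is valid.
			if err := f.Error(); err != nil {
				log.Printf("append failed after %s: %v", time.Since(f.Start()), err)
				return
			}
			log.Printf("replicated %d entries, success=%v",
				len(f.Request().Entries), f.Response().Success)
		case <-stopCh:
			return
		}
	}
}
```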
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/util.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/util.go
deleted file mode 100644
index 90428d74..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/hashicorp/raft/util.go
+++ /dev/null
@@ -1,133 +0,0 @@
-package raft
-
-import (
- "bytes"
- crand "crypto/rand"
- "fmt"
- "math"
- "math/big"
- "math/rand"
- "time"
-
- "github.com/hashicorp/go-msgpack/codec"
-)
-
-func init() {
-	// Ensure we use a high-entropy seed for the pseudo-random generator
- rand.Seed(newSeed())
-}
-
-// newSeed returns an int64 from a crypto random source;
-// it can be used to seed math/rand.
-func newSeed() int64 {
- r, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
- if err != nil {
- panic(fmt.Errorf("failed to read random bytes: %v", err))
- }
- return r.Int64()
-}
-
-// randomTimeout returns a value that is between minVal and 2x minVal.
-func randomTimeout(minVal time.Duration) <-chan time.Time {
- if minVal == 0 {
- return nil
- }
- extra := (time.Duration(rand.Int63()) % minVal)
- return time.After(minVal + extra)
-}
-
-// min returns the minimum.
-func min(a, b uint64) uint64 {
- if a <= b {
- return a
- }
- return b
-}
-
-// max returns the maximum.
-func max(a, b uint64) uint64 {
- if a >= b {
- return a
- }
- return b
-}
-
-// generateUUID is used to generate a random UUID.
-func generateUUID() string {
- buf := make([]byte, 16)
- if _, err := crand.Read(buf); err != nil {
- panic(fmt.Errorf("failed to read random bytes: %v", err))
- }
-
- return fmt.Sprintf("%08x-%04x-%04x-%04x-%12x",
- buf[0:4],
- buf[4:6],
- buf[6:8],
- buf[8:10],
- buf[10:16])
-}
-
-// asyncNotifyCh is used to do an async channel send
-// to a single channel without blocking.
-func asyncNotifyCh(ch chan struct{}) {
- select {
- case ch <- struct{}{}:
- default:
- }
-}
-
-// drainNotifyCh empties out a single-item notification channel without
-// blocking, and returns whether it received anything.
-func drainNotifyCh(ch chan struct{}) bool {
- select {
- case <-ch:
- return true
- default:
- return false
- }
-}
-
-// asyncNotifyBool is used to do an async notification
-// on a bool channel.
-func asyncNotifyBool(ch chan bool, v bool) {
- select {
- case ch <- v:
- default:
- }
-}
-
-// Decode reverses the encode operation on a byte slice input.
-func decodeMsgPack(buf []byte, out interface{}) error {
- r := bytes.NewBuffer(buf)
- hd := codec.MsgpackHandle{}
- dec := codec.NewDecoder(r, &hd)
- return dec.Decode(out)
-}
-
-// Encode writes an encoded object to a new bytes buffer.
-func encodeMsgPack(in interface{}) (*bytes.Buffer, error) {
- buf := bytes.NewBuffer(nil)
- hd := codec.MsgpackHandle{}
- enc := codec.NewEncoder(buf, &hd)
- err := enc.Encode(in)
- return buf, err
-}
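A round trip through the two helpers above, using a hypothetical payload struct (this would live inside the same package, since the helpers are unexported):

```go
type payload struct {
	Index uint64
	Data  []byte
}

func roundTrip(in payload) (payload, error) {
	buf, err := encodeMsgPack(in) // *bytes.Buffer of msgpack bytes
	if err != nil {
		return payload{}, err
	}
	var out payload
	err = decodeMsgPack(buf.Bytes(), &out)
	return out, err // out should equal in on success
}
```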
-
-// backoff is used to compute an exponential backoff
-// duration. Base time is scaled by the current round,
-// up to some maximum scale factor.
-func backoff(base time.Duration, round, limit uint64) time.Duration {
- power := min(round, limit)
- for power > 2 {
- base *= 2
- power--
- }
- return base
-}
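To make the scaling concrete (durations assumed for illustration, evaluated as if inside the package): nothing doubles until the round exceeds 2, then the wait doubles per round up to the limit:

```go
base := 10 * time.Millisecond
fmt.Println(backoff(base, 1, 64)) // 10ms: power <= 2, no doubling
fmt.Println(backoff(base, 3, 64)) // 20ms: one doubling
fmt.Println(backoff(base, 5, 64)) // 80ms: three doublings
fmt.Println(backoff(base, 99, 5)) // 80ms: round capped at limit 5
```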
-
-// Needed for sorting []uint64, used to determine commitment
-type uint64Slice []uint64
-
-func (p uint64Slice) Len() int { return len(p) }
-func (p uint64Slice) Less(i, j int) bool { return p[i] < p[j] }
-func (p uint64Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/README.md
index 287ecb24..8f02cdd0 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/README.md
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/README.md
@@ -3,8 +3,11 @@
cli is a library for implementing powerful command-line interfaces in Go.
cli is the library that powers the CLI for
[Packer](https://github.com/mitchellh/packer),
-[Serf](https://github.com/hashicorp/serf), and
-[Consul](https://github.com/hashicorp/consul).
+[Serf](https://github.com/hashicorp/serf),
+[Consul](https://github.com/hashicorp/consul),
+[Vault](https://github.com/hashicorp/vault),
+[Terraform](https://github.com/hashicorp/terraform), and
+[Nomad](https://github.com/hashicorp/nomad).
## Features
@@ -15,6 +18,9 @@ cli is the library that powers the CLI for
* Optional support for default subcommands so `cli` does something
other than error.
+* Support for shell autocompletion of subcommands, flags, and arguments
+ with callbacks in Go. You don't need to write any shell code.
+
* Automatic help generation for listing subcommands
* Automatic help flag recognition of `-h`, `--help`, etc.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/autocomplete.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/autocomplete.go
new file mode 100644
index 00000000..3bec6258
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/autocomplete.go
@@ -0,0 +1,43 @@
+package cli
+
+import (
+ "github.com/posener/complete/cmd/install"
+)
+
+// autocompleteInstaller is an interface to be implemented to perform the
+// autocomplete installation and uninstallation with a CLI.
+//
+// This interface is not exported because it only exists for unit tests
+// to be able to test that the installation is called properly.
+type autocompleteInstaller interface {
+ Install(string) error
+ Uninstall(string) error
+}
+
+// realAutocompleteInstaller uses the real install package to do the
+// install/uninstall.
+type realAutocompleteInstaller struct{}
+
+func (i *realAutocompleteInstaller) Install(cmd string) error {
+ return install.Install(cmd)
+}
+
+func (i *realAutocompleteInstaller) Uninstall(cmd string) error {
+ return install.Uninstall(cmd)
+}
+
+// mockAutocompleteInstaller is used for tests to record the install/uninstall.
+type mockAutocompleteInstaller struct {
+ InstallCalled bool
+ UninstallCalled bool
+}
+
+func (i *mockAutocompleteInstaller) Install(cmd string) error {
+ i.InstallCalled = true
+ return nil
+}
+
+func (i *mockAutocompleteInstaller) Uninstall(cmd string) error {
+ i.UninstallCalled = true
+ return nil
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go
index fbc0722f..b793b6f2 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/cli.go
@@ -11,6 +11,7 @@ import (
"text/template"
"github.com/armon/go-radix"
+ "github.com/posener/complete"
)
// CLI contains the state necessary to run subcommands and parse the
@@ -25,7 +26,7 @@ import (
//
// * We use longest prefix matching to find a matching subcommand. This
// means if you register "foo bar" and the user executes "cli foo qux",
-// the "foo" commmand will be executed with the arg "qux". It is up to
+// the "foo" command will be executed with the arg "qux". It is up to
// you to handle these args. One option is to just return the special
// help return code `RunResultHelp` to display help and exit.
//
@@ -66,6 +67,36 @@ type CLI struct {
// Version of the CLI.
Version string
+ // Autocomplete enables or disables subcommand auto-completion support.
+ // This is enabled by default when NewCLI is called. Otherwise, this
+	// must be enabled explicitly.
+ //
+ // Autocomplete requires the "Name" option to be set on CLI. This name
+ // should be set exactly to the binary name that is autocompleted.
+ //
+ // Autocompletion is supported via the github.com/posener/complete
+ // library. This library supports both bash and zsh. To add support
+ // for other shells, please see that library.
+ //
+ // AutocompleteInstall and AutocompleteUninstall are the global flag
+ // names for installing and uninstalling the autocompletion handlers
+ // for the user's shell. The flag should omit the hyphen(s) in front of
+ // the value. Both single and double hyphens will automatically be supported
+ // for the flag name. These default to `autocomplete-install` and
+ // `autocomplete-uninstall` respectively.
+ //
+	// AutocompleteNoDefaultFlags controls whether the default autocomplete
+	// flags like -help and -version are added to the output.
+ //
+ // AutocompleteGlobalFlags are a mapping of global flags for
+ // autocompletion. The help and version flags are automatically added.
+ Autocomplete bool
+ AutocompleteInstall string
+ AutocompleteUninstall string
+ AutocompleteNoDefaultFlags bool
+ AutocompleteGlobalFlags complete.Flags
+ autocompleteInstaller autocompleteInstaller // For tests
+
// HelpFunc and HelpWriter are used to output help information, if
// requested.
//
@@ -78,23 +109,32 @@ type CLI struct {
HelpFunc HelpFunc
HelpWriter io.Writer
+ //---------------------------------------------------------------
+ // Internal fields set automatically
+
once sync.Once
+ autocomplete *complete.Complete
commandTree *radix.Tree
commandNested bool
- isHelp bool
subcommand string
subcommandArgs []string
topFlags []string
- isVersion bool
+ // These are true when special global flags are set. We can/should
+ // probably use a bitset for this one day.
+ isHelp bool
+ isVersion bool
+ isAutocompleteInstall bool
+ isAutocompleteUninstall bool
}
// NewCLI returns a new CLI instance with sensible defaults.
func NewCLI(app, version string) *CLI {
return &CLI{
- Name: app,
- Version: version,
- HelpFunc: BasicHelpFunc(app),
+ Name: app,
+ Version: version,
+ HelpFunc: BasicHelpFunc(app),
+ Autocomplete: true,
}
}
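For context, a consumer typically wires this up roughly as follows; the command name and factory body are illustrative, not part of this diff:

```go
c := cli.NewCLI("myapp", "1.0.0") // Autocomplete now defaults to true
c.Args = os.Args[1:]
c.Commands = map[string]cli.CommandFactory{
	"server": func() (cli.Command, error) {
		return &ServerCommand{}, nil // hypothetical command
	},
}
exitStatus, err := c.Run()
if err != nil {
	log.Println(err)
}
os.Exit(exitStatus)
```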
@@ -117,10 +157,58 @@ func (c *CLI) IsVersion() bool {
func (c *CLI) Run() (int, error) {
c.once.Do(c.init)
+	// If this is an autocompletion request, satisfy it. This must be called
+	// first, before anything else, since it's possible to be autocompleting
+ // -help or -version or other flags and we want to show completions
+ // and not actually write the help or version.
+ if c.Autocomplete && c.autocomplete.Complete() {
+ return 0, nil
+ }
+
// Just show the version and exit if instructed.
if c.IsVersion() && c.Version != "" {
c.HelpWriter.Write([]byte(c.Version + "\n"))
- return 1, nil
+ return 0, nil
+ }
+
+ // Just print the help when only '-h' or '--help' is passed.
+ if c.IsHelp() && c.Subcommand() == "" {
+ c.HelpWriter.Write([]byte(c.HelpFunc(c.Commands) + "\n"))
+ return 0, nil
+ }
+
+	// If we're attempting to install or uninstall autocomplete, handle it here.
+ if c.Autocomplete {
+ // Autocomplete requires the "Name" to be set so that we know what
+		// command to set up autocomplete for.
+ if c.Name == "" {
+ return 1, fmt.Errorf(
+ "internal error: CLI.Name must be specified for autocomplete to work")
+ }
+
+ // If both install and uninstall flags are specified, then error
+ if c.isAutocompleteInstall && c.isAutocompleteUninstall {
+ return 1, fmt.Errorf(
+ "Either the autocomplete install or uninstall flag may " +
+ "be specified, but not both.")
+ }
+
+		// If the install flag is specified, perform the install.
+ if c.isAutocompleteInstall {
+ if err := c.autocompleteInstaller.Install(c.Name); err != nil {
+ return 1, err
+ }
+
+ return 0, nil
+ }
+
+ if c.isAutocompleteUninstall {
+ if err := c.autocompleteInstaller.Uninstall(c.Name); err != nil {
+ return 1, err
+ }
+
+ return 0, nil
+ }
}
// Attempt to get the factory function for creating the command
@@ -133,13 +221,13 @@ func (c *CLI) Run() (int, error) {
command, err := raw.(CommandFactory)()
if err != nil {
- return 0, err
+ return 1, err
}
// If we've been instructed to just print the help, then print it
if c.IsHelp() {
c.commandHelp(command)
- return 1, nil
+ return 0, nil
}
// If there is an invalid flag, then error
@@ -250,7 +338,7 @@ func (c *CLI) init() {
c.commandTree.Walk(walkFn)
// Insert any that we're missing
- for k, _ := range toInsert {
+ for k := range toInsert {
var f CommandFactory = func() (Command, error) {
return &MockCommand{
HelpText: "This command is accessed by using one of the subcommands below.",
@@ -262,10 +350,113 @@ func (c *CLI) init() {
}
}
+ // Setup autocomplete if we have it enabled. We have to do this after
+	// the command tree is set up so we can use the radix tree to easily find
+ // all subcommands.
+ if c.Autocomplete {
+ c.initAutocomplete()
+ }
+
// Process the args
c.processArgs()
}
+func (c *CLI) initAutocomplete() {
+ if c.AutocompleteInstall == "" {
+ c.AutocompleteInstall = defaultAutocompleteInstall
+ }
+
+ if c.AutocompleteUninstall == "" {
+ c.AutocompleteUninstall = defaultAutocompleteUninstall
+ }
+
+ if c.autocompleteInstaller == nil {
+ c.autocompleteInstaller = &realAutocompleteInstaller{}
+ }
+
+ // Build the root command
+ cmd := c.initAutocompleteSub("")
+
+ // For the root, we add the global flags to the "Flags". This way
+ // they don't show up on every command.
+ if !c.AutocompleteNoDefaultFlags {
+ cmd.Flags = map[string]complete.Predictor{
+ "-" + c.AutocompleteInstall: complete.PredictNothing,
+ "-" + c.AutocompleteUninstall: complete.PredictNothing,
+ "-help": complete.PredictNothing,
+ "-version": complete.PredictNothing,
+ }
+ }
+ cmd.GlobalFlags = c.AutocompleteGlobalFlags
+
+ c.autocomplete = complete.New(c.Name, cmd)
+}
+
+// initAutocompleteSub creates the complete.Command for a subcommand with
+// the given prefix. This will continue recursively for all subcommands.
+// The prefix "" (empty string) can be used for the root command.
+func (c *CLI) initAutocompleteSub(prefix string) complete.Command {
+ var cmd complete.Command
+ walkFn := func(k string, raw interface{}) bool {
+ if len(prefix) > 0 {
+ // If we have a prefix, trim the prefix + 1 (for the space)
+ // Example: turns "sub one" to "one" with prefix "sub"
+ k = k[len(prefix)+1:]
+ }
+
+ // Keep track of the full key so that we can nest further if necessary
+ fullKey := k
+
+ if idx := strings.LastIndex(k, " "); idx >= 0 {
+ // If there is a space, we trim up to the space
+ k = k[:idx]
+ }
+
+ if idx := strings.LastIndex(k, " "); idx >= 0 {
+			// This catches the edge case where we see "sub one"
+			// before "sub". It lets us properly set up the subcommand
+			// regardless.
+ k = k[idx+1:]
+ }
+
+ if _, ok := cmd.Sub[k]; ok {
+ // If we already tracked this subcommand then ignore
+ return false
+ }
+
+ if cmd.Sub == nil {
+ cmd.Sub = complete.Commands(make(map[string]complete.Command))
+ }
+ subCmd := c.initAutocompleteSub(fullKey)
+
+ // Instantiate the command so that we can check if the command is
+ // a CommandAutocomplete implementation. If there is an error
+ // creating the command, we just ignore it since that will be caught
+ // later.
+ impl, err := raw.(CommandFactory)()
+ if err != nil {
+ impl = nil
+ }
+
+		// Check if it implements CommandAutocomplete. If so, set up the autocompletion.
+ if c, ok := impl.(CommandAutocomplete); ok {
+ subCmd.Args = c.AutocompleteArgs()
+ subCmd.Flags = c.AutocompleteFlags()
+ }
+
+ cmd.Sub[k] = subCmd
+ return false
+ }
+
+ walkPrefix := prefix
+ if walkPrefix != "" {
+ walkPrefix += " "
+ }
+
+ c.commandTree.WalkPrefix(walkPrefix, walkFn)
+ return cmd
+}
+
func (c *CLI) commandHelp(command Command) {
// Get the template to use
tpl := strings.TrimSpace(defaultHelpTemplate)
@@ -388,16 +579,35 @@ func (c *CLI) helpCommands(prefix string) map[string]CommandFactory {
func (c *CLI) processArgs() {
for i, arg := range c.Args {
+ if arg == "--" {
+ break
+ }
+
+ // Check for help flags.
+ if arg == "-h" || arg == "-help" || arg == "--help" {
+ c.isHelp = true
+ continue
+ }
+
+ // Check for autocomplete flags
+ if c.Autocomplete {
+ if arg == "-"+c.AutocompleteInstall || arg == "--"+c.AutocompleteInstall {
+ c.isAutocompleteInstall = true
+ continue
+ }
+
+ if arg == "-"+c.AutocompleteUninstall || arg == "--"+c.AutocompleteUninstall {
+ c.isAutocompleteUninstall = true
+ continue
+ }
+ }
+
if c.subcommand == "" {
- // Check for version and help flags if not in a subcommand
+ // Check for version flags if not in a subcommand.
if arg == "-v" || arg == "-version" || arg == "--version" {
c.isVersion = true
continue
}
- if arg == "-h" || arg == "-help" || arg == "--help" {
- c.isHelp = true
- continue
- }
if arg != "" && arg[0] == '-' {
// Record the arg...
@@ -444,11 +654,16 @@ func (c *CLI) processArgs() {
}
}
+// defaultAutocompleteInstall and defaultAutocompleteUninstall are the
+// default values for the autocomplete install and uninstall flags.
+const defaultAutocompleteInstall = "autocomplete-install"
+const defaultAutocompleteUninstall = "autocomplete-uninstall"
+
const defaultHelpTemplate = `
{{.Help}}{{if gt (len .Subcommands) 0}}
Subcommands:
-{{ range $value := .Subcommands }}
+{{- range $value := .Subcommands }}
{{ $value.NameAligned }} {{ $value.Synopsis }}{{ end }}
-{{ end }}
+{{- end }}
`
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command.go
index b4924eb0..bed11faf 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command.go
@@ -1,5 +1,9 @@
package cli
+import (
+ "github.com/posener/complete"
+)
+
const (
// RunResultHelp is a value that can be returned from Run to signal
// to the CLI to render the help output.
@@ -26,6 +30,22 @@ type Command interface {
Synopsis() string
}
+// CommandAutocomplete is an extension of Command that enables fine-grained
+// autocompletion. Subcommand autocompletion will work even if this interface
+// is not implemented. By implementing this interface, more advanced
+// autocompletion is enabled.
+type CommandAutocomplete interface {
+ // AutocompleteArgs returns the argument predictor for this command.
+ // If argument completion is not supported, this should return
+ // complete.PredictNothing.
+ AutocompleteArgs() complete.Predictor
+
+ // AutocompleteFlags returns a mapping of supported flags and autocomplete
+ // options for this command. The map key for the Flags map should be the
+ // complete flag such as "-foo" or "--foo".
+ AutocompleteFlags() complete.Flags
+}
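A sketch of a command opting in to fine-grained completion; FooCommand is hypothetical, and the predictors come from github.com/posener/complete:

```go
func (c *FooCommand) AutocompleteArgs() complete.Predictor {
	return complete.PredictFiles("*.txt") // complete positional args as files
}

func (c *FooCommand) AutocompleteFlags() complete.Flags {
	return complete.Flags{
		"-verbose": complete.PredictNothing,
	}
}
```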
+
// CommandHelpTemplate is an extension of Command that also has a function
// for returning a template for the help rather than the help itself. In
// this scenario, both Help and HelpTemplate should be implemented.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command_mock.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command_mock.go
index 6371e573..7a584b7e 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command_mock.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/command_mock.go
@@ -1,5 +1,9 @@
package cli
+import (
+ "github.com/posener/complete"
+)
+
// MockCommand is an implementation of Command that can be used for tests.
// It is publicly exported from this package in case you want to use it
// externally.
@@ -29,6 +33,23 @@ func (c *MockCommand) Synopsis() string {
return c.SynopsisText
}
+// MockCommandAutocomplete is an implementation of CommandAutocomplete.
+type MockCommandAutocomplete struct {
+ MockCommand
+
+ // Settable
+ AutocompleteArgsValue complete.Predictor
+ AutocompleteFlagsValue complete.Flags
+}
+
+func (c *MockCommandAutocomplete) AutocompleteArgs() complete.Predictor {
+ return c.AutocompleteArgsValue
+}
+
+func (c *MockCommandAutocomplete) AutocompleteFlags() complete.Flags {
+ return c.AutocompleteFlagsValue
+}
+
// MockCommandHelpTemplate is an implementation of CommandHelpTemplate.
type MockCommandHelpTemplate struct {
MockCommand
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/help.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/help.go
index 67ea8c82..f5ca58f5 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/help.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/help.go
@@ -18,7 +18,7 @@ func BasicHelpFunc(app string) HelpFunc {
return func(commands map[string]CommandFactory) string {
var buf bytes.Buffer
buf.WriteString(fmt.Sprintf(
- "usage: %s [--version] [--help] <command> [<args>]\n\n",
+ "Usage: %s [--version] [--help] <command> [<args>]\n\n",
app))
buf.WriteString("Available commands are:\n")
@@ -26,7 +26,7 @@ func BasicHelpFunc(app string) HelpFunc {
// key length so they can be aligned properly.
keys := make([]string, 0, len(commands))
maxKeyLen := 0
- for key, _ := range commands {
+ for key := range commands {
if len(key) > maxKeyLen {
maxKeyLen = len(key)
}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/ui_mock.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/ui_mock.go
index c4677285..0bfe0a19 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/ui_mock.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/cli/ui_mock.go
@@ -7,12 +7,25 @@ import (
"sync"
)
-// MockUi is a mock UI that is used for tests and is exported publicly for
-// use in external tests if needed as well.
+// NewMockUi returns a fully initialized MockUi instance
+// which is safe for concurrent use.
+func NewMockUi() *MockUi {
+ m := new(MockUi)
+ m.once.Do(m.init)
+ return m
+}
+
+// MockUi is a mock UI that is used for tests and is exported publicly
+// for use in external tests if needed as well. Do not instantiate this
+// directly, since the buffers are only initialized on the first write; if
+// nothing is ever written you will get a nil panic. Please use the
+// NewMockUi() constructor function instead. You can fix your code with
+//
+// sed -i -e 's/new(cli.MockUi)/cli.NewMockUi()/g' *_test.go
type MockUi struct {
InputReader io.Reader
- ErrorWriter *bytes.Buffer
- OutputWriter *bytes.Buffer
+ ErrorWriter *syncBuffer
+ OutputWriter *syncBuffer
once sync.Once
}
@@ -59,6 +72,40 @@ func (u *MockUi) Warn(message string) {
}
func (u *MockUi) init() {
- u.ErrorWriter = new(bytes.Buffer)
- u.OutputWriter = new(bytes.Buffer)
+ u.ErrorWriter = new(syncBuffer)
+ u.OutputWriter = new(syncBuffer)
+}
+
+type syncBuffer struct {
+ sync.RWMutex
+ b bytes.Buffer
+}
+
+func (b *syncBuffer) Write(data []byte) (int, error) {
+ b.Lock()
+ defer b.Unlock()
+ return b.b.Write(data)
+}
+
+func (b *syncBuffer) Read(data []byte) (int, error) {
+ b.RLock()
+ defer b.RUnlock()
+ return b.b.Read(data)
+}
+
+func (b *syncBuffer) Reset() {
+ b.Lock()
+ b.b.Reset()
+ b.Unlock()
+}
+
+func (b *syncBuffer) String() string {
+ return string(b.Bytes())
+}
+
+func (b *syncBuffer) Bytes() []byte {
+ b.RLock()
+ data := b.b.Bytes()
+ b.RUnlock()
+ return data
}
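Per the doc comment above, tests should now construct the UI via the constructor. A minimal sketch, with a hypothetical command under test:

```go
func TestFooCommand(t *testing.T) {
	ui := cli.NewMockUi() // buffers are initialized up front
	cmd := &FooCommand{Ui: ui}
	if code := cmd.Run(nil); code != 0 {
		t.Fatalf("bad exit code %d: %s", code, ui.ErrorWriter.String())
	}
	got := ui.OutputWriter.String() // safe to read even across goroutines
	_ = got
}
```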
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/LICENSE
new file mode 100644
index 00000000..a3866a29
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2016 Mitchell Hashimoto
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/README.md
new file mode 100644
index 00000000..26781bba
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/README.md
@@ -0,0 +1,52 @@
+# go-testing-interface
+
+go-testing-interface is a Go library that exports an interface that
+`*testing.T` implements as well as a runtime version you can use in its
+place.
+
+This library exists so that you can export test helpers as a
+public API without depending on the "testing" package, since you can't
+create a `*testing.T` struct manually. This lets you, for example, use the
+public testing APIs to generate mock data at runtime, rather than just at
+test time.
+
+## Usage & Example
+
+For usage and examples see the [Godoc](http://godoc.org/github.com/mitchellh/go-testing-interface).
+
+Given a test helper written using `go-testing-interface` like this:
+
+ import "github.com/mitchellh/go-testing-interface"
+
+ func TestHelper(t testing.T) {
+ t.Fatal("I failed")
+ }
+
+You can call the test helper in a real test easily:
+
+ import "testing"
+
+ func TestThing(t *testing.T) {
+ TestHelper(t)
+ }
+
+You can also call the test helper at runtime if needed:
+
+ import "github.com/mitchellh/go-testing-interface"
+
+ func main() {
+ TestHelper(&testing.RuntimeT{})
+ }
+
+## Why?!
+
+**Why would I call a test helper that takes a `*testing.T` at runtime?**
+
+You probably shouldn't. The only use case I've seen (and I've had) for this
+is to implement a "dev mode" for a service where the test helpers are used
+to populate mock data, create a mock DB, perhaps run service dependencies
+in-memory, etc.
+
+Outside of a "dev mode", I've never seen a use case for this and I think
+there shouldn't be one since the point of the `testing.T` interface is that
+you can fail immediately.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing.go
new file mode 100644
index 00000000..204afb42
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing.go
@@ -0,0 +1,84 @@
+// +build !go1.9
+
+package testing
+
+import (
+ "fmt"
+ "log"
+)
+
+// T is the interface that mimics the standard library *testing.T.
+//
+// In unit tests you can just pass a *testing.T struct. At runtime, outside
+// of tests, you can pass in a RuntimeT struct from this package.
+type T interface {
+ Error(args ...interface{})
+ Errorf(format string, args ...interface{})
+ Fail()
+ FailNow()
+ Failed() bool
+ Fatal(args ...interface{})
+ Fatalf(format string, args ...interface{})
+ Log(args ...interface{})
+ Logf(format string, args ...interface{})
+ Name() string
+ Skip(args ...interface{})
+ SkipNow()
+ Skipf(format string, args ...interface{})
+ Skipped() bool
+}
+
+// RuntimeT implements T and can be instantiated and run at runtime to
+// mimic *testing.T behavior. Unlike *testing.T, this will simply panic
+// for calls to Fatal. For calls to Error, you'll have to check the errors
+// list to determine whether to exit yourself. Name and Skip methods are
+// unimplemented noops.
+type RuntimeT struct {
+ failed bool
+}
+
+func (t *RuntimeT) Error(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+ t.Fail()
+}
+
+func (t *RuntimeT) Errorf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+ t.Fail()
+}
+
+func (t *RuntimeT) Fatal(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+ t.FailNow()
+}
+
+func (t *RuntimeT) Fatalf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+ t.FailNow()
+}
+
+func (t *RuntimeT) Fail() {
+ t.failed = true
+}
+
+func (t *RuntimeT) FailNow() {
+ panic("testing.T failed, see logs for output (if any)")
+}
+
+func (t *RuntimeT) Failed() bool {
+ return t.failed
+}
+
+func (t *RuntimeT) Log(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+}
+
+func (t *RuntimeT) Logf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+}
+
+func (t *RuntimeT) Name() string { return "" }
+func (t *RuntimeT) Skip(args ...interface{}) {}
+func (t *RuntimeT) SkipNow() {}
+func (t *RuntimeT) Skipf(format string, args ...interface{}) {}
+func (t *RuntimeT) Skipped() bool { return false }
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing_go19.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing_go19.go
new file mode 100644
index 00000000..07fbcb58
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/mitchellh/go-testing-interface/testing_go19.go
@@ -0,0 +1,80 @@
+// +build go1.9
+
+// NOTE: This is a temporary copy of testing.go for Go 1.9 with the addition
+// of "Helper" to the T interface. Go 1.9 at the time of typing is in RC
+// and is set for release shortly. We'll support this on master as the default
+// as soon as 1.9 is released.
+
+package testing
+
+import (
+ "fmt"
+ "log"
+)
+
+// T is the interface that mimics the standard library *testing.T.
+//
+// In unit tests you can just pass a *testing.T struct. At runtime, outside
+// of tests, you can pass in a RuntimeT struct from this package.
+type T interface {
+ Error(args ...interface{})
+ Errorf(format string, args ...interface{})
+ Fatal(args ...interface{})
+ Fatalf(format string, args ...interface{})
+ Fail()
+ FailNow()
+ Failed() bool
+ Helper()
+ Log(args ...interface{})
+ Logf(format string, args ...interface{})
+}
+
+// RuntimeT implements T and can be instantiated and run at runtime to
+// mimic *testing.T behavior. Unlike *testing.T, this will simply panic
+// for calls to Fatal. For calls to Error, you'll have to check Failed()
+// yourself to determine whether to exit.
+type RuntimeT struct {
+ failed bool
+}
+
+func (t *RuntimeT) Error(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+ t.Fail()
+}
+
+func (t *RuntimeT) Errorf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+ t.Fail()
+}
+
+func (t *RuntimeT) Fatal(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+ t.FailNow()
+}
+
+func (t *RuntimeT) Fatalf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+ t.FailNow()
+}
+
+func (t *RuntimeT) Fail() {
+ t.failed = true
+}
+
+func (t *RuntimeT) FailNow() {
+ panic("testing.T failed, see logs for output (if any)")
+}
+
+func (t *RuntimeT) Failed() bool {
+ return t.failed
+}
+
+func (t *RuntimeT) Helper() {}
+
+func (t *RuntimeT) Log(args ...interface{}) {
+ log.Println(fmt.Sprintln(args...))
+}
+
+func (t *RuntimeT) Logf(format string, args ...interface{}) {
+ log.Println(fmt.Sprintf(format, args...))
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/LICENSE
new file mode 100644
index 00000000..835ba3e7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/LICENSE
@@ -0,0 +1,23 @@
+Copyright (c) 2015, Dave Cheney <dave@cheney.net>
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/README.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/README.md
new file mode 100644
index 00000000..273db3c9
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/README.md
@@ -0,0 +1,52 @@
+# errors [![Travis-CI](https://travis-ci.org/pkg/errors.svg)](https://travis-ci.org/pkg/errors) [![AppVeyor](https://ci.appveyor.com/api/projects/status/b98mptawhudj53ep/branch/master?svg=true)](https://ci.appveyor.com/project/davecheney/errors/branch/master) [![GoDoc](https://godoc.org/github.com/pkg/errors?status.svg)](http://godoc.org/github.com/pkg/errors) [![Report card](https://goreportcard.com/badge/github.com/pkg/errors)](https://goreportcard.com/report/github.com/pkg/errors)
+
+Package errors provides simple error handling primitives.
+
+`go get github.com/pkg/errors`
+
+The traditional error handling idiom in Go is roughly akin to
+```go
+if err != nil {
+ return err
+}
+```
+which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error.
+
+## Adding context to an error
+
+The errors.Wrap function returns a new error that adds context to the original error. For example
+```go
+_, err := ioutil.ReadAll(r)
+if err != nil {
+ return errors.Wrap(err, "read failed")
+}
+```
+## Retrieving the cause of an error
+
+Using `errors.Wrap` constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements this interface can be inspected by `errors.Cause`.
+```go
+type causer interface {
+ Cause() error
+}
+```
+`errors.Cause` will recursively retrieve the topmost error which does not implement `causer`, which is assumed to be the original cause. For example:
+```go
+switch err := errors.Cause(err).(type) {
+case *MyError:
+ // handle specifically
+default:
+ // unknown error
+}
+```
+
+[Read the package documentation for more information](https://godoc.org/github.com/pkg/errors).
+
+## Contributing
+
+We welcome pull requests, bug fixes and issue reports. With that said, the bar for adding new symbols to this package is intentionally set high.
+
+Before proposing a change, please discuss your change by raising an issue.
+
+## Licence
+
+BSD-2-Clause
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/appveyor.yml b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/appveyor.yml
new file mode 100644
index 00000000..a932eade
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/appveyor.yml
@@ -0,0 +1,32 @@
+version: build-{build}.{branch}
+
+clone_folder: C:\gopath\src\github.com\pkg\errors
+shallow_clone: true # for startup speed
+
+environment:
+ GOPATH: C:\gopath
+
+platform:
+ - x64
+
+# http://www.appveyor.com/docs/installed-software
+install:
+ # some helpful output for debugging builds
+ - go version
+ - go env
+ # pre-installed MinGW at C:\MinGW is 32bit only
+ # but MSYS2 at C:\msys64 has mingw64
+ - set PATH=C:\msys64\mingw64\bin;%PATH%
+ - gcc --version
+ - g++ --version
+
+build_script:
+ - go install -v ./...
+
+test_script:
+ - set PATH=C:\gopath\bin;%PATH%
+ - go test -v ./...
+
+#artifacts:
+# - path: '%GOPATH%\bin\*.exe'
+deploy: off
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/errors.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/errors.go
new file mode 100644
index 00000000..842ee804
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/errors.go
@@ -0,0 +1,269 @@
+// Package errors provides simple error handling primitives.
+//
+// The traditional error handling idiom in Go is roughly akin to
+//
+// if err != nil {
+// return err
+// }
+//
+// which applied recursively up the call stack results in error reports
+// without context or debugging information. The errors package allows
+// programmers to add context to the failure path in their code in a way
+// that does not destroy the original value of the error.
+//
+// Adding context to an error
+//
+// The errors.Wrap function returns a new error that adds context to the
+// original error by recording a stack trace at the point Wrap is called,
+// and the supplied message. For example
+//
+// _, err := ioutil.ReadAll(r)
+// if err != nil {
+// return errors.Wrap(err, "read failed")
+// }
+//
+// If additional control is required, the errors.WithStack and errors.WithMessage
+// functions destructure errors.Wrap into its component operations of annotating
+// an error with a stack trace and with a message, respectively.
+//
+// Retrieving the cause of an error
+//
+// Using errors.Wrap constructs a stack of errors, adding context to the
+// preceding error. Depending on the nature of the error it may be necessary
+// to reverse the operation of errors.Wrap to retrieve the original error
+// for inspection. Any error value which implements this interface
+//
+// type causer interface {
+// Cause() error
+// }
+//
+// can be inspected by errors.Cause. errors.Cause will recursively retrieve
+// the topmost error which does not implement causer, which is assumed to be
+// the original cause. For example:
+//
+// switch err := errors.Cause(err).(type) {
+// case *MyError:
+// // handle specifically
+// default:
+// // unknown error
+// }
+//
+// The causer interface is not exported by this package, but is considered a
+// part of its stable public API.
+//
+// Formatted printing of errors
+//
+// All error values returned from this package implement fmt.Formatter and can
+// be formatted by the fmt package. The following verbs are supported
+//
+// %s print the error. If the error has a Cause it will be
+// printed recursively
+// %v see %s
+// %+v extended format. Each Frame of the error's StackTrace will
+// be printed in detail.
+//
+// Retrieving the stack trace of an error or wrapper
+//
+// New, Errorf, Wrap, and Wrapf record a stack trace at the point they are
+// invoked. This information can be retrieved with the following interface.
+//
+// type stackTracer interface {
+// StackTrace() errors.StackTrace
+// }
+//
+// Where errors.StackTrace is defined as
+//
+// type StackTrace []Frame
+//
+// The Frame type represents a call site in the stack trace. Frame supports
+// the fmt.Formatter interface that can be used for printing information about
+// the stack trace of this error. For example:
+//
+// if err, ok := err.(stackTracer); ok {
+// for _, f := range err.StackTrace() {
+// fmt.Printf("%+s:%d", f)
+// }
+// }
+//
+// The stackTracer interface is not exported by this package, but is considered
+// a part of its stable public API.
+//
+// See the documentation for Frame.Format for more details.
+package errors
+
+import (
+ "fmt"
+ "io"
+)
+
+// New returns an error with the supplied message.
+// New also records the stack trace at the point it was called.
+func New(message string) error {
+ return &fundamental{
+ msg: message,
+ stack: callers(),
+ }
+}
+
+// Errorf formats according to a format specifier and returns the string
+// as a value that satisfies error.
+// Errorf also records the stack trace at the point it was called.
+func Errorf(format string, args ...interface{}) error {
+ return &fundamental{
+ msg: fmt.Sprintf(format, args...),
+ stack: callers(),
+ }
+}
+
+// fundamental is an error that has a message and a stack, but no caller.
+type fundamental struct {
+ msg string
+ *stack
+}
+
+func (f *fundamental) Error() string { return f.msg }
+
+func (f *fundamental) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ if s.Flag('+') {
+ io.WriteString(s, f.msg)
+ f.stack.Format(s, verb)
+ return
+ }
+ fallthrough
+ case 's':
+ io.WriteString(s, f.msg)
+ case 'q':
+ fmt.Fprintf(s, "%q", f.msg)
+ }
+}
+
+// WithStack annotates err with a stack trace at the point WithStack was called.
+// If err is nil, WithStack returns nil.
+func WithStack(err error) error {
+ if err == nil {
+ return nil
+ }
+ return &withStack{
+ err,
+ callers(),
+ }
+}
+
+type withStack struct {
+ error
+ *stack
+}
+
+func (w *withStack) Cause() error { return w.error }
+
+func (w *withStack) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ if s.Flag('+') {
+ fmt.Fprintf(s, "%+v", w.Cause())
+ w.stack.Format(s, verb)
+ return
+ }
+ fallthrough
+ case 's':
+ io.WriteString(s, w.Error())
+ case 'q':
+ fmt.Fprintf(s, "%q", w.Error())
+ }
+}
+
+// Wrap returns an error annotating err with a stack trace
+// at the point Wrap is called, and the supplied message.
+// If err is nil, Wrap returns nil.
+func Wrap(err error, message string) error {
+ if err == nil {
+ return nil
+ }
+ err = &withMessage{
+ cause: err,
+ msg: message,
+ }
+ return &withStack{
+ err,
+ callers(),
+ }
+}
+
+// Wrapf returns an error annotating err with a stack trace
+// at the point Wrapf is called, and the format specifier.
+// If err is nil, Wrapf returns nil.
+func Wrapf(err error, format string, args ...interface{}) error {
+ if err == nil {
+ return nil
+ }
+ err = &withMessage{
+ cause: err,
+ msg: fmt.Sprintf(format, args...),
+ }
+ return &withStack{
+ err,
+ callers(),
+ }
+}
+
+// WithMessage annotates err with a new message.
+// If err is nil, WithMessage returns nil.
+func WithMessage(err error, message string) error {
+ if err == nil {
+ return nil
+ }
+ return &withMessage{
+ cause: err,
+ msg: message,
+ }
+}
+
+type withMessage struct {
+ cause error
+ msg string
+}
+
+func (w *withMessage) Error() string { return w.msg + ": " + w.cause.Error() }
+func (w *withMessage) Cause() error { return w.cause }
+
+func (w *withMessage) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ if s.Flag('+') {
+ fmt.Fprintf(s, "%+v\n", w.Cause())
+ io.WriteString(s, w.msg)
+ return
+ }
+ fallthrough
+ case 's', 'q':
+ io.WriteString(s, w.Error())
+ }
+}
+
+// Cause returns the underlying cause of the error, if possible.
+// An error value has a cause if it implements the following
+// interface:
+//
+// type causer interface {
+// Cause() error
+// }
+//
+// If the error does not implement Cause, the original error will
+// be returned. If the error is nil, nil will be returned without further
+// investigation.
+func Cause(err error) error {
+ type causer interface {
+ Cause() error
+ }
+
+ for err != nil {
+ cause, ok := err.(causer)
+ if !ok {
+ break
+ }
+ err = cause.Cause()
+ }
+ return err
+}
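A minimal sketch of the Wrap/Cause round trip described in the doc comment above, assuming the vendored pkg/errors API (readConfig is an illustrative function):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// readConfig is an illustrative function that fails with an annotated error.
func readConfig() error {
	return errors.New("file not found") // stack trace is recorded here
}

func main() {
	err := errors.Wrap(readConfig(), "loading config")
	fmt.Println(err)               // loading config: file not found
	fmt.Println(errors.Cause(err)) // file not found
	fmt.Printf("%+v\n", err)       // messages plus the recorded stack traces
}
```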
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/stack.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/stack.go
new file mode 100644
index 00000000..cbe3f3e3
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/pkg/errors/stack.go
@@ -0,0 +1,186 @@
+package errors
+
+import (
+ "fmt"
+ "io"
+ "path"
+ "runtime"
+ "strings"
+)
+
+// Frame represents a program counter inside a stack frame.
+type Frame uintptr
+
+// pc returns the program counter for this frame;
+// multiple frames may have the same PC value.
+func (f Frame) pc() uintptr { return uintptr(f) - 1 }
+
+// file returns the full path to the file that contains the
+// function for this Frame's pc.
+func (f Frame) file() string {
+ fn := runtime.FuncForPC(f.pc())
+ if fn == nil {
+ return "unknown"
+ }
+ file, _ := fn.FileLine(f.pc())
+ return file
+}
+
+// line returns the line number of source code of the
+// function for this Frame's pc.
+func (f Frame) line() int {
+ fn := runtime.FuncForPC(f.pc())
+ if fn == nil {
+ return 0
+ }
+ _, line := fn.FileLine(f.pc())
+ return line
+}
+
+// Format formats the frame according to the fmt.Formatter interface.
+//
+// %s source file
+// %d source line
+// %n function name
+// %v equivalent to %s:%d
+//
+// Format accepts flags that alter the printing of some verbs, as follows:
+//
+// %+s path of source file relative to the compile time GOPATH
+// %+v equivalent to %+s:%d
+func (f Frame) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 's':
+ switch {
+ case s.Flag('+'):
+ pc := f.pc()
+ fn := runtime.FuncForPC(pc)
+ if fn == nil {
+ io.WriteString(s, "unknown")
+ } else {
+ file, _ := fn.FileLine(pc)
+ fmt.Fprintf(s, "%s\n\t%s", fn.Name(), file)
+ }
+ default:
+ io.WriteString(s, path.Base(f.file()))
+ }
+ case 'd':
+ fmt.Fprintf(s, "%d", f.line())
+ case 'n':
+ name := runtime.FuncForPC(f.pc()).Name()
+ io.WriteString(s, funcname(name))
+ case 'v':
+ f.Format(s, 's')
+ io.WriteString(s, ":")
+ f.Format(s, 'd')
+ }
+}
+
+// StackTrace is a stack of Frames from innermost (newest) to outermost (oldest).
+type StackTrace []Frame
+
+// Format formats the stack of Frames according to the fmt.Formatter interface.
+//
+// %s lists source files for each Frame in the stack
+// %v lists the source file and line number for each Frame in the stack
+//
+// Format accepts flags that alter the printing of some verbs, as follows:
+//
+// %+v Prints filename, function, and line number for each Frame in the stack.
+func (st StackTrace) Format(s fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ switch {
+ case s.Flag('+'):
+ for _, f := range st {
+ fmt.Fprintf(s, "\n%+v", f)
+ }
+ case s.Flag('#'):
+ fmt.Fprintf(s, "%#v", []Frame(st))
+ default:
+ fmt.Fprintf(s, "%v", []Frame(st))
+ }
+ case 's':
+ fmt.Fprintf(s, "%s", []Frame(st))
+ }
+}
+
+// stack represents a stack of program counters.
+type stack []uintptr
+
+func (s *stack) Format(st fmt.State, verb rune) {
+ switch verb {
+ case 'v':
+ switch {
+ case st.Flag('+'):
+ for _, pc := range *s {
+ f := Frame(pc)
+ fmt.Fprintf(st, "\n%+v", f)
+ }
+ }
+ }
+}
+
+func (s *stack) StackTrace() StackTrace {
+ f := make([]Frame, len(*s))
+ for i := 0; i < len(f); i++ {
+ f[i] = Frame((*s)[i])
+ }
+ return f
+}
+
+func callers() *stack {
+ const depth = 32
+ var pcs [depth]uintptr
+ n := runtime.Callers(3, pcs[:])
+ var st stack = pcs[0:n]
+ return &st
+}
+
+// funcname removes the path prefix component of a function's name reported by func.Name().
+func funcname(name string) string {
+ i := strings.LastIndex(name, "/")
+ name = name[i+1:]
+ i = strings.Index(name, ".")
+ return name[i+1:]
+}
+
+func trimGOPATH(name, file string) string {
+ // Here we want to get the source file path relative to the compile time
+ // GOPATH. As of Go 1.6.x there is no direct way to know the compiled
+ // GOPATH at runtime, but we can infer the number of path segments in the
+ // GOPATH. We note that fn.Name() returns the function name qualified by
+ // the import path, which does not include the GOPATH. Thus we can trim
+ // segments from the beginning of the file path until the number of path
+ // separators remaining is one more than the number of path separators in
+ // the function name. For example, given:
+ //
+ // GOPATH /home/user
+ // file /home/user/src/pkg/sub/file.go
+ // fn.Name() pkg/sub.Type.Method
+ //
+ // We want to produce:
+ //
+ // pkg/sub/file.go
+ //
+ // From this we can easily see that fn.Name() has one less path separator
+ // than our desired output. We count separators from the end of the file
+ // path until we find two more than in the function name, and then move
+ // one character forward to preserve the initial path segment without a
+ // leading separator.
+ const sep = "/"
+ goal := strings.Count(name, sep) + 2
+ i := len(file)
+ for n := 0; n < goal; n++ {
+ i = strings.LastIndex(file[:i], sep)
+ if i == -1 {
+ // not enough separators found, set i so that the slice expression
+ // below leaves file unmodified
+ i = -len(sep)
+ break
+ }
+ }
+ // get back to 0 or trim the leading separator
+ file = file[i+len(sep):]
+ return file
+}
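A short sketch of reading the recorded stack, using the stackTracer interface documented in errors.go (re-declared locally here, since the package does not export it):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	err := errors.New("boom")

	// stackTracer mirrors the unexported interface described in the docs.
	type stackTracer interface {
		StackTrace() errors.StackTrace
	}

	if st, ok := err.(stackTracer); ok {
		for _, f := range st.StackTrace() {
			// %+s prints function and file, %d prints the line number.
			fmt.Printf("%+s:%d\n", f, f)
		}
	}
}
```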
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/LICENSE.txt b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/LICENSE.txt
new file mode 100644
index 00000000..16249b4a
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/LICENSE.txt
@@ -0,0 +1,21 @@
+The MIT License
+
+Copyright (c) 2017 Eyal Posener
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE. \ No newline at end of file
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/args.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/args.go
new file mode 100644
index 00000000..73c356d7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/args.go
@@ -0,0 +1,75 @@
+package complete
+
+import (
+ "os"
+ "path/filepath"
+)
+
+// Args describes command line arguments
+type Args struct {
+ // All lists all arguments in the command line (not including the command itself).
+ All []string
+ // Completed lists all fully typed arguments in the command line. If the
+ // last argument is still being typed (there is no space after it), it
+ // won't appear in this list.
+ Completed []string
+ // Last is the argument currently being typed. If the last character in
+ // the command line is a space, this argument will be empty; otherwise it
+ // is the last word.
+ Last string
+ // LastCompleted is the last argument that was fully typed. If the last
+ // character in the command line is a space, this is the last word;
+ // otherwise, it is the word before that.
+ LastCompleted string
+}
+
+// Directory gives the directory of the last argument currently being
+// written, if it represents a file name being typed.
+// In case it does not, we fall back to the current directory.
+func (a Args) Directory() string {
+ if info, err := os.Stat(a.Last); err == nil && info.IsDir() {
+ return fixPathForm(a.Last, a.Last)
+ }
+ dir := filepath.Dir(a.Last)
+ if info, err := os.Stat(dir); err != nil || !info.IsDir() {
+ return "./"
+ }
+ return fixPathForm(a.Last, dir)
+}
+
+func newArgs(line []string) Args {
+ completed := removeLast(line[1:])
+ return Args{
+ All: line[1:],
+ Completed: completed,
+ Last: last(line),
+ LastCompleted: last(completed),
+ }
+}
+
+func (a Args) from(i int) Args {
+ if i > len(a.All) {
+ i = len(a.All)
+ }
+ a.All = a.All[i:]
+
+ if i > len(a.Completed) {
+ i = len(a.Completed)
+ }
+ a.Completed = a.Completed[i:]
+ return a
+}
+
+func removeLast(a []string) []string {
+ if len(a) > 0 {
+ return a[:len(a)-1]
+ }
+ return a
+}
+
+func last(args []string) (last string) {
+ if len(args) > 0 {
+ last = args[len(args)-1]
+ }
+ return
+}
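To make the field semantics concrete, here is a hand-constructed Args value for the in-progress command line `mycmd build -o ./bin/ap` (a sketch; in real use the package builds this itself from the completion line):

```go
package main

import (
	"fmt"

	"github.com/posener/complete"
)

func main() {
	// What the parser would produce for `mycmd build -o ./bin/ap`
	// while "./bin/ap" is still being typed.
	a := complete.Args{
		All:           []string{"build", "-o", "./bin/ap"},
		Completed:     []string{"build", "-o"},
		Last:          "./bin/ap",
		LastCompleted: "-o",
	}
	fmt.Println(a.Directory()) // the directory used for file completion
}
```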
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/cmd.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/cmd.go
new file mode 100644
index 00000000..7137dee1
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/cmd.go
@@ -0,0 +1,128 @@
+// Package cmd provides command line options for the complete tool
+package cmd
+
+import (
+ "errors"
+ "flag"
+ "fmt"
+ "os"
+ "strings"
+
+ "github.com/posener/complete/cmd/install"
+)
+
+// CLI for command line
+type CLI struct {
+ Name string
+ InstallName string
+ UninstallName string
+
+ install bool
+ uninstall bool
+ yes bool
+}
+
+const (
+ defaultInstallName = "install"
+ defaultUninstallName = "uninstall"
+)
+
+// Run is used when running complete in command line mode.
+// This mode is entered when complete is not completing words, but is
+// being asked to install or uninstall itself.
+func (f *CLI) Run() bool {
+ err := f.validate()
+ if err != nil {
+ os.Stderr.WriteString(err.Error() + "\n")
+ os.Exit(1)
+ }
+
+ switch {
+ case f.install:
+ f.prompt()
+ err = install.Install(f.Name)
+ case f.uninstall:
+ f.prompt()
+ err = install.Uninstall(f.Name)
+ default:
+ // none of the action flags matched,
+ // returning false should make the real program execute
+ return false
+ }
+
+ if err != nil {
+ fmt.Printf("%s failed! %s\n", f.action(), err)
+ os.Exit(3)
+ }
+ fmt.Println("Done!")
+ return true
+}
+
+// prompt asks the user for approval and
+// exits if approval was not given.
+func (f *CLI) prompt() {
+ defer fmt.Println(f.action() + "ing...")
+ if f.yes {
+ return
+ }
+ fmt.Printf("%s completion for %s? ", f.action(), f.Name)
+ var answer string
+ fmt.Scanln(&answer)
+
+ switch strings.ToLower(answer) {
+ case "y", "yes":
+ return
+ default:
+ fmt.Println("Cancelling...")
+ os.Exit(1)
+ }
+}
+
+// AddFlags adds the CLI flags to the flag set.
+// If flags is nil, the default command line flags will be taken.
+// Set the InstallName and UninstallName fields to non-empty strings to
+// override the default flag names.
+func (f *CLI) AddFlags(flags *flag.FlagSet) {
+ if flags == nil {
+ flags = flag.CommandLine
+ }
+
+ if f.InstallName == "" {
+ f.InstallName = defaultInstallName
+ }
+ if f.UninstallName == "" {
+ f.UninstallName = defaultUninstallName
+ }
+
+ if flags.Lookup(f.InstallName) == nil {
+ flags.BoolVar(&f.install, f.InstallName, false,
+ fmt.Sprintf("Install completion for %s command", f.Name))
+ }
+ if flags.Lookup(f.UninstallName) == nil {
+ flags.BoolVar(&f.uninstall, f.UninstallName, false,
+ fmt.Sprintf("Uninstall completion for %s command", f.Name))
+ }
+ if flags.Lookup("y") == nil {
+ flags.BoolVar(&f.yes, "y", false, "Don't prompt user for typing 'yes'")
+ }
+}
+
+// validate the CLI
+func (f *CLI) validate() error {
+ if f.install && f.uninstall {
+ return errors.New("Install and uninstall are mutually exclusive")
+ }
+ return nil
+}
+
+// action name according to the CLI values.
+func (f *CLI) action() string {
+ switch {
+ case f.install:
+ return "Install"
+ case f.uninstall:
+ return "Uninstall"
+ default:
+ return "unknown"
+ }
+}
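A minimal sketch of wiring this CLI into a program's own flag handling, based on the AddFlags and Run methods above (mycmd is a placeholder name):

```go
package main

import (
	"flag"

	"github.com/posener/complete/cmd"
)

func main() {
	cli := cmd.CLI{Name: "mycmd"}
	cli.AddFlags(nil) // registers -install, -uninstall and -y on flag.CommandLine
	flag.Parse()

	if cli.Run() {
		return // an install or uninstall action was performed
	}
	// ... normal program logic continues here ...
}
```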
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/bash.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/bash.go
new file mode 100644
index 00000000..a287f998
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/bash.go
@@ -0,0 +1,32 @@
+package install
+
+import "fmt"
+
+// (un)install in bash
+// basically adds/removes a line to/from .bashrc:
+//
+// complete -C </path/to/completion/command> <command>
+type bash struct {
+ rc string
+}
+
+func (b bash) Install(cmd, bin string) error {
+ completeCmd := b.cmd(cmd, bin)
+ if lineInFile(b.rc, completeCmd) {
+ return fmt.Errorf("already installed in %s", b.rc)
+ }
+ return appendToFile(b.rc, completeCmd)
+}
+
+func (b bash) Uninstall(cmd, bin string) error {
+ completeCmd := b.cmd(cmd, bin)
+ if !lineInFile(b.rc, completeCmd) {
+ return fmt.Errorf("does not installed in %s", b.rc)
+ }
+
+ return removeFromFile(b.rc, completeCmd)
+}
+
+func (bash) cmd(cmd, bin string) string {
+ return fmt.Sprintf("complete -C %s %s", bin, cmd)
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/install.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/install.go
new file mode 100644
index 00000000..fb44b2b7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/install.go
@@ -0,0 +1,92 @@
+package install
+
+import (
+ "errors"
+ "os"
+ "os/user"
+ "path/filepath"
+
+ "github.com/hashicorp/go-multierror"
+)
+
+type installer interface {
+ Install(cmd, bin string) error
+ Uninstall(cmd, bin string) error
+}
+
+// Install installs shell completion for the given command.
+// cmd is the command name.
+func Install(cmd string) error {
+ is := installers()
+ if len(is) == 0 {
+ return errors.New("Did not found any shells to install")
+ }
+ bin, err := getBinaryPath()
+ if err != nil {
+ return err
+ }
+
+ for _, i := range is {
+ errI := i.Install(cmd, bin)
+ if errI != nil {
+ err = multierror.Append(err, errI)
+ }
+ }
+
+ return err
+}
+
+// Uninstall removes shell completion for the given command.
+// cmd is the command name.
+func Uninstall(cmd string) error {
+ is := installers()
+ if len(is) == 0 {
+ return errors.New("Did not found any shells to uninstall")
+ }
+ bin, err := getBinaryPath()
+ if err != nil {
+ return err
+ }
+
+ for _, i := range is {
+ errI := i.Uninstall(cmd, bin)
+ if errI != nil {
+ err = multierror.Append(err, errI)
+ }
+ }
+
+ return err
+}
+
+func installers() (i []installer) {
+ for _, rc := range [...]string{".bashrc", ".bash_profile"} {
+ if f := rcFile(rc); f != "" {
+ i = append(i, bash{f})
+ break
+ }
+ }
+ if f := rcFile(".zshrc"); f != "" {
+ i = append(i, zsh{f})
+ }
+ return
+}
+
+func getBinaryPath() (string, error) {
+ bin, err := os.Executable()
+ if err != nil {
+ return "", err
+ }
+ return filepath.Abs(bin)
+}
+
+func rcFile(name string) string {
+ u, err := user.Current()
+ if err != nil {
+ return ""
+ }
+ path := filepath.Join(u.HomeDir, name)
+ if _, err := os.Stat(path); err != nil {
+ return ""
+ }
+ return path
+}
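A minimal usage sketch, assuming the exported Install function above (mycmd is a placeholder for the completed command's name):

```go
package main

import (
	"log"

	"github.com/posener/complete/cmd/install"
)

func main() {
	// Appends a `complete -C <this-binary> mycmd` line to each detected
	// shell rc file (.bashrc or .bash_profile, and .zshrc).
	if err := install.Install("mycmd"); err != nil {
		log.Fatal(err)
	}
}
```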
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/utils.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/utils.go
new file mode 100644
index 00000000..2c8b44ca
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/utils.go
@@ -0,0 +1,118 @@
+package install
+
+import (
+ "bufio"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+)
+
+func lineInFile(name string, lookFor string) bool {
+ f, err := os.Open(name)
+ if err != nil {
+ return false
+ }
+ defer f.Close()
+ r := bufio.NewReader(f)
+ prefix := []byte{}
+ for {
+ line, isPrefix, err := r.ReadLine()
+ if err == io.EOF {
+ return false
+ }
+ if err != nil {
+ return false
+ }
+ if isPrefix {
+ prefix = append(prefix, line...)
+ continue
+ }
+ line = append(prefix, line...)
+ if string(line) == lookFor {
+ return true
+ }
+ prefix = prefix[:0]
+ }
+}
+
+func appendToFile(name string, content string) error {
+ f, err := os.OpenFile(name, os.O_RDWR|os.O_APPEND, 0)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ _, err = f.WriteString(fmt.Sprintf("\n%s\n", content))
+ return err
+}
+
+func removeFromFile(name string, content string) error {
+ backup := name + ".bck"
+ err := copyFile(name, backup)
+ if err != nil {
+ return err
+ }
+ temp, err := removeContentToTempFile(name, content)
+ if err != nil {
+ return err
+ }
+
+ err = copyFile(temp, name)
+ if err != nil {
+ return err
+ }
+
+ return os.Remove(backup)
+}
+
+func removeContentToTempFile(name, content string) (string, error) {
+ rf, err := os.Open(name)
+ if err != nil {
+ return "", err
+ }
+ defer rf.Close()
+ wf, err := ioutil.TempFile("/tmp", "complete-")
+ if err != nil {
+ return "", err
+ }
+ defer wf.Close()
+
+ r := bufio.NewReader(rf)
+ prefix := []byte{}
+ for {
+ line, isPrefix, err := r.ReadLine()
+ if err == io.EOF {
+ break
+ }
+ if err != nil {
+ return "", err
+ }
+ if isPrefix {
+ prefix = append(prefix, line...)
+ continue
+ }
+ line = append(prefix, line...)
+ str := string(line)
+ if str == content {
+ continue
+ }
+ wf.WriteString(str + "\n")
+ prefix = prefix[:0]
+ }
+ return wf.Name(), nil
+}
+
+func copyFile(src string, dst string) error {
+ in, err := os.Open(src)
+ if err != nil {
+ return err
+ }
+ defer in.Close()
+ out, err := os.Create(dst)
+ if err != nil {
+ return err
+ }
+ defer out.Close()
+ _, err = io.Copy(out, in)
+ return err
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/zsh.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/zsh.go
new file mode 100644
index 00000000..9ece7799
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/cmd/install/zsh.go
@@ -0,0 +1,39 @@
+package install
+
+import "fmt"
+
+// (un)install in zsh
+// basically adds/removes a line to/from .zshrc:
+//
+// autoload -U +X bashcompinit && bashcompinit
+// complete -C </path/to/completion/command> <command>
+type zsh struct {
+ rc string
+}
+
+func (z zsh) Install(cmd, bin string) error {
+ completeCmd := z.cmd(cmd, bin)
+ if lineInFile(z.rc, completeCmd) {
+ return fmt.Errorf("already installed in %s", z.rc)
+ }
+
+ bashCompInit := "autoload -U +X bashcompinit && bashcompinit"
+ if !lineInFile(z.rc, bashCompInit) {
+ completeCmd = bashCompInit + "\n" + completeCmd
+ }
+
+ return appendToFile(z.rc, completeCmd)
+}
+
+func (z zsh) Uninstall(cmd, bin string) error {
+ completeCmd := z.cmd(cmd, bin)
+ if !lineInFile(z.rc, completeCmd) {
+ return fmt.Errorf("does not installed in %s", z.rc)
+ }
+
+ return removeFromFile(z.rc, completeCmd)
+}
+
+func (zsh) cmd(cmd, bin string) string {
+ return fmt.Sprintf("complete -C %s %s", bin, cmd)
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/command.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/command.go
new file mode 100644
index 00000000..eeeb9e02
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/command.go
@@ -0,0 +1,106 @@
+package complete
+
+import "github.com/posener/complete/match"
+
+// Command represents a command line.
+// It holds the data that enables auto completion of the command line.
+// A Command can also be a sub command.
+type Command struct {
+ // Sub is a map of the sub commands of the current command.
+ // The key is the sub command name, and the value is its
+ // descriptive Command struct.
+ Sub Commands
+
+ // Flags is a map of flags that the command accepts.
+ // The key is the flag name, and the value is its predictor.
+ Flags Flags
+
+ // GlobalFlags is a map of flags that the command accepts.
+ // Global flags can also appear after a sub command.
+ GlobalFlags Flags
+
+ // Args are extra arguments that the command accepts: those that are
+ // given without any preceding flag.
+ Args Predictor
+}
+
+// Predict returns all possible predictions for args according to the command struct
+func (c *Command) Predict(a Args) (predictions []string) {
+ predictions, _ = c.predict(a)
+ return
+}
+
+// Commands is the type of the Sub member; it maps a command name to a command struct.
+type Commands map[string]Command
+
+// Predict returns completions of sub command names according to the command line arguments
+func (c Commands) Predict(a Args) (prediction []string) {
+ for sub := range c {
+ if match.Prefix(sub, a.Last) {
+ prediction = append(prediction, sub)
+ }
+ }
+ return
+}
+
+// Flags is the type of the Flags member; it maps a flag name to the flag's predictor.
+type Flags map[string]Predictor
+
+// Predict returns completions of flag names according to the command line arguments
+func (f Flags) Predict(a Args) (prediction []string) {
+ for flag := range f {
+ if match.Prefix(flag, a.Last) {
+ prediction = append(prediction, flag)
+ }
+ }
+ return
+}
+
+// predict returns the options that can complete the given arguments.
+// only is set to true if no more options are allowed to be returned:
+// this happens for a special flag that has specific completion arguments,
+// after which no other flags or sub commands can come.
+func (c *Command) predict(a Args) (options []string, only bool) {
+
+ // search sub commands for predictions first
+ subCommandFound := false
+ for i, arg := range a.Completed {
+ if cmd, ok := c.Sub[arg]; ok {
+ subCommandFound = true
+
+ // recursive call for sub command
+ options, only = cmd.predict(a.from(i))
+ if only {
+ return
+ }
+ }
+ }
+
+ // if last completed word is a global flag that we need to complete
+ if predictor, ok := c.GlobalFlags[a.LastCompleted]; ok && predictor != nil {
+ Log("Predicting according to global flag %s", a.LastCompleted)
+ return predictor.Predict(a), true
+ }
+
+ options = append(options, c.GlobalFlags.Predict(a)...)
+
+ // if a sub command was entered, we won't add the parent command
+ // completions and we return here.
+ if subCommandFound {
+ return
+ }
+
+ // if last completed word is a command flag that we need to complete
+ if predictor, ok := c.Flags[a.LastCompleted]; ok && predictor != nil {
+ Log("Predicting according to flag %s", a.LastCompleted)
+ return predictor.Predict(a), true
+ }
+
+ options = append(options, c.Sub.Predict(a)...)
+ options = append(options, c.Flags.Predict(a)...)
+ if c.Args != nil {
+ options = append(options, c.Args.Predict(a)...)
+ }
+
+ return
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/complete.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/complete.go
new file mode 100644
index 00000000..1df66170
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/complete.go
@@ -0,0 +1,86 @@
+// Package complete provides a tool for writing bash completion scripts in Go.
+//
+// Writing bash completion scripts is hard work. This package provides an easy way
+// to create bash completion scripts for any command, and also an easy way to
+// install/uninstall the completion of the command.
+package complete
+
+import (
+ "flag"
+ "fmt"
+ "os"
+ "strings"
+
+ "github.com/posener/complete/cmd"
+)
+
+const (
+ envComplete = "COMP_LINE"
+ envDebug = "COMP_DEBUG"
+)
+
+// Complete defines completion for a command, together with CLI options
+type Complete struct {
+ Command Command
+ cmd.CLI
+}
+
+// New creates a new complete command.
+// name is the name of the command we want to auto complete.
+// IMPORTANT: it must be exactly the same name - if the auto complete
+// completes the 'go' command, name must be equal to "go".
+// command is the struct describing the command's completion.
+func New(name string, command Command) *Complete {
+ return &Complete{
+ Command: command,
+ CLI: cmd.CLI{Name: name},
+ }
+}
+
+// Run runs the completion and adds installation flags beforehand.
+// The flags are added to the default flag.CommandLine FlagSet.
+func (c *Complete) Run() bool {
+ c.AddFlags(nil)
+ flag.Parse()
+ return c.Complete()
+}
+
+// Complete completes a command from the completion line given in the
+// COMP_LINE environment variable, and prints out the completion options.
+// It returns true if the completion ran or if the CLI matched
+// any of the given flags, and false otherwise.
+// For installation it assumes that flags were added and parsed before
+// it was called.
+func (c *Complete) Complete() bool {
+ line, ok := getLine()
+ if !ok {
+ // make sure flags parsed,
+ // in case they were not added in the main program
+ return c.CLI.Run()
+ }
+ Log("Completing line: %s", line)
+
+ a := newArgs(line)
+
+ options := c.Command.Predict(a)
+
+ Log("Completion: %s", options)
+ output(options)
+ return true
+}
+
+func getLine() ([]string, bool) {
+ line := os.Getenv(envComplete)
+ if line == "" {
+ return nil, false
+ }
+ return strings.Split(line, " "), true
+}
+
+func output(options []string) {
+ Log("")
+ // stdout of program defines the complete options
+ for _, option := range options {
+ fmt.Println(option)
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/log.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/log.go
new file mode 100644
index 00000000..797a80ce
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/log.go
@@ -0,0 +1,23 @@
+package complete
+
+import (
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+)
+
+// Log is used for debugging purposes.
+// Since complete runs on tab completion, it is useful to have logs
+// written to stderr when writing your own completer.
+// To enable logging, set the COMP_DEBUG environment variable and
+// use complete.Log in the completer program.
+var Log = getLogger()
+
+func getLogger() func(format string, args ...interface{}) {
+ var logfile io.Writer = ioutil.Discard
+ if os.Getenv(envDebug) != "" {
+ logfile = os.Stderr
+ }
+ return log.New(logfile, "complete ", log.Flags()).Printf
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/file.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/file.go
new file mode 100644
index 00000000..051171e8
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/file.go
@@ -0,0 +1,19 @@
+package match
+
+import "strings"
+
+// File returns true if prefix can match the file
+func File(file, prefix string) bool {
+ // special case for current directory completion
+ if file == "./" && (prefix == "." || prefix == "") {
+ return true
+ }
+ if prefix == "." && strings.HasPrefix(file, ".") {
+ return true
+ }
+
+ file = strings.TrimPrefix(file, "./")
+ prefix = strings.TrimPrefix(prefix, "./")
+
+ return strings.HasPrefix(file, prefix)
+}
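A few illustrative calls, showing the special cases handled above:

```go
package main

import (
	"fmt"

	"github.com/posener/complete/match"
)

func main() {
	fmt.Println(match.File("./main.go", "ma")) // true: the "./" prefix is ignored
	fmt.Println(match.File("./", "."))         // true: special case for the current directory
	fmt.Println(match.File(".gitignore", ".")) // true: "." matches hidden files
	fmt.Println(match.File("main.go", "x"))    // false: no common prefix
}
```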
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/match.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/match.go
new file mode 100644
index 00000000..812fcac9
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/match.go
@@ -0,0 +1,6 @@
+package match
+
+// Match matches two strings.
+// It is used for comparing a term to the last typed word (the prefix),
+// to see if the term is a possible auto complete option.
+type Match func(term, prefix string) bool
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/prefix.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/prefix.go
new file mode 100644
index 00000000..9a01ba63
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/match/prefix.go
@@ -0,0 +1,9 @@
+package match
+
+import "strings"
+
+// Prefix is a simple matcher: it returns true if the typed prefix
+// is a prefix of the long term.
+func Prefix(long, prefix string) bool {
+ return strings.HasPrefix(long, prefix)
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/metalinter.json b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/metalinter.json
new file mode 100644
index 00000000..799c1d03
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/metalinter.json
@@ -0,0 +1,21 @@
+{
+ "Vendor": true,
+ "DisableAll": true,
+ "Enable": [
+ "gofmt",
+ "goimports",
+ "interfacer",
+ "goconst",
+ "misspell",
+ "unconvert",
+ "gosimple",
+ "golint",
+ "structcheck",
+ "deadcode",
+ "vet"
+ ],
+ "Exclude": [
+ "initTests is unused"
+ ],
+ "Deadline": "2m"
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict.go
new file mode 100644
index 00000000..82070632
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict.go
@@ -0,0 +1,41 @@
+package complete
+
+// Predictor implements a predict method, in which given
+// command line arguments returns a list of options it predicts.
+type Predictor interface {
+ Predict(Args) []string
+}
+
+// PredictOr unions multiple predictors, so that the resulting predictor
+// returns the union of their predictions
+func PredictOr(predictors ...Predictor) Predictor {
+ return PredictFunc(func(a Args) (prediction []string) {
+ for _, p := range predictors {
+ if p == nil {
+ continue
+ }
+ prediction = append(prediction, p.Predict(a)...)
+ }
+ return
+ })
+}
+
+// PredictFunc determines what terms can follow a command or a flag.
+// It is used for auto completion: given the last word already in the
+// command line, it returns the words that can complete it.
+type PredictFunc func(Args) []string
+
+// Predict invokes the predict function and implements the Predictor interface
+func (p PredictFunc) Predict(a Args) []string {
+ if p == nil {
+ return nil
+ }
+ return p(a)
+}
+
+// PredictNothing does not expect anything after.
+var PredictNothing Predictor
+
+// PredictAnything expects something, but nothing particular, such as a number
+// or arbitrary name.
+var PredictAnything = PredictFunc(func(Args) []string { return nil })
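A sketch of combining predictors with PredictOr; the flag name and file pattern are illustrative. Nil predictors, such as PredictNothing, are skipped.

```go
package main

import "github.com/posener/complete"

func main() {
	// Suggest either a fixed set of environment names or any *.json file.
	p := complete.PredictOr(
		complete.PredictSet("dev", "staging", "prod"),
		complete.PredictFiles("*.json"),
		complete.PredictNothing, // nil; contributes nothing
	)

	// Typically assigned to a flag of a Command, e.g.:
	_ = complete.Command{Flags: complete.Flags{"-env": p}}
}
```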
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_files.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_files.go
new file mode 100644
index 00000000..c8adf7e8
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_files.go
@@ -0,0 +1,108 @@
+package complete
+
+import (
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/posener/complete/match"
+)
+
+// PredictDirs will search for directories in the path that has started to
+// be typed. If no path has been started, it will complete to directories
+// in the current working directory.
+func PredictDirs(pattern string) Predictor {
+ return files(pattern, false)
+}
+
+// PredictFiles will search for files matching the given pattern in the path
+// that has started to be typed. If no path has been started, it will complete
+// to files that match the pattern in the current working directory.
+// To match any file, use "*" as pattern. To match go files use "*.go", and so on.
+func PredictFiles(pattern string) Predictor {
+ return files(pattern, true)
+}
+
+func files(pattern string, allowFiles bool) PredictFunc {
+
+ // search for files according to the arguments.
+ // if only one directory matches, search recursively into
+ // this directory to give more results.
+ return func(a Args) (prediction []string) {
+ prediction = predictFiles(a, pattern, allowFiles)
+
+ // if the number of predictions is not 1, we either have many results or
+ // no results, so we return them as they are.
+ if len(prediction) != 1 {
+ return
+ }
+
+ // only try deeper, if the one item is a directory
+ if stat, err := os.Stat(prediction[0]); err != nil || !stat.IsDir() {
+ return
+ }
+
+ a.Last = prediction[0]
+ return predictFiles(a, pattern, allowFiles)
+ }
+}
+
+func predictFiles(a Args, pattern string, allowFiles bool) []string {
+ if strings.HasSuffix(a.Last, "/..") {
+ return nil
+ }
+
+ dir := a.Directory()
+ files := listFiles(dir, pattern, allowFiles)
+
+ // add dir if match
+ files = append(files, dir)
+
+ return PredictFilesSet(files).Predict(a)
+}
+
+// PredictFilesSet predicts, according to file-matching rules, from a given set of file names
+func PredictFilesSet(files []string) PredictFunc {
+ return func(a Args) (prediction []string) {
+ // add all matching files to prediction
+ for _, f := range files {
+ f = fixPathForm(a.Last, f)
+
+ // test matching of file to the argument
+ if match.File(f, a.Last) {
+ prediction = append(prediction, f)
+ }
+ }
+ return
+ }
+}
+
+func listFiles(dir, pattern string, allowFiles bool) []string {
+ // set of all file names
+ m := map[string]bool{}
+
+ // list files
+ if files, err := filepath.Glob(filepath.Join(dir, pattern)); err == nil {
+ for _, f := range files {
+ if stat, err := os.Stat(f); err != nil || stat.IsDir() || allowFiles {
+ m[f] = true
+ }
+ }
+ }
+
+ // list directories
+ if dirs, err := ioutil.ReadDir(dir); err == nil {
+ for _, d := range dirs {
+ if d.IsDir() {
+ m[filepath.Join(dir, d.Name())] = true
+ }
+ }
+ }
+
+ list := make([]string, 0, len(m))
+ for k := range m {
+ list = append(list, k)
+ }
+ return list
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_set.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_set.go
new file mode 100644
index 00000000..8fc59d71
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/predict_set.go
@@ -0,0 +1,19 @@
+package complete
+
+import "github.com/posener/complete/match"
+
+// PredictSet expects specific set of terms, given in the options argument.
+func PredictSet(options ...string) Predictor {
+ return predictSet(options)
+}
+
+type predictSet []string
+
+func (p predictSet) Predict(a Args) (prediction []string) {
+ for _, m := range p {
+ if match.Prefix(m, a.Last) {
+ prediction = append(prediction, m)
+ }
+ }
+ return
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/readme.md b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/readme.md
new file mode 100644
index 00000000..74077e35
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/readme.md
@@ -0,0 +1,116 @@
+# complete
+
+[![Build Status](https://travis-ci.org/posener/complete.svg?branch=master)](https://travis-ci.org/posener/complete)
+[![codecov](https://codecov.io/gh/posener/complete/branch/master/graph/badge.svg)](https://codecov.io/gh/posener/complete)
+[![GoDoc](https://godoc.org/github.com/posener/complete?status.svg)](http://godoc.org/github.com/posener/complete)
+[![Go Report Card](https://goreportcard.com/badge/github.com/posener/complete)](https://goreportcard.com/report/github.com/posener/complete)
+
+A tool for writing bash completion scripts in Go.
+
+Writing bash completion scripts is hard work. This package provides an easy way
+to create bash completion scripts for any command, and also an easy way to install/uninstall
+the completion of the command.
+
+## go command bash completion
+
+In [gocomplete](./gocomplete) there is an example for bash completion for the `go` command line.
+
+This is an example that uses the `complete` package on the `go` command - the `complete` package
+can also be used to implement any completions, see [Usage](#usage).
+
+### Install
+
+1. Type in your shell:
+```
+go get -u github.com/posener/complete/gocomplete
+gocomplete -install
+```
+
+2. Restart your shell
+
+Uninstall by `gocomplete -uninstall`
+
+### Features
+
+- Complete `go` command, including sub commands and all flags.
+- Complete packages names or `.go` files when necessary.
+- Complete test names after `-run` flag.
+
+## complete package
+
+Supported shells:
+
+- [x] bash
+- [x] zsh
+
+### Usage
+
+Assuming you have a program called `run` and you want bash completion
+for it, meaning that if you type `run`, then a space, then press the `Tab` key,
+the shell will suggest relevant completion options.
+
+In that case, we will create a Go program called `runcomplete`, with a
+`func main()`, that provides the completion for the `run` program. Once
+`runcomplete` is compiled to a binary, we can run `runcomplete -install`,
+which will add all the bash completion options for `run` to our shell.
+
+So here it is:
+
+```go
+import "github.com/posener/complete"
+
+func main() {
+
+ // create a Command object, that represents the command we want
+ // to complete.
+ run := complete.Command{
+
+ // Sub defines a list of sub commands of the program,
+ // this is recursive, since every command is of type command also.
+ Sub: complete.Commands{
+
+ // add a build sub command
+ "build": complete.Command {
+
+ // define flags of the build sub command
+ Flags: complete.Flags{
+ // build sub command has a flag '-cpus', which
+ // expects the number of cpus after it. In that case,
+ // anything could complete this flag.
+ "-cpus": complete.PredictAnything,
+ },
+ },
+ },
+
+ // define flags of the 'run' main command
+ Flags: complete.Flags{
+ // a flag -o, which expects a file ending with .out after
+ // it. The tab completion will auto complete files matching
+ // the given pattern.
+ "-o": complete.PredictFiles("*.out"),
+ },
+
+ // define global flags of the 'run' main command
+ // those will show up also when a sub command was entered in the
+ // command line
+ GlobalFlags: complete.Flags{
+
+ // a flag '-h' which does not expect anything after it
+ "-h": complete.PredictNothing,
+ },
+ }
+
+ // run the command completion, as part of the main() function.
+ // this triggers the autocompletion when needed.
+ // the name must exactly match the binary that we want to complete.
+ complete.New("run", run).Run()
+}
+```
+
+### Self completing program
+
+In case the program that we want to complete is written in Go, we
+can make it self completing.
+
+Here is an [example](./example/self/main.go)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/test.sh b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/test.sh
new file mode 100755
index 00000000..56bfcf15
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/test.sh
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+
+set -e
+echo "" > coverage.txt
+
+for d in $(go list ./... | grep -v vendor); do
+ go test -v -race -coverprofile=profile.out -covermode=atomic $d
+ if [ -f profile.out ]; then
+ cat profile.out >> coverage.txt
+ rm profile.out
+ fi
+done \ No newline at end of file
diff --git a/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/utils.go b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/utils.go
new file mode 100644
index 00000000..58b8b792
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/github.com/posener/complete/utils.go
@@ -0,0 +1,46 @@
+package complete
+
+import (
+ "os"
+ "path/filepath"
+ "strings"
+)
+
+// fixPathForm changes a file name to match the form of the typed path: relative or absolute
+func fixPathForm(last string, file string) string {
+ // get working directory for the relative name
+ workDir, err := os.Getwd()
+ if err != nil {
+ return file
+ }
+
+ abs, err := filepath.Abs(file)
+ if err != nil {
+ return file
+ }
+
+ // if last is absolute, return path as absolute
+ if filepath.IsAbs(last) {
+ return fixDirPath(abs)
+ }
+
+ rel, err := filepath.Rel(workDir, abs)
+ if err != nil {
+ return file
+ }
+
+ // fix ./ prefix of path
+ if rel != "." && strings.HasPrefix(last, ".") {
+ rel = "./" + rel
+ }
+
+ return fixDirPath(rel)
+}
+
+func fixDirPath(path string) string {
+ info, err := os.Stat(path)
+ if err == nil && info.IsDir() && !strings.HasSuffix(path, "/") {
+ path += "/"
+ }
+ return path
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/ciphers.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/ciphers.go
new file mode 100644
index 00000000..698860b7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/ciphers.go
@@ -0,0 +1,641 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package http2
+
+// A list of the possible cipher suite ids. Taken from
+// http://www.iana.org/assignments/tls-parameters/tls-parameters.txt
+
+const (
+ cipher_TLS_NULL_WITH_NULL_NULL uint16 = 0x0000
+ cipher_TLS_RSA_WITH_NULL_MD5 uint16 = 0x0001
+ cipher_TLS_RSA_WITH_NULL_SHA uint16 = 0x0002
+ cipher_TLS_RSA_EXPORT_WITH_RC4_40_MD5 uint16 = 0x0003
+ cipher_TLS_RSA_WITH_RC4_128_MD5 uint16 = 0x0004
+ cipher_TLS_RSA_WITH_RC4_128_SHA uint16 = 0x0005
+ cipher_TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 uint16 = 0x0006
+ cipher_TLS_RSA_WITH_IDEA_CBC_SHA uint16 = 0x0007
+ cipher_TLS_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0008
+ cipher_TLS_RSA_WITH_DES_CBC_SHA uint16 = 0x0009
+ cipher_TLS_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x000A
+ cipher_TLS_DH_DSS_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x000B
+ cipher_TLS_DH_DSS_WITH_DES_CBC_SHA uint16 = 0x000C
+ cipher_TLS_DH_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0x000D
+ cipher_TLS_DH_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x000E
+ cipher_TLS_DH_RSA_WITH_DES_CBC_SHA uint16 = 0x000F
+ cipher_TLS_DH_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x0010
+ cipher_TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0011
+ cipher_TLS_DHE_DSS_WITH_DES_CBC_SHA uint16 = 0x0012
+ cipher_TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0x0013
+ cipher_TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0014
+ cipher_TLS_DHE_RSA_WITH_DES_CBC_SHA uint16 = 0x0015
+ cipher_TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0x0016
+ cipher_TLS_DH_anon_EXPORT_WITH_RC4_40_MD5 uint16 = 0x0017
+ cipher_TLS_DH_anon_WITH_RC4_128_MD5 uint16 = 0x0018
+ cipher_TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA uint16 = 0x0019
+ cipher_TLS_DH_anon_WITH_DES_CBC_SHA uint16 = 0x001A
+ cipher_TLS_DH_anon_WITH_3DES_EDE_CBC_SHA uint16 = 0x001B
+ // Reserved uint16 = 0x001C-1D
+ cipher_TLS_KRB5_WITH_DES_CBC_SHA uint16 = 0x001E
+ cipher_TLS_KRB5_WITH_3DES_EDE_CBC_SHA uint16 = 0x001F
+ cipher_TLS_KRB5_WITH_RC4_128_SHA uint16 = 0x0020
+ cipher_TLS_KRB5_WITH_IDEA_CBC_SHA uint16 = 0x0021
+ cipher_TLS_KRB5_WITH_DES_CBC_MD5 uint16 = 0x0022
+ cipher_TLS_KRB5_WITH_3DES_EDE_CBC_MD5 uint16 = 0x0023
+ cipher_TLS_KRB5_WITH_RC4_128_MD5 uint16 = 0x0024
+ cipher_TLS_KRB5_WITH_IDEA_CBC_MD5 uint16 = 0x0025
+ cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA uint16 = 0x0026
+ cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_SHA uint16 = 0x0027
+ cipher_TLS_KRB5_EXPORT_WITH_RC4_40_SHA uint16 = 0x0028
+ cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5 uint16 = 0x0029
+ cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5 uint16 = 0x002A
+ cipher_TLS_KRB5_EXPORT_WITH_RC4_40_MD5 uint16 = 0x002B
+ cipher_TLS_PSK_WITH_NULL_SHA uint16 = 0x002C
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA uint16 = 0x002D
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA uint16 = 0x002E
+ cipher_TLS_RSA_WITH_AES_128_CBC_SHA uint16 = 0x002F
+ cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA uint16 = 0x0030
+ cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA uint16 = 0x0031
+ cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA uint16 = 0x0032
+ cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA uint16 = 0x0033
+ cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA uint16 = 0x0034
+ cipher_TLS_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0035
+ cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA uint16 = 0x0036
+ cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0037
+ cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA uint16 = 0x0038
+ cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA uint16 = 0x0039
+ cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA uint16 = 0x003A
+ cipher_TLS_RSA_WITH_NULL_SHA256 uint16 = 0x003B
+ cipher_TLS_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x003C
+ cipher_TLS_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x003D
+ cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA256 uint16 = 0x003E
+ cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x003F
+ cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 uint16 = 0x0040
+ cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0041
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0042
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0043
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0044
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0045
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA uint16 = 0x0046
+ // Reserved uint16 = 0x0047-4F
+ // Reserved uint16 = 0x0050-58
+ // Reserved uint16 = 0x0059-5C
+ // Unassigned uint16 = 0x005D-5F
+ // Reserved uint16 = 0x0060-66
+ cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0x0067
+ cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA256 uint16 = 0x0068
+ cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x0069
+ cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 uint16 = 0x006A
+ cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 uint16 = 0x006B
+ cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA256 uint16 = 0x006C
+ cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA256 uint16 = 0x006D
+ // Unassigned uint16 = 0x006E-83
+ cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0084
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0085
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0086
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0087
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0088
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA uint16 = 0x0089
+ cipher_TLS_PSK_WITH_RC4_128_SHA uint16 = 0x008A
+ cipher_TLS_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x008B
+ cipher_TLS_PSK_WITH_AES_128_CBC_SHA uint16 = 0x008C
+ cipher_TLS_PSK_WITH_AES_256_CBC_SHA uint16 = 0x008D
+ cipher_TLS_DHE_PSK_WITH_RC4_128_SHA uint16 = 0x008E
+ cipher_TLS_DHE_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x008F
+ cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA uint16 = 0x0090
+ cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA uint16 = 0x0091
+ cipher_TLS_RSA_PSK_WITH_RC4_128_SHA uint16 = 0x0092
+ cipher_TLS_RSA_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0x0093
+ cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA uint16 = 0x0094
+ cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA uint16 = 0x0095
+ cipher_TLS_RSA_WITH_SEED_CBC_SHA uint16 = 0x0096
+ cipher_TLS_DH_DSS_WITH_SEED_CBC_SHA uint16 = 0x0097
+ cipher_TLS_DH_RSA_WITH_SEED_CBC_SHA uint16 = 0x0098
+ cipher_TLS_DHE_DSS_WITH_SEED_CBC_SHA uint16 = 0x0099
+ cipher_TLS_DHE_RSA_WITH_SEED_CBC_SHA uint16 = 0x009A
+ cipher_TLS_DH_anon_WITH_SEED_CBC_SHA uint16 = 0x009B
+ cipher_TLS_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009C
+ cipher_TLS_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009D
+ cipher_TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x009E
+ cipher_TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x009F
+ cipher_TLS_DH_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0x00A0
+ cipher_TLS_DH_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0x00A1
+ cipher_TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 uint16 = 0x00A2
+ cipher_TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 uint16 = 0x00A3
+ cipher_TLS_DH_DSS_WITH_AES_128_GCM_SHA256 uint16 = 0x00A4
+ cipher_TLS_DH_DSS_WITH_AES_256_GCM_SHA384 uint16 = 0x00A5
+ cipher_TLS_DH_anon_WITH_AES_128_GCM_SHA256 uint16 = 0x00A6
+ cipher_TLS_DH_anon_WITH_AES_256_GCM_SHA384 uint16 = 0x00A7
+ cipher_TLS_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00A8
+ cipher_TLS_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00A9
+ cipher_TLS_DHE_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00AA
+ cipher_TLS_DHE_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00AB
+ cipher_TLS_RSA_PSK_WITH_AES_128_GCM_SHA256 uint16 = 0x00AC
+ cipher_TLS_RSA_PSK_WITH_AES_256_GCM_SHA384 uint16 = 0x00AD
+ cipher_TLS_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00AE
+ cipher_TLS_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00AF
+ cipher_TLS_PSK_WITH_NULL_SHA256 uint16 = 0x00B0
+ cipher_TLS_PSK_WITH_NULL_SHA384 uint16 = 0x00B1
+ cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00B2
+ cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00B3
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA256 uint16 = 0x00B4
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA384 uint16 = 0x00B5
+ cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0x00B6
+ cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0x00B7
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA256 uint16 = 0x00B8
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA384 uint16 = 0x00B9
+ cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BA
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BB
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BC
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BD
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BE
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0x00BF
+ cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C0
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C1
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C2
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C3
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C4
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256 uint16 = 0x00C5
+ // Unassigned uint16 = 0x00C6-FE
+ cipher_TLS_EMPTY_RENEGOTIATION_INFO_SCSV uint16 = 0x00FF
+ // Unassigned uint16 = 0x01-55,*
+ cipher_TLS_FALLBACK_SCSV uint16 = 0x5600
+ // Unassigned uint16 = 0x5601 - 0xC000
+ cipher_TLS_ECDH_ECDSA_WITH_NULL_SHA uint16 = 0xC001
+ cipher_TLS_ECDH_ECDSA_WITH_RC4_128_SHA uint16 = 0xC002
+ cipher_TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC003
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA uint16 = 0xC004
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA uint16 = 0xC005
+ cipher_TLS_ECDHE_ECDSA_WITH_NULL_SHA uint16 = 0xC006
+ cipher_TLS_ECDHE_ECDSA_WITH_RC4_128_SHA uint16 = 0xC007
+ cipher_TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC008
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA uint16 = 0xC009
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA uint16 = 0xC00A
+ cipher_TLS_ECDH_RSA_WITH_NULL_SHA uint16 = 0xC00B
+ cipher_TLS_ECDH_RSA_WITH_RC4_128_SHA uint16 = 0xC00C
+ cipher_TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC00D
+ cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC00E
+ cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC00F
+ cipher_TLS_ECDHE_RSA_WITH_NULL_SHA uint16 = 0xC010
+ cipher_TLS_ECDHE_RSA_WITH_RC4_128_SHA uint16 = 0xC011
+ cipher_TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC012
+ cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC013
+ cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC014
+ cipher_TLS_ECDH_anon_WITH_NULL_SHA uint16 = 0xC015
+ cipher_TLS_ECDH_anon_WITH_RC4_128_SHA uint16 = 0xC016
+ cipher_TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA uint16 = 0xC017
+ cipher_TLS_ECDH_anon_WITH_AES_128_CBC_SHA uint16 = 0xC018
+ cipher_TLS_ECDH_anon_WITH_AES_256_CBC_SHA uint16 = 0xC019
+ cipher_TLS_SRP_SHA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01A
+ cipher_TLS_SRP_SHA_RSA_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01B
+ cipher_TLS_SRP_SHA_DSS_WITH_3DES_EDE_CBC_SHA uint16 = 0xC01C
+ cipher_TLS_SRP_SHA_WITH_AES_128_CBC_SHA uint16 = 0xC01D
+ cipher_TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA uint16 = 0xC01E
+ cipher_TLS_SRP_SHA_DSS_WITH_AES_128_CBC_SHA uint16 = 0xC01F
+ cipher_TLS_SRP_SHA_WITH_AES_256_CBC_SHA uint16 = 0xC020
+ cipher_TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA uint16 = 0xC021
+ cipher_TLS_SRP_SHA_DSS_WITH_AES_256_CBC_SHA uint16 = 0xC022
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC023
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC024
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC025
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC026
+ cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC027
+ cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC028
+ cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256 uint16 = 0xC029
+ cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 uint16 = 0xC02A
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02B
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC02C
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02D
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC02E
+ cipher_TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC02F
+ cipher_TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC030
+ cipher_TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 uint16 = 0xC031
+ cipher_TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 uint16 = 0xC032
+ cipher_TLS_ECDHE_PSK_WITH_RC4_128_SHA uint16 = 0xC033
+ cipher_TLS_ECDHE_PSK_WITH_3DES_EDE_CBC_SHA uint16 = 0xC034
+ cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA uint16 = 0xC035
+ cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA uint16 = 0xC036
+ cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256 uint16 = 0xC037
+ cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384 uint16 = 0xC038
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA uint16 = 0xC039
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA256 uint16 = 0xC03A
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA384 uint16 = 0xC03B
+ cipher_TLS_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC03C
+ cipher_TLS_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC03D
+ cipher_TLS_DH_DSS_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC03E
+ cipher_TLS_DH_DSS_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC03F
+ cipher_TLS_DH_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC040
+ cipher_TLS_DH_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC041
+ cipher_TLS_DHE_DSS_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC042
+ cipher_TLS_DHE_DSS_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC043
+ cipher_TLS_DHE_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC044
+ cipher_TLS_DHE_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC045
+ cipher_TLS_DH_anon_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC046
+ cipher_TLS_DH_anon_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC047
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC048
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC049
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04A
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04B
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04C
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04D
+ cipher_TLS_ECDH_RSA_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC04E
+ cipher_TLS_ECDH_RSA_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC04F
+ cipher_TLS_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC050
+ cipher_TLS_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC051
+ cipher_TLS_DHE_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC052
+ cipher_TLS_DHE_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC053
+ cipher_TLS_DH_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC054
+ cipher_TLS_DH_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC055
+ cipher_TLS_DHE_DSS_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC056
+ cipher_TLS_DHE_DSS_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC057
+ cipher_TLS_DH_DSS_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC058
+ cipher_TLS_DH_DSS_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC059
+ cipher_TLS_DH_anon_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05A
+ cipher_TLS_DH_anon_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05B
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05C
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05D
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC05E
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC05F
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC060
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC061
+ cipher_TLS_ECDH_RSA_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC062
+ cipher_TLS_ECDH_RSA_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC063
+ cipher_TLS_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC064
+ cipher_TLS_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC065
+ cipher_TLS_DHE_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC066
+ cipher_TLS_DHE_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC067
+ cipher_TLS_RSA_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC068
+ cipher_TLS_RSA_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC069
+ cipher_TLS_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06A
+ cipher_TLS_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06B
+ cipher_TLS_DHE_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06C
+ cipher_TLS_DHE_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06D
+ cipher_TLS_RSA_PSK_WITH_ARIA_128_GCM_SHA256 uint16 = 0xC06E
+ cipher_TLS_RSA_PSK_WITH_ARIA_256_GCM_SHA384 uint16 = 0xC06F
+ cipher_TLS_ECDHE_PSK_WITH_ARIA_128_CBC_SHA256 uint16 = 0xC070
+ cipher_TLS_ECDHE_PSK_WITH_ARIA_256_CBC_SHA384 uint16 = 0xC071
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC072
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC073
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC074
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC075
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC076
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC077
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC078
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC079
+ cipher_TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07A
+ cipher_TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07B
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07C
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07D
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC07E
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC07F
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC080
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC081
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC082
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC083
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC084
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC085
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC086
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC087
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC088
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC089
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08A
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08B
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08C
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08D
+ cipher_TLS_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC08E
+ cipher_TLS_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC08F
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC090
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC091
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_GCM_SHA256 uint16 = 0xC092
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_GCM_SHA384 uint16 = 0xC093
+ cipher_TLS_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC094
+ cipher_TLS_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC095
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC096
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC097
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC098
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC099
+ cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256 uint16 = 0xC09A
+ cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384 uint16 = 0xC09B
+ cipher_TLS_RSA_WITH_AES_128_CCM uint16 = 0xC09C
+ cipher_TLS_RSA_WITH_AES_256_CCM uint16 = 0xC09D
+ cipher_TLS_DHE_RSA_WITH_AES_128_CCM uint16 = 0xC09E
+ cipher_TLS_DHE_RSA_WITH_AES_256_CCM uint16 = 0xC09F
+ cipher_TLS_RSA_WITH_AES_128_CCM_8 uint16 = 0xC0A0
+ cipher_TLS_RSA_WITH_AES_256_CCM_8 uint16 = 0xC0A1
+ cipher_TLS_DHE_RSA_WITH_AES_128_CCM_8 uint16 = 0xC0A2
+ cipher_TLS_DHE_RSA_WITH_AES_256_CCM_8 uint16 = 0xC0A3
+ cipher_TLS_PSK_WITH_AES_128_CCM uint16 = 0xC0A4
+ cipher_TLS_PSK_WITH_AES_256_CCM uint16 = 0xC0A5
+ cipher_TLS_DHE_PSK_WITH_AES_128_CCM uint16 = 0xC0A6
+ cipher_TLS_DHE_PSK_WITH_AES_256_CCM uint16 = 0xC0A7
+ cipher_TLS_PSK_WITH_AES_128_CCM_8 uint16 = 0xC0A8
+ cipher_TLS_PSK_WITH_AES_256_CCM_8 uint16 = 0xC0A9
+ cipher_TLS_PSK_DHE_WITH_AES_128_CCM_8 uint16 = 0xC0AA
+ cipher_TLS_PSK_DHE_WITH_AES_256_CCM_8 uint16 = 0xC0AB
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CCM uint16 = 0xC0AC
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CCM uint16 = 0xC0AD
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 uint16 = 0xC0AE
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 uint16 = 0xC0AF
+ // Unassigned uint16 = 0xC0B0-FF
+ // Unassigned uint16 = 0xC1-CB,*
+ // Unassigned uint16 = 0xCC00-A7
+ cipher_TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCA8
+ cipher_TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCA9
+ cipher_TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAA
+ cipher_TLS_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAB
+ cipher_TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAC
+ cipher_TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAD
+ cipher_TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256 uint16 = 0xCCAE
+)
+
+// isBadCipher reports whether the cipher is blacklisted by the HTTP/2 spec.
+// References:
+// https://tools.ietf.org/html/rfc7540#appendix-A
+// Reject cipher suites from Appendix A.
+// "This list includes those cipher suites that do not
+// offer an ephemeral key exchange and those that are
+// based on the TLS null, stream or block cipher type"
+func isBadCipher(cipher uint16) bool {
+ switch cipher {
+ case cipher_TLS_NULL_WITH_NULL_NULL,
+ cipher_TLS_RSA_WITH_NULL_MD5,
+ cipher_TLS_RSA_WITH_NULL_SHA,
+ cipher_TLS_RSA_EXPORT_WITH_RC4_40_MD5,
+ cipher_TLS_RSA_WITH_RC4_128_MD5,
+ cipher_TLS_RSA_WITH_RC4_128_SHA,
+ cipher_TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5,
+ cipher_TLS_RSA_WITH_IDEA_CBC_SHA,
+ cipher_TLS_RSA_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_RSA_WITH_DES_CBC_SHA,
+ cipher_TLS_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DH_DSS_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_DES_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DH_RSA_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_DES_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_DES_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_DES_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DH_anon_EXPORT_WITH_RC4_40_MD5,
+ cipher_TLS_DH_anon_WITH_RC4_128_MD5,
+ cipher_TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_DES_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_KRB5_WITH_DES_CBC_SHA,
+ cipher_TLS_KRB5_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_KRB5_WITH_RC4_128_SHA,
+ cipher_TLS_KRB5_WITH_IDEA_CBC_SHA,
+ cipher_TLS_KRB5_WITH_DES_CBC_MD5,
+ cipher_TLS_KRB5_WITH_3DES_EDE_CBC_MD5,
+ cipher_TLS_KRB5_WITH_RC4_128_MD5,
+ cipher_TLS_KRB5_WITH_IDEA_CBC_MD5,
+ cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA,
+ cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_SHA,
+ cipher_TLS_KRB5_EXPORT_WITH_RC4_40_SHA,
+ cipher_TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5,
+ cipher_TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5,
+ cipher_TLS_KRB5_EXPORT_WITH_RC4_40_MD5,
+ cipher_TLS_PSK_WITH_NULL_SHA,
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA,
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA,
+ cipher_TLS_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA,
+ cipher_TLS_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA,
+ cipher_TLS_RSA_WITH_NULL_SHA256,
+ cipher_TLS_RSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_RSA_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_DH_DSS_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_DH_RSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_DH_DSS_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_DH_RSA_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_DH_anon_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_DH_anon_WITH_AES_256_CBC_SHA256,
+ cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA,
+ cipher_TLS_PSK_WITH_RC4_128_SHA,
+ cipher_TLS_PSK_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_PSK_WITH_AES_128_CBC_SHA,
+ cipher_TLS_PSK_WITH_AES_256_CBC_SHA,
+ cipher_TLS_DHE_PSK_WITH_RC4_128_SHA,
+ cipher_TLS_DHE_PSK_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA,
+ cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA,
+ cipher_TLS_RSA_PSK_WITH_RC4_128_SHA,
+ cipher_TLS_RSA_PSK_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA,
+ cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA,
+ cipher_TLS_RSA_WITH_SEED_CBC_SHA,
+ cipher_TLS_DH_DSS_WITH_SEED_CBC_SHA,
+ cipher_TLS_DH_RSA_WITH_SEED_CBC_SHA,
+ cipher_TLS_DHE_DSS_WITH_SEED_CBC_SHA,
+ cipher_TLS_DHE_RSA_WITH_SEED_CBC_SHA,
+ cipher_TLS_DH_anon_WITH_SEED_CBC_SHA,
+ cipher_TLS_RSA_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_RSA_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_DH_RSA_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_DH_RSA_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_DH_DSS_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_DH_DSS_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_DH_anon_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_DH_anon_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_PSK_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_PSK_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_RSA_PSK_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_RSA_PSK_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_PSK_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_PSK_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_PSK_WITH_NULL_SHA256,
+ cipher_TLS_PSK_WITH_NULL_SHA384,
+ cipher_TLS_DHE_PSK_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_DHE_PSK_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA256,
+ cipher_TLS_DHE_PSK_WITH_NULL_SHA384,
+ cipher_TLS_RSA_PSK_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_RSA_PSK_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA256,
+ cipher_TLS_RSA_PSK_WITH_NULL_SHA384,
+ cipher_TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256,
+ cipher_TLS_EMPTY_RENEGOTIATION_INFO_SCSV,
+ cipher_TLS_ECDH_ECDSA_WITH_NULL_SHA,
+ cipher_TLS_ECDH_ECDSA_WITH_RC4_128_SHA,
+ cipher_TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_NULL_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDH_RSA_WITH_NULL_SHA,
+ cipher_TLS_ECDH_RSA_WITH_RC4_128_SHA,
+ cipher_TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDHE_RSA_WITH_NULL_SHA,
+ cipher_TLS_ECDHE_RSA_WITH_RC4_128_SHA,
+ cipher_TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDH_anon_WITH_NULL_SHA,
+ cipher_TLS_ECDH_anon_WITH_RC4_128_SHA,
+ cipher_TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDH_anon_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDH_anon_WITH_AES_256_CBC_SHA,
+ cipher_TLS_SRP_SHA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_SRP_SHA_RSA_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_SRP_SHA_DSS_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_SRP_SHA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_SRP_SHA_RSA_WITH_AES_128_CBC_SHA,
+ cipher_TLS_SRP_SHA_DSS_WITH_AES_128_CBC_SHA,
+ cipher_TLS_SRP_SHA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_SRP_SHA_RSA_WITH_AES_256_CBC_SHA,
+ cipher_TLS_SRP_SHA_DSS_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384,
+ cipher_TLS_ECDHE_PSK_WITH_RC4_128_SHA,
+ cipher_TLS_ECDHE_PSK_WITH_3DES_EDE_CBC_SHA,
+ cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA,
+ cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA,
+ cipher_TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256,
+ cipher_TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384,
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA,
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA256,
+ cipher_TLS_ECDHE_PSK_WITH_NULL_SHA384,
+ cipher_TLS_RSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_RSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DH_DSS_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DH_DSS_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DH_RSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DH_RSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DHE_DSS_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DHE_DSS_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DHE_RSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DHE_RSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DH_anon_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DH_anon_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_ECDSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_RSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_RSA_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_RSA_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_DH_RSA_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_DH_RSA_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_DH_DSS_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_DH_DSS_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_DH_anon_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_DH_anon_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_PSK_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_PSK_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_DHE_PSK_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_DHE_PSK_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_RSA_PSK_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_RSA_PSK_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_PSK_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_PSK_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_RSA_PSK_WITH_ARIA_128_GCM_SHA256,
+ cipher_TLS_RSA_PSK_WITH_ARIA_256_GCM_SHA384,
+ cipher_TLS_ECDHE_PSK_WITH_ARIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_PSK_WITH_ARIA_256_CBC_SHA384,
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_RSA_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_RSA_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_DH_RSA_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_DH_DSS_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_DH_anon_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_ECDH_ECDSA_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_ECDH_RSA_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_PSK_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_PSK_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_GCM_SHA256,
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_GCM_SHA384,
+ cipher_TLS_PSK_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_PSK_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_DHE_PSK_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_RSA_PSK_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_128_CBC_SHA256,
+ cipher_TLS_ECDHE_PSK_WITH_CAMELLIA_256_CBC_SHA384,
+ cipher_TLS_RSA_WITH_AES_128_CCM,
+ cipher_TLS_RSA_WITH_AES_256_CCM,
+ cipher_TLS_RSA_WITH_AES_128_CCM_8,
+ cipher_TLS_RSA_WITH_AES_256_CCM_8,
+ cipher_TLS_PSK_WITH_AES_128_CCM,
+ cipher_TLS_PSK_WITH_AES_256_CCM,
+ cipher_TLS_PSK_WITH_AES_128_CCM_8,
+ cipher_TLS_PSK_WITH_AES_256_CCM_8:
+ return true
+ default:
+ return false
+ }
+}
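isBadCipher and the cipher_* constants above are unexported, so the table cannot be consumed from outside the package. The following standalone sketch (not part of the vendored patch; rejectedByHTTP2 and the suite selection are illustrative assumptions) applies the same RFC 7540 Appendix A test to a negotiated crypto/tls suite, using a few of the ids that crypto/tls exports:

package main

import (
	"crypto/tls"
	"fmt"
)

// rejectedByHTTP2 is a hypothetical stand-in for the unexported isBadCipher,
// restricted to suites that crypto/tls exports; the vendored table above
// covers every IANA-registered id.
func rejectedByHTTP2(cipher uint16) bool {
	switch cipher {
	case tls.TLS_RSA_WITH_RC4_128_SHA, // stream cipher, static RSA key exchange
		tls.TLS_RSA_WITH_AES_128_CBC_SHA, // CBC block cipher, static RSA
		tls.TLS_RSA_WITH_AES_128_GCM_SHA256: // AEAD, but still no ephemeral exchange
		return true
	default:
		return false
	}
}

func main() {
	st := tls.ConnectionState{CipherSuite: tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256}
	fmt.Println(rejectedByHTTP2(st.CipherSuite)) // false: ECDHE provides the required ephemeral exchange
}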
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/client_conn_pool.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/client_conn_pool.go
index b1394125..bdf5652b 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/client_conn_pool.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/client_conn_pool.go
@@ -247,7 +247,7 @@ func filterOutClientConn(in []*ClientConn, exclude *ClientConn) []*ClientConn {
}
// noDialClientConnPool is an implementation of http2.ClientConnPool
-// which never dials. We let the HTTP/1.1 client dial and use its TLS
+// which never dials. We let the HTTP/1.1 client dial and use its TLS
// connection instead.
type noDialClientConnPool struct{ *clientConnPool }
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/configure_transport.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/configure_transport.go
index 4f720f53..b65fc6d4 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/configure_transport.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/configure_transport.go
@@ -56,7 +56,7 @@ func configureTransport(t1 *http.Transport) (*Transport, error) {
}
// registerHTTPSProtocol calls Transport.RegisterProtocol but
-// convering panics into errors.
+// converting panics into errors.
func registerHTTPSProtocol(t *http.Transport, rt http.RoundTripper) (err error) {
defer func() {
if e := recover(); e != nil {
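The deferred recover shown in the context above is a stock Go idiom for converting a panic into an ordinary error return. A minimal sketch of the same shape (registerSafely is a hypothetical name; http.Transport.RegisterProtocol does panic when a scheme is registered twice):

package main

import (
	"fmt"
	"net/http"
)

// registerSafely mirrors the shape of registerHTTPSProtocol: the deferred
// recover rewrites any panic from RegisterProtocol as the named error result.
func registerSafely(t *http.Transport, rt http.RoundTripper) (err error) {
	defer func() {
		if e := recover(); e != nil {
			err = fmt.Errorf("%v", e)
		}
	}()
	t.RegisterProtocol("https", rt)
	return nil
}

func main() {
	t := &http.Transport{}
	fmt.Println(registerSafely(t, http.DefaultTransport)) // <nil>
	fmt.Println(registerSafely(t, http.DefaultTransport)) // duplicate: panic surfaced as an error
}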
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/databuffer.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/databuffer.go
new file mode 100644
index 00000000..a3067f8d
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/databuffer.go
@@ -0,0 +1,146 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package http2
+
+import (
+ "errors"
+ "fmt"
+ "sync"
+)
+
+// Buffer chunks are allocated from a pool to reduce pressure on GC.
+// The maximum wasted space per dataBuffer is 2x the largest size class,
+// which happens when the dataBuffer has multiple chunks and there is
+// one unread byte in both the first and last chunks. We use a few size
+// classes to minimize overheads for servers that typically receive very
+// small request bodies.
+//
+// TODO: Benchmark to determine if the pools are necessary. The GC may have
+// improved enough that we can instead allocate chunks like this:
+// make([]byte, max(16<<10, expectedBytesRemaining))
+var (
+ dataChunkSizeClasses = []int{
+ 1 << 10,
+ 2 << 10,
+ 4 << 10,
+ 8 << 10,
+ 16 << 10,
+ }
+ dataChunkPools = [...]sync.Pool{
+ {New: func() interface{} { return make([]byte, 1<<10) }},
+ {New: func() interface{} { return make([]byte, 2<<10) }},
+ {New: func() interface{} { return make([]byte, 4<<10) }},
+ {New: func() interface{} { return make([]byte, 8<<10) }},
+ {New: func() interface{} { return make([]byte, 16<<10) }},
+ }
+)
+
+func getDataBufferChunk(size int64) []byte {
+ i := 0
+ for ; i < len(dataChunkSizeClasses)-1; i++ {
+ if size <= int64(dataChunkSizeClasses[i]) {
+ break
+ }
+ }
+ return dataChunkPools[i].Get().([]byte)
+}
+
+func putDataBufferChunk(p []byte) {
+ for i, n := range dataChunkSizeClasses {
+ if len(p) == n {
+ dataChunkPools[i].Put(p)
+ return
+ }
+ }
+ panic(fmt.Sprintf("unexpected buffer len=%v", len(p)))
+}
+
+// dataBuffer is an io.ReadWriter backed by a list of data chunks.
+// Each dataBuffer is used to read DATA frames on a single stream.
+// The buffer is divided into chunks so the server can limit the
+// total memory used by a single connection without limiting the
+// request body size on any single stream.
+type dataBuffer struct {
+ chunks [][]byte
+ r int // next byte to read is chunks[0][r]
+ w int // next byte to write is chunks[len(chunks)-1][w]
+ size int // total buffered bytes
+ expected int64 // we expect at least this many bytes in future Write calls (ignored if <= 0)
+}
+
+var errReadEmpty = errors.New("read from empty dataBuffer")
+
+// Read copies bytes from the buffer into p.
+// It is an error to read when no data is available.
+func (b *dataBuffer) Read(p []byte) (int, error) {
+ if b.size == 0 {
+ return 0, errReadEmpty
+ }
+ var ntotal int
+ for len(p) > 0 && b.size > 0 {
+ readFrom := b.bytesFromFirstChunk()
+ n := copy(p, readFrom)
+ p = p[n:]
+ ntotal += n
+ b.r += n
+ b.size -= n
+ // If the first chunk has been consumed, advance to the next chunk.
+ if b.r == len(b.chunks[0]) {
+ putDataBufferChunk(b.chunks[0])
+ end := len(b.chunks) - 1
+ copy(b.chunks[:end], b.chunks[1:])
+ b.chunks[end] = nil
+ b.chunks = b.chunks[:end]
+ b.r = 0
+ }
+ }
+ return ntotal, nil
+}
+
+func (b *dataBuffer) bytesFromFirstChunk() []byte {
+ if len(b.chunks) == 1 {
+ return b.chunks[0][b.r:b.w]
+ }
+ return b.chunks[0][b.r:]
+}
+
+// Len returns the number of bytes of the unread portion of the buffer.
+func (b *dataBuffer) Len() int {
+ return b.size
+}
+
+// Write appends p to the buffer.
+func (b *dataBuffer) Write(p []byte) (int, error) {
+ ntotal := len(p)
+ for len(p) > 0 {
+ // If the last chunk is empty, allocate a new chunk. Try to allocate
+ // enough to fully copy p plus any additional bytes we expect to
+ // receive. However, this may allocate less than len(p).
+ want := int64(len(p))
+ if b.expected > want {
+ want = b.expected
+ }
+ chunk := b.lastChunkOrAlloc(want)
+ n := copy(chunk[b.w:], p)
+ p = p[n:]
+ b.w += n
+ b.size += n
+ b.expected -= int64(n)
+ }
+ return ntotal, nil
+}
+
+func (b *dataBuffer) lastChunkOrAlloc(want int64) []byte {
+ if len(b.chunks) != 0 {
+ last := b.chunks[len(b.chunks)-1]
+ if b.w < len(last) {
+ return last
+ }
+ }
+ chunk := getDataBufferChunk(want)
+ b.chunks = append(b.chunks, chunk)
+ b.w = 0
+ return chunk
+}
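The chunk allocator above rounds each request up to the smallest of five size classes and serves anything larger from the 16 KiB class, letting Write loop until the payload is copied. A standalone sketch of just that selection rule (pickClass is a hypothetical name mirroring getDataBufferChunk's loop):

package main

import "fmt"

// pickClass chooses the smallest size class that fits, falling back to the
// largest class when the request exceeds 16 KiB.
func pickClass(size int64) int {
	classes := []int{1 << 10, 2 << 10, 4 << 10, 8 << 10, 16 << 10}
	i := 0
	for ; i < len(classes)-1; i++ {
		if size <= int64(classes[i]) {
			break
		}
	}
	return classes[i]
}

func main() {
	for _, n := range []int64{100, 1024, 1025, 9000, 1 << 20} {
		fmt.Printf("want %7d -> chunk %5d\n", n, pickClass(n))
	}
	// 100 and 1024 get the 1 KiB class; 1025 rounds up to 2 KiB;
	// 9000 and 1 MiB both get the 16 KiB class.
}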
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/errors.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/errors.go
index 20fd7626..71f2c463 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/errors.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/errors.go
@@ -87,13 +87,16 @@ type goAwayFlowError struct{}
func (goAwayFlowError) Error() string { return "connection exceeded flow control window size" }
-// connErrorReason wraps a ConnectionError with an informative error about why it occurs.
-
+// connError represents an HTTP/2 ConnectionError error code, along
+// with a string (for debugging) explaining why.
+//
// Errors of this type are only returned by the frame parser functions
-// and converted into ConnectionError(ErrCodeProtocol).
+// and converted into ConnectionError(Code), after stashing away
+// the Reason into the Framer's errDetail field, accessible via
+// the (*Framer).ErrorDetail method.
type connError struct {
- Code ErrCode
- Reason string
+ Code ErrCode // the ConnectionError error code
+ Reason string // additional reason
}
func (e connError) Error() string {
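As the frame.go hunk below shows, Framer.ReadFrame type-asserts on connError, stashes Reason as the framer's ErrorDetail, and surfaces only ConnectionError(Code). A minimal sketch of that code-plus-reason error shape, with hypothetical names standing in for the unexported types:

package main

import "fmt"

type errCode uint32

// codedError mirrors the shape of the unexported connError: a protocol
// error code for the peer plus a human-readable reason for debugging.
type codedError struct {
	Code   errCode
	Reason string
}

func (e codedError) Error() string {
	return fmt.Sprintf("connection error: code %d (%s)", e.Code, e.Reason)
}

func parse() error {
	return codedError{Code: 1, Reason: "DATA frame with stream ID 0"}
}

func main() {
	if ce, ok := parse().(codedError); ok {
		// A framer would stash ce.Reason as its ErrorDetail and surface only ce.Code.
		fmt.Println("code:", ce.Code, "detail:", ce.Reason)
	}
}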
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/fixed_buffer.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/fixed_buffer.go
deleted file mode 100644
index 47da0f0b..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/fixed_buffer.go
+++ /dev/null
@@ -1,60 +0,0 @@
-// Copyright 2014 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package http2
-
-import (
- "errors"
-)
-
-// fixedBuffer is an io.ReadWriter backed by a fixed size buffer.
-// It never allocates, but moves old data as new data is written.
-type fixedBuffer struct {
- buf []byte
- r, w int
-}
-
-var (
- errReadEmpty = errors.New("read from empty fixedBuffer")
- errWriteFull = errors.New("write on full fixedBuffer")
-)
-
-// Read copies bytes from the buffer into p.
-// It is an error to read when no data is available.
-func (b *fixedBuffer) Read(p []byte) (n int, err error) {
- if b.r == b.w {
- return 0, errReadEmpty
- }
- n = copy(p, b.buf[b.r:b.w])
- b.r += n
- if b.r == b.w {
- b.r = 0
- b.w = 0
- }
- return n, nil
-}
-
-// Len returns the number of bytes of the unread portion of the buffer.
-func (b *fixedBuffer) Len() int {
- return b.w - b.r
-}
-
-// Write copies bytes from p into the buffer.
-// It is an error to write more data than the buffer can hold.
-func (b *fixedBuffer) Write(p []byte) (n int, err error) {
- // Slide existing data to beginning.
- if b.r > 0 && len(p) > len(b.buf)-b.w {
- copy(b.buf, b.buf[b.r:b.w])
- b.w -= b.r
- b.r = 0
- }
-
- // Write new data.
- n = copy(b.buf[b.w:], p)
- b.w += n
- if n < len(p) {
- err = errWriteFull
- }
- return n, err
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/frame.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/frame.go
index 358833fe..3b148907 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/frame.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/frame.go
@@ -122,7 +122,7 @@ var flagName = map[FrameType]map[Flags]string{
// a frameParser parses a frame given its FrameHeader and payload
// bytes. The length of payload will always equal fh.Length (which
// might be 0).
-type frameParser func(fh FrameHeader, payload []byte) (Frame, error)
+type frameParser func(fc *frameCache, fh FrameHeader, payload []byte) (Frame, error)
var frameParsers = map[FrameType]frameParser{
FrameData: parseDataFrame,
@@ -312,7 +312,7 @@ type Framer struct {
MaxHeaderListSize uint32
// TODO: track which type of frame & with which flags was sent
- // last. Then return an error (unless AllowIllegalWrites) if
+ // last. Then return an error (unless AllowIllegalWrites) if
// we're in the middle of a header block and a
// non-Continuation or Continuation on a different stream is
// attempted to be written.
@@ -323,6 +323,8 @@ type Framer struct {
debugFramerBuf *bytes.Buffer
debugReadLoggerf func(string, ...interface{})
debugWriteLoggerf func(string, ...interface{})
+
+ frameCache *frameCache // nil if frames aren't reused (default)
}
func (fr *Framer) maxHeaderListSize() uint32 {
@@ -398,6 +400,27 @@ const (
maxFrameSize = 1<<24 - 1
)
+// SetReuseFrames allows the Framer to reuse Frames.
+// If called on a Framer, Frames returned by calls to ReadFrame are only
+// valid until the next call to ReadFrame.
+func (fr *Framer) SetReuseFrames() {
+ if fr.frameCache != nil {
+ return
+ }
+ fr.frameCache = &frameCache{}
+}
+
+type frameCache struct {
+ dataFrame DataFrame
+}
+
+func (fc *frameCache) getDataFrame() *DataFrame {
+ if fc == nil {
+ return &DataFrame{}
+ }
+ return &fc.dataFrame
+}
+
// NewFramer returns a Framer that writes frames to w and reads them from r.
func NewFramer(w io.Writer, r io.Reader) *Framer {
fr := &Framer{
@@ -477,7 +500,7 @@ func (fr *Framer) ReadFrame() (Frame, error) {
if _, err := io.ReadFull(fr.r, payload); err != nil {
return nil, err
}
- f, err := typeFrameParser(fh.Type)(fh, payload)
+ f, err := typeFrameParser(fh.Type)(fr.frameCache, fh, payload)
if err != nil {
if ce, ok := err.(connError); ok {
return nil, fr.connError(ce.Code, ce.Reason)
@@ -565,7 +588,7 @@ func (f *DataFrame) Data() []byte {
return f.data
}
-func parseDataFrame(fh FrameHeader, payload []byte) (Frame, error) {
+func parseDataFrame(fc *frameCache, fh FrameHeader, payload []byte) (Frame, error) {
if fh.StreamID == 0 {
// DATA frames MUST be associated with a stream. If a
// DATA frame is received whose stream identifier
@@ -574,9 +597,9 @@ func parseDataFrame(fh FrameHeader, payload []byte) (Frame, error) {
// PROTOCOL_ERROR.
return nil, connError{ErrCodeProtocol, "DATA frame with stream ID 0"}
}
- f := &DataFrame{
- FrameHeader: fh,
- }
+ f := fc.getDataFrame()
+ f.FrameHeader = fh
+
var padSize byte
if fh.Flags.Has(FlagDataPadded) {
var err error
@@ -600,6 +623,7 @@ var (
errStreamID = errors.New("invalid stream ID")
errDepStreamID = errors.New("invalid dependent stream ID")
errPadLength = errors.New("pad length too large")
+ errPadBytes = errors.New("padding bytes must all be zeros unless AllowIllegalWrites is enabled")
)
func validStreamIDOrZero(streamID uint32) bool {
@@ -623,6 +647,7 @@ func (f *Framer) WriteData(streamID uint32, endStream bool, data []byte) error {
//
// If pad is nil, the padding bit is not sent.
// The length of pad must not exceed 255 bytes.
+// The bytes of pad must all be zero, unless f.AllowIllegalWrites is set.
//
// It will perform exactly one Write to the underlying Writer.
// It is the caller's responsibility not to violate the maximum frame size
@@ -631,8 +656,18 @@ func (f *Framer) WriteDataPadded(streamID uint32, endStream bool, data, pad []by
if !validStreamID(streamID) && !f.AllowIllegalWrites {
return errStreamID
}
- if len(pad) > 255 {
- return errPadLength
+ if len(pad) > 0 {
+ if len(pad) > 255 {
+ return errPadLength
+ }
+ if !f.AllowIllegalWrites {
+ for _, b := range pad {
+ if b != 0 {
+ // "Padding octets MUST be set to zero when sending."
+ return errPadBytes
+ }
+ }
+ }
}
var flags Flags
if endStream {
@@ -660,10 +695,10 @@ type SettingsFrame struct {
p []byte
}
-func parseSettingsFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseSettingsFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
if fh.Flags.Has(FlagSettingsAck) && fh.Length > 0 {
// When this (ACK 0x1) bit is set, the payload of the
- // SETTINGS frame MUST be empty. Receipt of a
+ // SETTINGS frame MUST be empty. Receipt of a
// SETTINGS frame with the ACK flag set and a length
// field value other than 0 MUST be treated as a
// connection error (Section 5.4.1) of type
@@ -672,7 +707,7 @@ func parseSettingsFrame(fh FrameHeader, p []byte) (Frame, error) {
}
if fh.StreamID != 0 {
// SETTINGS frames always apply to a connection,
- // never a single stream. The stream identifier for a
+ // never a single stream. The stream identifier for a
// SETTINGS frame MUST be zero (0x0). If an endpoint
// receives a SETTINGS frame whose stream identifier
// field is anything other than 0x0, the endpoint MUST
@@ -762,7 +797,7 @@ type PingFrame struct {
func (f *PingFrame) IsAck() bool { return f.Flags.Has(FlagPingAck) }
-func parsePingFrame(fh FrameHeader, payload []byte) (Frame, error) {
+func parsePingFrame(_ *frameCache, fh FrameHeader, payload []byte) (Frame, error) {
if len(payload) != 8 {
return nil, ConnectionError(ErrCodeFrameSize)
}
@@ -802,7 +837,7 @@ func (f *GoAwayFrame) DebugData() []byte {
return f.debugData
}
-func parseGoAwayFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseGoAwayFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
if fh.StreamID != 0 {
return nil, ConnectionError(ErrCodeProtocol)
}
@@ -842,7 +877,7 @@ func (f *UnknownFrame) Payload() []byte {
return f.p
}
-func parseUnknownFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseUnknownFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
return &UnknownFrame{fh, p}, nil
}
@@ -853,7 +888,7 @@ type WindowUpdateFrame struct {
Increment uint32 // never read with high bit set
}
-func parseWindowUpdateFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseWindowUpdateFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
if len(p) != 4 {
return nil, ConnectionError(ErrCodeFrameSize)
}
@@ -918,12 +953,12 @@ func (f *HeadersFrame) HasPriority() bool {
return f.FrameHeader.Flags.Has(FlagHeadersPriority)
}
-func parseHeadersFrame(fh FrameHeader, p []byte) (_ Frame, err error) {
+func parseHeadersFrame(_ *frameCache, fh FrameHeader, p []byte) (_ Frame, err error) {
hf := &HeadersFrame{
FrameHeader: fh,
}
if fh.StreamID == 0 {
- // HEADERS frames MUST be associated with a stream. If a HEADERS frame
+ // HEADERS frames MUST be associated with a stream. If a HEADERS frame
// is received whose stream identifier field is 0x0, the recipient MUST
// respond with a connection error (Section 5.4.1) of type
// PROTOCOL_ERROR.
@@ -1045,7 +1080,7 @@ type PriorityParam struct {
Exclusive bool
// Weight is the stream's zero-indexed weight. It should be
- // set together with StreamDep, or neither should be set. Per
+ // set together with StreamDep, or neither should be set. Per
// the spec, "Add one to the value to obtain a weight between
// 1 and 256."
Weight uint8
@@ -1055,7 +1090,7 @@ func (p PriorityParam) IsZero() bool {
return p == PriorityParam{}
}
-func parsePriorityFrame(fh FrameHeader, payload []byte) (Frame, error) {
+func parsePriorityFrame(_ *frameCache, fh FrameHeader, payload []byte) (Frame, error) {
if fh.StreamID == 0 {
return nil, connError{ErrCodeProtocol, "PRIORITY frame with stream ID 0"}
}
@@ -1102,7 +1137,7 @@ type RSTStreamFrame struct {
ErrCode ErrCode
}
-func parseRSTStreamFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseRSTStreamFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
if len(p) != 4 {
return nil, ConnectionError(ErrCodeFrameSize)
}
@@ -1132,7 +1167,7 @@ type ContinuationFrame struct {
headerFragBuf []byte
}
-func parseContinuationFrame(fh FrameHeader, p []byte) (Frame, error) {
+func parseContinuationFrame(_ *frameCache, fh FrameHeader, p []byte) (Frame, error) {
if fh.StreamID == 0 {
return nil, connError{ErrCodeProtocol, "CONTINUATION frame with stream ID 0"}
}
@@ -1182,7 +1217,7 @@ func (f *PushPromiseFrame) HeadersEnded() bool {
return f.FrameHeader.Flags.Has(FlagPushPromiseEndHeaders)
}
-func parsePushPromise(fh FrameHeader, p []byte) (_ Frame, err error) {
+func parsePushPromise(_ *frameCache, fh FrameHeader, p []byte) (_ Frame, err error) {
pp := &PushPromiseFrame{
FrameHeader: fh,
}
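SetReuseFrames is part of the public x/net/http2 surface, so its contract can be exercised directly: once reuse is enabled, a DataFrame returned by ReadFrame aliases memory that the next ReadFrame overwrites, and callers must copy the payload out. A runnable sketch over an in-memory pipe (the preface/SETTINGS exchange a real peer would require is skipped for brevity):

package main

import (
	"fmt"
	"net"

	"golang.org/x/net/http2"
)

// readLoop drains frames from conn, which is assumed to already carry a
// completed connection setup.
func readLoop(conn net.Conn) {
	fr := http2.NewFramer(conn, conn)
	fr.SetReuseFrames() // frames are now recycled between ReadFrame calls
	for {
		f, err := fr.ReadFrame()
		if err != nil {
			return
		}
		if df, ok := f.(*http2.DataFrame); ok {
			body := append([]byte(nil), df.Data()...) // must copy: the buffer is reused
			fmt.Printf("DATA on stream %d: %q\n", df.StreamID, body)
		}
	}
}

func main() {
	c1, c2 := net.Pipe()
	go func() {
		fw := http2.NewFramer(c2, c2)
		fw.WriteData(1, true, []byte("hello")) // stream 1, END_STREAM
		c2.Close()
	}()
	readLoop(c1)
}

Note also the related WriteDataPadded change above: nonzero padding bytes are now rejected unless AllowIllegalWrites is set, matching the spec's "Padding octets MUST be set to zero when sending."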
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go16.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go16.go
index 2b72855f..00b2e9e3 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go16.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go16.go
@@ -7,7 +7,6 @@
package http2
import (
- "crypto/tls"
"net/http"
"time"
)
@@ -15,29 +14,3 @@ import (
func transportExpectContinueTimeout(t1 *http.Transport) time.Duration {
return t1.ExpectContinueTimeout
}
-
-// isBadCipher reports whether the cipher is blacklisted by the HTTP/2 spec.
-func isBadCipher(cipher uint16) bool {
- switch cipher {
- case tls.TLS_RSA_WITH_RC4_128_SHA,
- tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_RSA_WITH_AES_256_CBC_SHA,
- tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
- tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
- tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:
- // Reject cipher suites from Appendix A.
- // "This list includes those cipher suites that do not
- // offer an ephemeral key exchange and those that are
- // based on the TLS null, stream or block cipher type"
- return true
- default:
- return false
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go18.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go18.go
index 633202c3..4f30d228 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go18.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go18.go
@@ -12,7 +12,11 @@ import (
"net/http"
)
-func cloneTLSConfig(c *tls.Config) *tls.Config { return c.Clone() }
+func cloneTLSConfig(c *tls.Config) *tls.Config {
+ c2 := c.Clone()
+ c2.GetClientCertificate = c.GetClientCertificate // golang.org/issue/19264
+ return c2
+}
var _ http.Pusher = (*responseWriter)(nil)
@@ -48,3 +52,5 @@ func reqGetBody(req *http.Request) func() (io.ReadCloser, error) {
func reqBodyIsNoBody(body io.ReadCloser) bool {
return body == http.NoBody
}
+
+func go18httpNoBody() io.ReadCloser { return http.NoBody } // for tests only
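The extra assignment in cloneTLSConfig works around golang.org/issue/19264: in Go 1.8, tls.Config.Clone dropped the GetClientCertificate callback, and re-copying the field is harmless on releases where Clone already preserves it. A sketch of the same defensive copy:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	base := &tls.Config{
		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
			return &tls.Certificate{}, nil
		},
	}
	c2 := base.Clone()
	c2.GetClientCertificate = base.GetClientCertificate // re-copy, per golang.org/issue/19264
	fmt.Println(c2.GetClientCertificate != nil)         // true even on Go 1.8
}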
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go19.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go19.go
new file mode 100644
index 00000000..38124ba5
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/go19.go
@@ -0,0 +1,16 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build go1.9
+
+package http2
+
+import (
+ "net/http"
+)
+
+func configureServer19(s *http.Server, conf *Server) error {
+ s.RegisterOnShutdown(conf.state.startGracefulShutdown)
+ return nil
+}
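configureServer19 hooks HTTP/2's graceful drain into http.Server.RegisterOnShutdown, added in Go 1.9; Shutdown runs each registered callback (in its own goroutine) and then waits for connections to go idle. A usage sketch with an assumed no-op drain:

package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:0"}
	srv.RegisterOnShutdown(func() {
		log.Println("draining HTTP/2 and hijacked connections") // assumed no-op drain
	})
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx)              // closes listeners, runs the callback, waits for idle conns
	time.Sleep(50 * time.Millisecond)  // give the callback goroutine a moment to log
}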
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/encode.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/encode.go
index f9bb0339..54726c2a 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/encode.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/encode.go
@@ -39,13 +39,14 @@ func NewEncoder(w io.Writer) *Encoder {
tableSizeUpdate: false,
w: w,
}
+ e.dynTab.table.init()
e.dynTab.setMaxSize(initialHeaderTableSize)
return e
}
// WriteField encodes f into a single Write to e's underlying Writer.
// This function may also produce bytes for "Header Table Size Update"
-// if necessary. If produced, it is done before encoding f.
+// if necessary. If produced, it is done before encoding f.
func (e *Encoder) WriteField(f HeaderField) error {
e.buf = e.buf[:0]
@@ -88,29 +89,17 @@ func (e *Encoder) WriteField(f HeaderField) error {
// only name matches, i points to that index and nameValueMatch
// becomes false.
func (e *Encoder) searchTable(f HeaderField) (i uint64, nameValueMatch bool) {
- for idx, hf := range staticTable {
- if !constantTimeStringCompare(hf.Name, f.Name) {
- continue
- }
- if i == 0 {
- i = uint64(idx + 1)
- }
- if f.Sensitive {
- continue
- }
- if !constantTimeStringCompare(hf.Value, f.Value) {
- continue
- }
- i = uint64(idx + 1)
- nameValueMatch = true
- return
+ i, nameValueMatch = staticTable.search(f)
+ if nameValueMatch {
+ return i, true
}
- j, nameValueMatch := e.dynTab.search(f)
+ j, nameValueMatch := e.dynTab.table.search(f)
if nameValueMatch || (i == 0 && j != 0) {
- i = j + uint64(len(staticTable))
+ return j + uint64(staticTable.len()), nameValueMatch
}
- return
+
+ return i, false
}
// SetMaxDynamicTableSize changes the dynamic header table size to v.
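With this change, searchTable consults the newly indexed static table first and only falls back to the dynamic table, offsetting dynamic indices by staticTable.len(); the exported encoder API is unchanged. A sketch of driving it through hpack.NewEncoder (x-trace-id is an arbitrary example header):

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	// ":method: GET" fully matches the static table and encodes as a single
	// indexed byte; the custom header is emitted literally and inserted
	// into the dynamic table for later references.
	enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
	enc.WriteField(hpack.HeaderField{Name: "x-trace-id", Value: "abc123"})
	fmt.Printf("%d bytes: %x\n", buf.Len(), buf.Bytes())
}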
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/hpack.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/hpack.go
index 135b9f62..176644ac 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/hpack.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/hpack.go
@@ -61,7 +61,7 @@ func (hf HeaderField) String() string {
func (hf HeaderField) Size() uint32 {
// http://http2.github.io/http2-spec/compression.html#rfc.section.4.1
// "The size of the dynamic table is the sum of the size of
- // its entries. The size of an entry is the sum of its name's
+ // its entries. The size of an entry is the sum of its name's
// length in octets (as defined in Section 5.2), its value's
// length in octets (see Section 5.2), plus 32. The size of
// an entry is calculated using the length of the name and
@@ -102,6 +102,7 @@ func NewDecoder(maxDynamicTableSize uint32, emitFunc func(f HeaderField)) *Decod
emit: emitFunc,
emitEnabled: true,
}
+ d.dynTab.table.init()
d.dynTab.allowedMaxSize = maxDynamicTableSize
d.dynTab.setMaxSize(maxDynamicTableSize)
return d
@@ -154,12 +155,9 @@ func (d *Decoder) SetAllowedMaxDynamicTableSize(v uint32) {
}
type dynamicTable struct {
- // ents is the FIFO described at
// http://http2.github.io/http2-spec/compression.html#rfc.section.2.3.2
- // The newest (low index) is append at the end, and items are
- // evicted from the front.
- ents []HeaderField
- size uint32
+ table headerFieldTable
+ size uint32 // in bytes
maxSize uint32 // current maxSize
allowedMaxSize uint32 // maxSize may go up to this, inclusive
}
@@ -169,95 +167,45 @@ func (dt *dynamicTable) setMaxSize(v uint32) {
dt.evict()
}
-// TODO: change dynamicTable to be a struct with a slice and a size int field,
-// per http://http2.github.io/http2-spec/compression.html#rfc.section.4.1:
-//
-//
-// Then make add increment the size. maybe the max size should move from Decoder to
-// dynamicTable and add should return an ok bool if there was enough space.
-//
-// Later we'll need a remove operation on dynamicTable.
-
func (dt *dynamicTable) add(f HeaderField) {
- dt.ents = append(dt.ents, f)
+ dt.table.addEntry(f)
dt.size += f.Size()
dt.evict()
}
-// If we're too big, evict old stuff (front of the slice)
+// If we're too big, evict old stuff.
func (dt *dynamicTable) evict() {
- base := dt.ents // keep base pointer of slice
- for dt.size > dt.maxSize {
- dt.size -= dt.ents[0].Size()
- dt.ents = dt.ents[1:]
- }
-
- // Shift slice contents down if we evicted things.
- if len(dt.ents) != len(base) {
- copy(base, dt.ents)
- dt.ents = base[:len(dt.ents)]
+ var n int
+ for dt.size > dt.maxSize && n < dt.table.len() {
+ dt.size -= dt.table.ents[n].Size()
+ n++
}
-}
-
-// constantTimeStringCompare compares string a and b in a constant
-// time manner.
-func constantTimeStringCompare(a, b string) bool {
- if len(a) != len(b) {
- return false
- }
-
- c := byte(0)
-
- for i := 0; i < len(a); i++ {
- c |= a[i] ^ b[i]
- }
-
- return c == 0
-}
-
-// Search searches f in the table. The return value i is 0 if there is
-// no name match. If there is name match or name/value match, i is the
-// index of that entry (1-based). If both name and value match,
-// nameValueMatch becomes true.
-func (dt *dynamicTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
- l := len(dt.ents)
- for j := l - 1; j >= 0; j-- {
- ent := dt.ents[j]
- if !constantTimeStringCompare(ent.Name, f.Name) {
- continue
- }
- if i == 0 {
- i = uint64(l - j)
- }
- if f.Sensitive {
- continue
- }
- if !constantTimeStringCompare(ent.Value, f.Value) {
- continue
- }
- i = uint64(l - j)
- nameValueMatch = true
- return
- }
- return
+ dt.table.evictOldest(n)
}
func (d *Decoder) maxTableIndex() int {
- return len(d.dynTab.ents) + len(staticTable)
+ // This should never overflow. RFC 7540 Section 6.5.2 limits the size of
+ // the dynamic table to 2^32 bytes, where each entry will occupy more than
+ // one byte. Further, the staticTable has a fixed, small length.
+ return d.dynTab.table.len() + staticTable.len()
}
func (d *Decoder) at(i uint64) (hf HeaderField, ok bool) {
- if i < 1 {
+ // See Section 2.3.3.
+ if i == 0 {
return
}
+ if i <= uint64(staticTable.len()) {
+ return staticTable.ents[i-1], true
+ }
if i > uint64(d.maxTableIndex()) {
return
}
- if i <= uint64(len(staticTable)) {
- return staticTable[i-1], true
- }
- dents := d.dynTab.ents
- return dents[len(dents)-(int(i)-len(staticTable))], true
+ // In the dynamic table, newer entries have lower indices.
+ // However, dt.ents[0] is the oldest entry. Hence, dt.ents is
+ // the reversed dynamic table.
+ dt := d.dynTab.table
+ return dt.ents[dt.len()-(int(i)-staticTable.len())], true
}
// Decode decodes an entire block.
@@ -307,7 +255,7 @@ func (d *Decoder) Write(p []byte) (n int, err error) {
err = d.parseHeaderFieldRepr()
if err == errNeedMore {
// Extra paranoia, making sure saveBuf won't
- // get too large. All the varint and string
+ // get too large. All the varint and string
// reading code earlier should already catch
// overlong things and return ErrStringLength,
// but keep this as a last resort.
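The hpack rework above is internal; the package-level API is unchanged. A minimal, hedged sketch of driving the patched decoder through its public surface (the header names and the 4096-byte table size are illustrative, not taken from the diff):

package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/net/http2/hpack"
)

func main() {
	// Encode a small header block; the encoder maintains its own
	// dynamic table as it emits indexed representations.
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)
	for _, f := range []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: "user-agent", Value: "demo"},
	} {
		if err := enc.WriteField(f); err != nil {
			log.Fatal(err)
		}
	}

	// Decode it back. NewDecoder now calls d.dynTab.table.init() first,
	// so the byName/byNameValue maps exist before any entry is added.
	dec := hpack.NewDecoder(4096, func(f hpack.HeaderField) {
		// Size() is len(name)+len(value)+32, the unit evict() subtracts.
		fmt.Printf("%s: %s (size %d)\n", f.Name, f.Value, f.Size())
	})
	if _, err := dec.Write(buf.Bytes()); err != nil {
		log.Fatal(err)
	}
	if err := dec.Close(); err != nil {
		log.Fatal(err)
	}
}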
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/tables.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/tables.go
index b9283a02..a66cfbea 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/tables.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/hpack/tables.go
@@ -4,73 +4,200 @@
package hpack
-func pair(name, value string) HeaderField {
- return HeaderField{Name: name, Value: value}
+import (
+ "fmt"
+)
+
+// headerFieldTable implements a list of HeaderFields.
+// This is used to implement the static and dynamic tables.
+type headerFieldTable struct {
+ // For static tables, entries are never evicted.
+ //
+ // For dynamic tables, entries are evicted from ents[0] and added to the end.
+ // Each entry has a unique id that starts at one and increments for each
+ // entry that is added. This unique id is stable across evictions, meaning
+ // it can be used as a pointer to a specific entry. As in hpack, unique ids
+ // are 1-based. The unique id for ents[k] is k + evictCount + 1.
+ //
+ // Zero is not a valid unique id.
+ //
+ // evictCount should not overflow in any remotely practical situation. In
+ // practice, we will have one dynamic table per HTTP/2 connection. If we
+ // assume a very powerful server that handles 1M QPS per connection and each
+ // request adds (then evicts) 100 entries from the table, it would still take
+ // 2M years for evictCount to overflow.
+ ents []HeaderField
+ evictCount uint64
+
+ // byName maps a HeaderField name to the unique id of the newest entry with
+ // the same name. See above for a definition of "unique id".
+ byName map[string]uint64
+
+ // byNameValue maps a HeaderField name/value pair to the unique id of the newest
+ // entry with the same name and value. See above for a definition of "unique id".
+ byNameValue map[pairNameValue]uint64
+}
+
+type pairNameValue struct {
+ name, value string
+}
+
+func (t *headerFieldTable) init() {
+ t.byName = make(map[string]uint64)
+ t.byNameValue = make(map[pairNameValue]uint64)
+}
+
+// len reports the number of entries in the table.
+func (t *headerFieldTable) len() int {
+ return len(t.ents)
+}
+
+// addEntry adds a new entry.
+func (t *headerFieldTable) addEntry(f HeaderField) {
+ id := uint64(t.len()) + t.evictCount + 1
+ t.byName[f.Name] = id
+ t.byNameValue[pairNameValue{f.Name, f.Value}] = id
+ t.ents = append(t.ents, f)
+}
+
+// evictOldest evicts the n oldest entries in the table.
+func (t *headerFieldTable) evictOldest(n int) {
+ if n > t.len() {
+ panic(fmt.Sprintf("evictOldest(%v) on table with %v entries", n, t.len()))
+ }
+ for k := 0; k < n; k++ {
+ f := t.ents[k]
+ id := t.evictCount + uint64(k) + 1
+ if t.byName[f.Name] == id {
+ delete(t.byName, f.Name)
+ }
+ if p := (pairNameValue{f.Name, f.Value}); t.byNameValue[p] == id {
+ delete(t.byNameValue, p)
+ }
+ }
+ copy(t.ents, t.ents[n:])
+ for k := t.len() - n; k < t.len(); k++ {
+ t.ents[k] = HeaderField{} // so strings can be garbage collected
+ }
+ t.ents = t.ents[:t.len()-n]
+ if t.evictCount+uint64(n) < t.evictCount {
+ panic("evictCount overflow")
+ }
+ t.evictCount += uint64(n)
+}
+
+// search finds f in the table. If there is no match, i is 0.
+// If both name and value match, i is the matched index and nameValueMatch
+// becomes true. If only name matches, i points to that index and
+// nameValueMatch becomes false.
+//
+// The returned index is a 1-based HPACK index. For dynamic tables, HPACK says
+// that index 1 should be the newest entry, but t.ents[0] is the oldest entry,
+// meaning t.ents is reversed for dynamic tables. Hence, when t is a dynamic
+// table, the return value i actually refers to the entry t.ents[t.len()-i].
+//
+// All tables are assumed to be dynamic tables except for the global
+// staticTable pointer.
+//
+// See Section 2.3.3.
+func (t *headerFieldTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
+ if !f.Sensitive {
+ if id := t.byNameValue[pairNameValue{f.Name, f.Value}]; id != 0 {
+ return t.idToIndex(id), true
+ }
+ }
+ if id := t.byName[f.Name]; id != 0 {
+ return t.idToIndex(id), false
+ }
+ return 0, false
+}
+
+// idToIndex converts a unique id to an HPACK index.
+// See Section 2.3.3.
+func (t *headerFieldTable) idToIndex(id uint64) uint64 {
+ if id <= t.evictCount {
+ panic(fmt.Sprintf("id (%v) <= evictCount (%v)", id, t.evictCount))
+ }
+ k := id - t.evictCount - 1 // convert id to an index t.ents[k]
+ if t != staticTable {
+ return uint64(t.len()) - k // dynamic table
+ }
+ return k + 1
}
// http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-07#appendix-B
-var staticTable = [...]HeaderField{
- pair(":authority", ""), // index 1 (1-based)
- pair(":method", "GET"),
- pair(":method", "POST"),
- pair(":path", "/"),
- pair(":path", "/index.html"),
- pair(":scheme", "http"),
- pair(":scheme", "https"),
- pair(":status", "200"),
- pair(":status", "204"),
- pair(":status", "206"),
- pair(":status", "304"),
- pair(":status", "400"),
- pair(":status", "404"),
- pair(":status", "500"),
- pair("accept-charset", ""),
- pair("accept-encoding", "gzip, deflate"),
- pair("accept-language", ""),
- pair("accept-ranges", ""),
- pair("accept", ""),
- pair("access-control-allow-origin", ""),
- pair("age", ""),
- pair("allow", ""),
- pair("authorization", ""),
- pair("cache-control", ""),
- pair("content-disposition", ""),
- pair("content-encoding", ""),
- pair("content-language", ""),
- pair("content-length", ""),
- pair("content-location", ""),
- pair("content-range", ""),
- pair("content-type", ""),
- pair("cookie", ""),
- pair("date", ""),
- pair("etag", ""),
- pair("expect", ""),
- pair("expires", ""),
- pair("from", ""),
- pair("host", ""),
- pair("if-match", ""),
- pair("if-modified-since", ""),
- pair("if-none-match", ""),
- pair("if-range", ""),
- pair("if-unmodified-since", ""),
- pair("last-modified", ""),
- pair("link", ""),
- pair("location", ""),
- pair("max-forwards", ""),
- pair("proxy-authenticate", ""),
- pair("proxy-authorization", ""),
- pair("range", ""),
- pair("referer", ""),
- pair("refresh", ""),
- pair("retry-after", ""),
- pair("server", ""),
- pair("set-cookie", ""),
- pair("strict-transport-security", ""),
- pair("transfer-encoding", ""),
- pair("user-agent", ""),
- pair("vary", ""),
- pair("via", ""),
- pair("www-authenticate", ""),
+var staticTable = newStaticTable()
+var staticTableEntries = [...]HeaderField{
+ {Name: ":authority"},
+ {Name: ":method", Value: "GET"},
+ {Name: ":method", Value: "POST"},
+ {Name: ":path", Value: "/"},
+ {Name: ":path", Value: "/index.html"},
+ {Name: ":scheme", Value: "http"},
+ {Name: ":scheme", Value: "https"},
+ {Name: ":status", Value: "200"},
+ {Name: ":status", Value: "204"},
+ {Name: ":status", Value: "206"},
+ {Name: ":status", Value: "304"},
+ {Name: ":status", Value: "400"},
+ {Name: ":status", Value: "404"},
+ {Name: ":status", Value: "500"},
+ {Name: "accept-charset"},
+ {Name: "accept-encoding", Value: "gzip, deflate"},
+ {Name: "accept-language"},
+ {Name: "accept-ranges"},
+ {Name: "accept"},
+ {Name: "access-control-allow-origin"},
+ {Name: "age"},
+ {Name: "allow"},
+ {Name: "authorization"},
+ {Name: "cache-control"},
+ {Name: "content-disposition"},
+ {Name: "content-encoding"},
+ {Name: "content-language"},
+ {Name: "content-length"},
+ {Name: "content-location"},
+ {Name: "content-range"},
+ {Name: "content-type"},
+ {Name: "cookie"},
+ {Name: "date"},
+ {Name: "etag"},
+ {Name: "expect"},
+ {Name: "expires"},
+ {Name: "from"},
+ {Name: "host"},
+ {Name: "if-match"},
+ {Name: "if-modified-since"},
+ {Name: "if-none-match"},
+ {Name: "if-range"},
+ {Name: "if-unmodified-since"},
+ {Name: "last-modified"},
+ {Name: "link"},
+ {Name: "location"},
+ {Name: "max-forwards"},
+ {Name: "proxy-authenticate"},
+ {Name: "proxy-authorization"},
+ {Name: "range"},
+ {Name: "referer"},
+ {Name: "refresh"},
+ {Name: "retry-after"},
+ {Name: "server"},
+ {Name: "set-cookie"},
+ {Name: "strict-transport-security"},
+ {Name: "transfer-encoding"},
+ {Name: "user-agent"},
+ {Name: "vary"},
+ {Name: "via"},
+ {Name: "www-authenticate"},
+}
+
+func newStaticTable() *headerFieldTable {
+ t := &headerFieldTable{}
+ t.init()
+ for _, e := range staticTableEntries[:] {
+ t.addEntry(e)
+ }
+ return t
}
var huffmanCodes = [256]uint32{
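headerFieldTable is unexported, but its unique-id bookkeeping can be checked by hand: ents[k] has id k+evictCount+1, and idToIndex maps that id back to an HPACK index. A toy reproduction of the arithmetic (all values illustrative):

package main

import "fmt"

func main() {
	evictCount := uint64(3) // three entries evicted so far
	k := uint64(2)          // slot in ents
	id := k + evictCount + 1

	// For a dynamic table, index 1 is the newest entry while ents[0]
	// is the oldest, so idToIndex returns len - k rather than k + 1.
	tableLen := uint64(5)
	hpackIndex := tableLen - (id - evictCount - 1)

	fmt.Println(id, hpackIndex) // prints: 6 3
}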
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/http2.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/http2.go
index b6b0f9ad..d565f40e 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/http2.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/http2.go
@@ -376,12 +376,16 @@ func (s *sorter) SortStrings(ss []string) {
// validPseudoPath reports whether v is a valid :path pseudo-header
// value. It must be either:
//
-// *) a non-empty string starting with '/', but not with with "//",
+// *) a non-empty string starting with '/'
// *) the string '*', for OPTIONS requests.
//
// For now this is only used as a quick check for deciding when to clean
// up Opaque URLs before sending requests from the Transport.
// See golang.org/issue/16847
+//
+// We used to enforce that the path also didn't start with "//", but
+// Google's GFE accepts such paths and Chrome sends them, so ignore
+// that part of the spec. See golang.org/issue/19103.
func validPseudoPath(v string) bool {
- return (len(v) > 0 && v[0] == '/' && (len(v) == 1 || v[1] != '/')) || v == "*"
+ return (len(v) > 0 && v[0] == '/') || v == "*"
}
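The behavioral change is easiest to see with both predicates side by side; the two bodies below are copied from the removed and added lines above, and the sample paths are illustrative:

package main

import "fmt"

func main() {
	oldRule := func(v string) bool {
		return (len(v) > 0 && v[0] == '/' && (len(v) == 1 || v[1] != '/')) || v == "*"
	}
	newRule := func(v string) bool {
		return (len(v) > 0 && v[0] == '/') || v == "*"
	}
	for _, p := range []string{"/", "/index.html", "//host/x", "*", ""} {
		fmt.Printf("%-14q old=%-5v new=%v\n", p, oldRule(p), newRule(p))
	}
}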
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go16.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go16.go
index efd2e128..508cebcc 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go16.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go16.go
@@ -7,7 +7,6 @@
package http2
import (
- "crypto/tls"
"net/http"
"time"
)
@@ -20,27 +19,3 @@ func transportExpectContinueTimeout(t1 *http.Transport) time.Duration {
return 0
}
-
-// isBadCipher reports whether the cipher is blacklisted by the HTTP/2 spec.
-func isBadCipher(cipher uint16) bool {
- switch cipher {
- case tls.TLS_RSA_WITH_RC4_128_SHA,
- tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_RSA_WITH_AES_256_CBC_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
- tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
- tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA:
- // Reject cipher suites from Appendix A.
- // "This list includes those cipher suites that do not
- // offer an ephemeral key exchange and those that are
- // based on the TLS null, stream or block cipher type"
- return true
- default:
- return false
- }
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go18.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go18.go
index efbf83c3..6f8d3f86 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go18.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go18.go
@@ -25,3 +25,5 @@ func reqGetBody(req *http.Request) func() (io.ReadCloser, error) {
}
func reqBodyIsNoBody(io.ReadCloser) bool { return false }
+
+func go18httpNoBody() io.ReadCloser { return nil } // for tests only
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go19.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go19.go
new file mode 100644
index 00000000..5ae07726
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/not_go19.go
@@ -0,0 +1,16 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !go1.9
+
+package http2
+
+import (
+ "net/http"
+)
+
+func configureServer19(s *http.Server, conf *Server) error {
+ // not supported prior to go1.9
+ return nil
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/pipe.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/pipe.go
index 53b7a1da..a6140099 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/pipe.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/pipe.go
@@ -10,13 +10,13 @@ import (
"sync"
)
-// pipe is a goroutine-safe io.Reader/io.Writer pair. It's like
+// pipe is a goroutine-safe io.Reader/io.Writer pair. It's like
// io.Pipe except there are no PipeReader/PipeWriter halves, and the
// underlying buffer is an interface. (io.Pipe is always unbuffered)
type pipe struct {
mu sync.Mutex
- c sync.Cond // c.L lazily initialized to &p.mu
- b pipeBuffer
+ c sync.Cond // c.L lazily initialized to &p.mu
+ b pipeBuffer // nil when done reading
err error // read error once empty. non-nil means closed.
breakErr error // immediate read error (caller doesn't see rest of b)
donec chan struct{} // closed on error
@@ -32,6 +32,9 @@ type pipeBuffer interface {
func (p *pipe) Len() int {
p.mu.Lock()
defer p.mu.Unlock()
+ if p.b == nil {
+ return 0
+ }
return p.b.Len()
}
@@ -47,7 +50,7 @@ func (p *pipe) Read(d []byte) (n int, err error) {
if p.breakErr != nil {
return 0, p.breakErr
}
- if p.b.Len() > 0 {
+ if p.b != nil && p.b.Len() > 0 {
return p.b.Read(d)
}
if p.err != nil {
@@ -55,6 +58,7 @@ func (p *pipe) Read(d []byte) (n int, err error) {
p.readFn() // e.g. copy trailers
p.readFn = nil // not sticky like p.err
}
+ p.b = nil
return 0, p.err
}
p.c.Wait()
@@ -75,6 +79,9 @@ func (p *pipe) Write(d []byte) (n int, err error) {
if p.err != nil {
return 0, errClosedPipeWrite
}
+ if p.breakErr != nil {
+ return len(d), nil // discard when there is no reader
+ }
return p.b.Write(d)
}
@@ -109,6 +116,9 @@ func (p *pipe) closeWithError(dst *error, err error, fn func()) {
return
}
p.readFn = fn
+ if dst == &p.breakErr {
+ p.b = nil
+ }
*dst = err
p.closeDoneLocked()
}
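The pipe hunks all apply one idiom: once no reader can consume the buffer (reading is done, or breakErr is set), nil it out so its memory can be collected, and make every accessor tolerate the nil. A minimal sketch of that guard, not the actual pipe type:

package main

import (
	"bytes"
	"fmt"
)

type bufHolder struct {
	b *bytes.Buffer // nil once no further reads can happen
}

func (h *bufHolder) Len() int {
	if h.b == nil {
		return 0 // same guard pipe.Len now applies
	}
	return h.b.Len()
}

func main() {
	h := &bufHolder{b: bytes.NewBufferString("data")}
	fmt.Println(h.Len()) // 4
	h.b = nil            // "done reading"
	fmt.Println(h.Len()) // 0, no nil-pointer panic
}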
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/server.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/server.go
index 3c6b90cc..eae143dd 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/server.go
@@ -110,9 +110,41 @@ type Server struct {
// activity for the purposes of IdleTimeout.
IdleTimeout time.Duration
+ // MaxUploadBufferPerConnection is the size of the initial flow
+ // control window for each connection. The HTTP/2 spec does not
+ // allow this to be smaller than 65535 or larger than 2^32-1.
+ // If the value is outside this range, a default value will be
+ // used instead.
+ MaxUploadBufferPerConnection int32
+
+ // MaxUploadBufferPerStream is the size of the initial flow control
+ // window for each stream. The HTTP/2 spec does not allow this to
+ // be larger than 2^32-1. If the value is zero or larger than the
+ // maximum, a default value will be used instead.
+ MaxUploadBufferPerStream int32
+
// NewWriteScheduler constructs a write scheduler for a connection.
// If nil, a default scheduler is chosen.
NewWriteScheduler func() WriteScheduler
+
+ // Internal state. This is a pointer (rather than embedded directly)
+ // so that we don't embed a Mutex in this struct, which will make the
+ // struct non-copyable, which might break some callers.
+ state *serverInternalState
+}
+
+func (s *Server) initialConnRecvWindowSize() int32 {
+ if s.MaxUploadBufferPerConnection > initialWindowSize {
+ return s.MaxUploadBufferPerConnection
+ }
+ return 1 << 20
+}
+
+func (s *Server) initialStreamRecvWindowSize() int32 {
+ if s.MaxUploadBufferPerStream > 0 {
+ return s.MaxUploadBufferPerStream
+ }
+ return 1 << 20
}
func (s *Server) maxReadFrameSize() uint32 {
@@ -129,6 +161,40 @@ func (s *Server) maxConcurrentStreams() uint32 {
return defaultMaxStreams
}
+type serverInternalState struct {
+ mu sync.Mutex
+ activeConns map[*serverConn]struct{}
+}
+
+func (s *serverInternalState) registerConn(sc *serverConn) {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ s.activeConns[sc] = struct{}{}
+ s.mu.Unlock()
+}
+
+func (s *serverInternalState) unregisterConn(sc *serverConn) {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ delete(s.activeConns, sc)
+ s.mu.Unlock()
+}
+
+func (s *serverInternalState) startGracefulShutdown() {
+ if s == nil {
+ return // if the Server was used without calling ConfigureServer
+ }
+ s.mu.Lock()
+ for sc := range s.activeConns {
+ sc.startGracefulShutdown()
+ }
+ s.mu.Unlock()
+}
+
// ConfigureServer adds HTTP/2 support to a net/http Server.
//
// The configuration conf may be nil.
@@ -141,9 +207,13 @@ func ConfigureServer(s *http.Server, conf *Server) error {
if conf == nil {
conf = new(Server)
}
+ conf.state = &serverInternalState{activeConns: make(map[*serverConn]struct{})}
if err := configureServer18(s, conf); err != nil {
return err
}
+ if err := configureServer19(s, conf); err != nil {
+ return err
+ }
if s.TLSConfig == nil {
s.TLSConfig = new(tls.Config)
@@ -255,35 +325,37 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
defer cancel()
sc := &serverConn{
- srv: s,
- hs: opts.baseConfig(),
- conn: c,
- baseCtx: baseCtx,
- remoteAddrStr: c.RemoteAddr().String(),
- bw: newBufferedWriter(c),
- handler: opts.handler(),
- streams: make(map[uint32]*stream),
- readFrameCh: make(chan readFrameResult),
- wantWriteFrameCh: make(chan FrameWriteRequest, 8),
- wantStartPushCh: make(chan startPushRequest, 8),
- wroteFrameCh: make(chan frameWriteResult, 1), // buffered; one send in writeFrameAsync
- bodyReadCh: make(chan bodyReadMsg), // buffering doesn't matter either way
- doneServing: make(chan struct{}),
- clientMaxStreams: math.MaxUint32, // Section 6.5.2: "Initially, there is no limit to this value"
- advMaxStreams: s.maxConcurrentStreams(),
- initialWindowSize: initialWindowSize,
- maxFrameSize: initialMaxFrameSize,
- headerTableSize: initialHeaderTableSize,
- serveG: newGoroutineLock(),
- pushEnabled: true,
- }
+ srv: s,
+ hs: opts.baseConfig(),
+ conn: c,
+ baseCtx: baseCtx,
+ remoteAddrStr: c.RemoteAddr().String(),
+ bw: newBufferedWriter(c),
+ handler: opts.handler(),
+ streams: make(map[uint32]*stream),
+ readFrameCh: make(chan readFrameResult),
+ wantWriteFrameCh: make(chan FrameWriteRequest, 8),
+ serveMsgCh: make(chan interface{}, 8),
+ wroteFrameCh: make(chan frameWriteResult, 1), // buffered; one send in writeFrameAsync
+ bodyReadCh: make(chan bodyReadMsg), // buffering doesn't matter either way
+ doneServing: make(chan struct{}),
+ clientMaxStreams: math.MaxUint32, // Section 6.5.2: "Initially, there is no limit to this value"
+ advMaxStreams: s.maxConcurrentStreams(),
+ initialStreamSendWindowSize: initialWindowSize,
+ maxFrameSize: initialMaxFrameSize,
+ headerTableSize: initialHeaderTableSize,
+ serveG: newGoroutineLock(),
+ pushEnabled: true,
+ }
+
+ s.state.registerConn(sc)
+ defer s.state.unregisterConn(sc)
// The net/http package sets the write deadline from the
// http.Server.WriteTimeout during the TLS handshake, but then
- // passes the connection off to us with the deadline already
- // set. Disarm it here so that it is not applied to additional
- // streams opened on this connection.
- // TODO: implement WriteTimeout fully. See Issue 18437.
+ // passes the connection off to us with the deadline already set.
+ // Write deadlines are set per stream in serverConn.newStream.
+ // Disarm the net.Conn write deadline here.
if sc.hs.WriteTimeout != 0 {
sc.conn.SetWriteDeadline(time.Time{})
}
@@ -294,6 +366,9 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
sc.writeSched = NewRandomWriteScheduler()
}
+ // These start at the RFC-specified defaults. If there is a higher
+ // configured value for inflow, that will be updated when we send a
+ // WINDOW_UPDATE shortly after sending SETTINGS.
sc.flow.add(initialWindowSize)
sc.inflow.add(initialWindowSize)
sc.hpackEncoder = hpack.NewEncoder(&sc.headerWriteBuf)
@@ -376,10 +451,9 @@ type serverConn struct {
doneServing chan struct{} // closed when serverConn.serve ends
readFrameCh chan readFrameResult // written by serverConn.readFrames
wantWriteFrameCh chan FrameWriteRequest // from handlers -> serve
- wantStartPushCh chan startPushRequest // from handlers -> serve
wroteFrameCh chan frameWriteResult // from writeFrameAsync -> serve, tickles more frame writes
bodyReadCh chan bodyReadMsg // from handlers -> serve
- testHookCh chan func(int) // code to run on the serve loop
+ serveMsgCh chan interface{} // misc messages & code to send to / run on the serve loop
flow flow // conn-wide (not stream-specific) outbound flow control
inflow flow // conn-wide inbound flow control
tlsState *tls.ConnectionState // shared by all handlers, like net/http
@@ -387,38 +461,39 @@ type serverConn struct {
writeSched WriteScheduler
// Everything following is owned by the serve loop; use serveG.check():
- serveG goroutineLock // used to verify funcs are on serve()
- pushEnabled bool
- sawFirstSettings bool // got the initial SETTINGS frame after the preface
- needToSendSettingsAck bool
- unackedSettings int // how many SETTINGS have we sent without ACKs?
- clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit)
- advMaxStreams uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised the client
- curClientStreams uint32 // number of open streams initiated by the client
- curPushedStreams uint32 // number of open streams initiated by server push
- maxClientStreamID uint32 // max ever seen from client (odd), or 0 if there have been no client requests
- maxPushPromiseID uint32 // ID of the last push promise (even), or 0 if there have been no pushes
- streams map[uint32]*stream
- initialWindowSize int32
- maxFrameSize int32
- headerTableSize uint32
- peerMaxHeaderListSize uint32 // zero means unknown (default)
- canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
- writingFrame bool // started writing a frame (on serve goroutine or separate)
- writingFrameAsync bool // started a frame on its own goroutine but haven't heard back on wroteFrameCh
- needsFrameFlush bool // last frame write wasn't a flush
- inGoAway bool // we've started to or sent GOAWAY
- inFrameScheduleLoop bool // whether we're in the scheduleFrameWrite loop
- needToSendGoAway bool // we need to schedule a GOAWAY frame write
- goAwayCode ErrCode
- shutdownTimerCh <-chan time.Time // nil until used
- shutdownTimer *time.Timer // nil until used
- idleTimer *time.Timer // nil if unused
- idleTimerCh <-chan time.Time // nil if unused
+ serveG goroutineLock // used to verify funcs are on serve()
+ pushEnabled bool
+ sawFirstSettings bool // got the initial SETTINGS frame after the preface
+ needToSendSettingsAck bool
+ unackedSettings int // how many SETTINGS have we sent without ACKs?
+ clientMaxStreams uint32 // SETTINGS_MAX_CONCURRENT_STREAMS from client (our PUSH_PROMISE limit)
+ advMaxStreams uint32 // our SETTINGS_MAX_CONCURRENT_STREAMS advertised to the client
+ curClientStreams uint32 // number of open streams initiated by the client
+ curPushedStreams uint32 // number of open streams initiated by server push
+ maxClientStreamID uint32 // max ever seen from client (odd), or 0 if there have been no client requests
+ maxPushPromiseID uint32 // ID of the last push promise (even), or 0 if there have been no pushes
+ streams map[uint32]*stream
+ initialStreamSendWindowSize int32
+ maxFrameSize int32
+ headerTableSize uint32
+ peerMaxHeaderListSize uint32 // zero means unknown (default)
+ canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
+ writingFrame bool // started writing a frame (on serve goroutine or separate)
+ writingFrameAsync bool // started a frame on its own goroutine but haven't heard back on wroteFrameCh
+ needsFrameFlush bool // last frame write wasn't a flush
+ inGoAway bool // we've started to or sent GOAWAY
+ inFrameScheduleLoop bool // whether we're in the scheduleFrameWrite loop
+ needToSendGoAway bool // we need to schedule a GOAWAY frame write
+ goAwayCode ErrCode
+ shutdownTimer *time.Timer // nil until used
+ idleTimer *time.Timer // nil if unused
// Owned by the writeFrameAsync goroutine:
headerWriteBuf bytes.Buffer
hpackEncoder *hpack.Encoder
+
+ // Used by startGracefulShutdown.
+ shutdownOnce sync.Once
}
func (sc *serverConn) maxHeaderListSize() uint32 {
@@ -463,10 +538,10 @@ type stream struct {
numTrailerValues int64
weight uint8
state streamState
- resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
- gotTrailerHeader bool // HEADER frame for trailers was seen
- wroteHeaders bool // whether we wrote headers (not status 100)
- reqBuf []byte // if non-nil, body pipe buffer to return later at EOF
+ resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
+ gotTrailerHeader bool // HEADER frame for trailers was seen
+ wroteHeaders bool // whether we wrote headers (not status 100)
+ writeDeadline *time.Timer // nil if unused
trailer http.Header // accumulated trailers
reqTrailer http.Header // handler's Request.Trailer
@@ -696,48 +771,48 @@ func (sc *serverConn) serve() {
{SettingMaxFrameSize, sc.srv.maxReadFrameSize()},
{SettingMaxConcurrentStreams, sc.advMaxStreams},
{SettingMaxHeaderListSize, sc.maxHeaderListSize()},
-
- // TODO: more actual settings, notably
- // SettingInitialWindowSize, but then we also
- // want to bump up the conn window size the
- // same amount here right after the settings
+ {SettingInitialWindowSize, uint32(sc.srv.initialStreamRecvWindowSize())},
},
})
sc.unackedSettings++
+ // Each connection starts with initialWindowSize inflow tokens.
+ // If a higher value is configured, we add more tokens.
+ if diff := sc.srv.initialConnRecvWindowSize() - initialWindowSize; diff > 0 {
+ sc.sendWindowUpdate(nil, int(diff))
+ }
+
if err := sc.readPreface(); err != nil {
sc.condlogf(err, "http2: server: error reading preface from client %v: %v", sc.conn.RemoteAddr(), err)
return
}
// Now that we've got the preface, get us out of the
- // "StateNew" state. We can't go directly to idle, though.
+ // "StateNew" state. We can't go directly to idle, though.
// Active means we read some data and anticipate a request. We'll
// do another Active when we get a HEADERS frame.
sc.setConnState(http.StateActive)
sc.setConnState(http.StateIdle)
if sc.srv.IdleTimeout != 0 {
- sc.idleTimer = time.NewTimer(sc.srv.IdleTimeout)
+ sc.idleTimer = time.AfterFunc(sc.srv.IdleTimeout, sc.onIdleTimer)
defer sc.idleTimer.Stop()
- sc.idleTimerCh = sc.idleTimer.C
- }
-
- var gracefulShutdownCh <-chan struct{}
- if sc.hs != nil {
- gracefulShutdownCh = h1ServerShutdownChan(sc.hs)
}
go sc.readFrames() // closed by defer sc.conn.Close above
- settingsTimer := time.NewTimer(firstSettingsTimeout)
+ settingsTimer := time.AfterFunc(firstSettingsTimeout, sc.onSettingsTimer)
+ defer settingsTimer.Stop()
+
loopNum := 0
for {
loopNum++
select {
case wr := <-sc.wantWriteFrameCh:
+ if se, ok := wr.write.(StreamError); ok {
+ sc.resetStream(se)
+ break
+ }
sc.writeFrame(wr)
- case spr := <-sc.wantStartPushCh:
- sc.startPush(spr)
case res := <-sc.wroteFrameCh:
sc.wroteFrame(res)
case res := <-sc.readFrameCh:
@@ -745,26 +820,37 @@ func (sc *serverConn) serve() {
return
}
res.readMore()
- if settingsTimer.C != nil {
+ if settingsTimer != nil {
settingsTimer.Stop()
- settingsTimer.C = nil
+ settingsTimer = nil
}
case m := <-sc.bodyReadCh:
sc.noteBodyRead(m.st, m.n)
- case <-settingsTimer.C:
- sc.logf("timeout waiting for SETTINGS frames from %v", sc.conn.RemoteAddr())
- return
- case <-gracefulShutdownCh:
- gracefulShutdownCh = nil
- sc.startGracefulShutdown()
- case <-sc.shutdownTimerCh:
- sc.vlogf("GOAWAY close timer fired; closing conn from %v", sc.conn.RemoteAddr())
- return
- case <-sc.idleTimerCh:
- sc.vlogf("connection is idle")
- sc.goAway(ErrCodeNo)
- case fn := <-sc.testHookCh:
- fn(loopNum)
+ case msg := <-sc.serveMsgCh:
+ switch v := msg.(type) {
+ case func(int):
+ v(loopNum) // for testing
+ case *serverMessage:
+ switch v {
+ case settingsTimerMsg:
+ sc.logf("timeout waiting for SETTINGS frames from %v", sc.conn.RemoteAddr())
+ return
+ case idleTimerMsg:
+ sc.vlogf("connection is idle")
+ sc.goAway(ErrCodeNo)
+ case shutdownTimerMsg:
+ sc.vlogf("GOAWAY close timer fired; closing conn from %v", sc.conn.RemoteAddr())
+ return
+ case gracefulShutdownMsg:
+ sc.startGracefulShutdownInternal()
+ default:
+ panic("unknown timer")
+ }
+ case *startPushRequest:
+ sc.startPush(v)
+ default:
+ panic(fmt.Sprintf("unexpected type %T", v))
+ }
}
if sc.inGoAway && sc.curOpenStreams() == 0 && !sc.needToSendGoAway && !sc.writingFrame {
@@ -773,6 +859,36 @@ func (sc *serverConn) serve() {
}
}
+func (sc *serverConn) awaitGracefulShutdown(sharedCh <-chan struct{}, privateCh chan struct{}) {
+ select {
+ case <-sc.doneServing:
+ case <-sharedCh:
+ close(privateCh)
+ }
+}
+
+type serverMessage int
+
+// Message values sent to serveMsgCh.
+var (
+ settingsTimerMsg = new(serverMessage)
+ idleTimerMsg = new(serverMessage)
+ shutdownTimerMsg = new(serverMessage)
+ gracefulShutdownMsg = new(serverMessage)
+)
+
+func (sc *serverConn) onSettingsTimer() { sc.sendServeMsg(settingsTimerMsg) }
+func (sc *serverConn) onIdleTimer() { sc.sendServeMsg(idleTimerMsg) }
+func (sc *serverConn) onShutdownTimer() { sc.sendServeMsg(shutdownTimerMsg) }
+
+func (sc *serverConn) sendServeMsg(msg interface{}) {
+ sc.serveG.checkNotOn() // NOT
+ select {
+ case sc.serveMsgCh <- msg:
+ case <-sc.doneServing:
+ }
+}
+
// readPreface reads the ClientPreface greeting from the peer
// or returns an error on timeout or an invalid greeting.
func (sc *serverConn) readPreface() error {
@@ -1014,7 +1130,11 @@ func (sc *serverConn) wroteFrame(res frameWriteResult) {
// stateClosed after the RST_STREAM frame is
// written.
st.state = stateHalfClosedLocal
- sc.resetStream(streamError(st.id, ErrCodeCancel))
+ // Section 8.1: a server MAY request that the client abort
+ // transmission of a request without error by sending a
+ // RST_STREAM with an error code of NO_ERROR after sending
+ // a complete response.
+ sc.resetStream(streamError(st.id, ErrCodeNo))
case stateHalfClosedRemote:
sc.closeStream(st, errHandlerComplete)
}
@@ -1086,10 +1206,19 @@ func (sc *serverConn) scheduleFrameWrite() {
sc.inFrameScheduleLoop = false
}
-// startGracefulShutdown sends a GOAWAY with ErrCodeNo to tell the
-// client we're gracefully shutting down. The connection isn't closed
-// until all current streams are done.
+// startGracefulShutdown gracefully shuts down a connection. This
+// sends GOAWAY with ErrCodeNo to tell the client we're gracefully
+// shutting down. The connection isn't closed until all current
+// streams are done.
+//
+// startGracefulShutdown returns immediately; it does not wait until
+// the connection has shut down.
func (sc *serverConn) startGracefulShutdown() {
+ sc.serveG.checkNotOn() // NOT
+ sc.shutdownOnce.Do(func() { sc.sendServeMsg(gracefulShutdownMsg) })
+}
+
+func (sc *serverConn) startGracefulShutdownInternal() {
sc.goAwayIn(ErrCodeNo, 0)
}
@@ -1121,8 +1250,7 @@ func (sc *serverConn) goAwayIn(code ErrCode, forceCloseIn time.Duration) {
func (sc *serverConn) shutDownIn(d time.Duration) {
sc.serveG.check()
- sc.shutdownTimer = time.NewTimer(d)
- sc.shutdownTimerCh = sc.shutdownTimer.C
+ sc.shutdownTimer = time.AfterFunc(d, sc.onShutdownTimer)
}
func (sc *serverConn) resetStream(se StreamError) {
@@ -1305,6 +1433,9 @@ func (sc *serverConn) closeStream(st *stream, err error) {
panic(fmt.Sprintf("invariant; can't close stream in state %v", st.state))
}
st.state = stateClosed
+ if st.writeDeadline != nil {
+ st.writeDeadline.Stop()
+ }
if st.isPushed() {
sc.curPushedStreams--
} else {
@@ -1317,7 +1448,7 @@ func (sc *serverConn) closeStream(st *stream, err error) {
sc.idleTimer.Reset(sc.srv.IdleTimeout)
}
if h1ServerKeepAlivesDisabled(sc.hs) {
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
}
}
if p := st.body; p != nil {
@@ -1395,9 +1526,9 @@ func (sc *serverConn) processSettingInitialWindowSize(val uint32) error {
// adjust the size of all stream flow control windows that it
// maintains by the difference between the new value and the
// old value."
- old := sc.initialWindowSize
- sc.initialWindowSize = int32(val)
- growth := sc.initialWindowSize - old // may be negative
+ old := sc.initialStreamSendWindowSize
+ sc.initialStreamSendWindowSize = int32(val)
+ growth := int32(val) - old // may be negative
for _, st := range sc.streams {
if !st.flow.add(growth) {
// 6.9.2 Initial Flow Control Window Size
@@ -1504,7 +1635,7 @@ func (sc *serverConn) processGoAway(f *GoAwayFrame) error {
} else {
sc.vlogf("http2: received GOAWAY %+v, starting graceful shutdown", f)
}
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
// http://tools.ietf.org/html/rfc7540#section-6.8
// We should not create any new streams, which means we should disable push.
sc.pushEnabled = false
@@ -1543,6 +1674,12 @@ func (st *stream) copyTrailersToHandlerRequest() {
}
}
+// onWriteTimeout is run on its own goroutine (from time.AfterFunc)
+// when the stream's WriteTimeout has fired.
+func (st *stream) onWriteTimeout() {
+ st.sc.writeFrameFromHandler(FrameWriteRequest{write: streamError(st.id, ErrCodeInternal)})
+}
+
func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
sc.serveG.check()
id := f.StreamID
@@ -1719,9 +1856,12 @@ func (sc *serverConn) newStream(id, pusherID uint32, state streamState) *stream
}
st.cw.Init()
st.flow.conn = &sc.flow // link to conn-level counter
- st.flow.add(sc.initialWindowSize)
- st.inflow.conn = &sc.inflow // link to conn-level counter
- st.inflow.add(initialWindowSize) // TODO: update this when we send a higher initial window size in the initial settings
+ st.flow.add(sc.initialStreamSendWindowSize)
+ st.inflow.conn = &sc.inflow // link to conn-level counter
+ st.inflow.add(sc.srv.initialStreamRecvWindowSize())
+ if sc.hs.WriteTimeout != 0 {
+ st.writeDeadline = time.AfterFunc(sc.hs.WriteTimeout, st.onWriteTimeout)
+ }
sc.streams[id] = st
sc.writeSched.OpenStream(st.id, OpenStreamOptions{PusherID: pusherID})
@@ -1785,16 +1925,14 @@ func (sc *serverConn) newWriterAndRequest(st *stream, f *MetaHeadersFrame) (*res
return nil, nil, err
}
if bodyOpen {
- st.reqBuf = getRequestBodyBuf()
- req.Body.(*requestBody).pipe = &pipe{
- b: &fixedBuffer{buf: st.reqBuf},
- }
-
if vv, ok := rp.header["Content-Length"]; ok {
req.ContentLength, _ = strconv.ParseInt(vv[0], 10, 64)
} else {
req.ContentLength = -1
}
+ req.Body.(*requestBody).pipe = &pipe{
+ b: &dataBuffer{expected: req.ContentLength},
+ }
}
return rw, req, nil
}
@@ -1890,24 +2028,6 @@ func (sc *serverConn) newWriterAndRequestNoBody(st *stream, rp requestParam) (*r
return rw, req, nil
}
-var reqBodyCache = make(chan []byte, 8)
-
-func getRequestBodyBuf() []byte {
- select {
- case b := <-reqBodyCache:
- return b
- default:
- return make([]byte, initialWindowSize)
- }
-}
-
-func putRequestBodyBuf(b []byte) {
- select {
- case reqBodyCache <- b:
- default:
- }
-}
-
// Run on its own goroutine.
func (sc *serverConn) runHandler(rw *responseWriter, req *http.Request, handler func(http.ResponseWriter, *http.Request)) {
didPanic := true
@@ -2003,12 +2123,6 @@ func (sc *serverConn) noteBodyReadFromHandler(st *stream, n int, err error) {
case <-sc.doneServing:
}
}
- if err == io.EOF {
- if buf := st.reqBuf; buf != nil {
- st.reqBuf = nil // shouldn't matter; field unused by other
- putRequestBodyBuf(buf)
- }
- }
}
func (sc *serverConn) noteBodyRead(st *stream, n int) {
@@ -2103,8 +2217,8 @@ func (b *requestBody) Read(p []byte) (n int, err error) {
return
}
-// responseWriter is the http.ResponseWriter implementation. It's
-// intentionally small (1 pointer wide) to minimize garbage. The
+// responseWriter is the http.ResponseWriter implementation. It's
+// intentionally small (1 pointer wide) to minimize garbage. The
// responseWriterState pointer inside is zeroed at the end of a
// request (in handlerDone) and calls on the responseWriter thereafter
// simply crash (caller's mistake), but the much larger responseWriterState
@@ -2138,6 +2252,7 @@ type responseWriterState struct {
wroteHeader bool // WriteHeader called (explicitly or implicitly). Not necessarily sent to user yet.
sentHeader bool // have we sent the header frame?
handlerDone bool // handler has finished
+ dirty bool // a Write failed; don't reuse this responseWriterState
sentContentLen int64 // non-zero if handler set a Content-Length header
wroteBytes int64
@@ -2219,6 +2334,7 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
date: date,
})
if err != nil {
+ rws.dirty = true
return 0, err
}
if endStream {
@@ -2240,6 +2356,7 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
if len(p) > 0 || endStream {
// only send a 0 byte DATA frame if we're ending the stream.
if err := rws.conn.writeDataFromHandler(rws.stream, p, endStream); err != nil {
+ rws.dirty = true
return 0, err
}
}
@@ -2251,6 +2368,9 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
trailers: rws.trailers,
endStream: true,
})
+ if err != nil {
+ rws.dirty = true
+ }
return len(p), err
}
return len(p), nil
@@ -2278,7 +2398,7 @@ const TrailerPrefix = "Trailer:"
// says you SHOULD (but not must) predeclare any trailers in the
// header, the official ResponseWriter rules said trailers in Go must
// be predeclared, and then we reuse the same ResponseWriter.Header()
-// map to mean both Headers and Trailers. When it's time to write the
+// map to mean both Headers and Trailers. When it's time to write the
// Trailers, we pick out the fields of Headers that were declared as
// trailers. That worked for a while, until we found the first major
// user of Trailers in the wild: gRPC (using them only over http2),
@@ -2390,7 +2510,7 @@ func cloneHeader(h http.Header) http.Header {
//
// * Handler calls w.Write or w.WriteString ->
// * -> rws.bw (*bufio.Writer) ->
-// * (Handler migth call Flush)
+// * (Handler might call Flush)
// * -> chunkWriter{rws}
// * -> responseWriterState.writeChunk(p []byte)
// * -> responseWriterState.writeChunk (most of the magic; see comment there)
@@ -2429,10 +2549,19 @@ func (w *responseWriter) write(lenData int, dataB []byte, dataS string) (n int,
func (w *responseWriter) handlerDone() {
rws := w.rws
+ dirty := rws.dirty
rws.handlerDone = true
w.Flush()
w.rws = nil
- responseWriterStatePool.Put(rws)
+ if !dirty {
+ // Only recycle the pool if all prior Write calls to
+ // the serverConn goroutine completed successfully. If
+ // they returned earlier due to resets from the peer
+ // there might still be write goroutines outstanding
+ // from the serverConn referencing the rws memory. See
+ // issue 20704.
+ responseWriterStatePool.Put(rws)
+ }
}
// Push errors.
@@ -2514,7 +2643,7 @@ func (w *responseWriter) push(target string, opts pushOptions) error {
return fmt.Errorf("method %q must be GET or HEAD", opts.Method)
}
- msg := startPushRequest{
+ msg := &startPushRequest{
parent: st,
method: opts.Method,
url: u,
@@ -2527,7 +2656,7 @@ func (w *responseWriter) push(target string, opts pushOptions) error {
return errClientDisconnected
case <-st.cw:
return errStreamClosed
- case sc.wantStartPushCh <- msg:
+ case sc.serveMsgCh <- msg:
}
select {
@@ -2549,7 +2678,7 @@ type startPushRequest struct {
done chan error
}
-func (sc *serverConn) startPush(msg startPushRequest) {
+func (sc *serverConn) startPush(msg *startPushRequest) {
sc.serveG.check()
// http://tools.ietf.org/html/rfc7540#section-6.6.
@@ -2588,7 +2717,7 @@ func (sc *serverConn) startPush(msg startPushRequest) {
// A server that is unable to establish a new stream identifier can send a GOAWAY
// frame so that the client is forced to open a new connection for new streams.
if sc.maxPushPromiseID+2 >= 1<<31 {
- sc.startGracefulShutdown()
+ sc.startGracefulShutdownInternal()
return 0, ErrPushLimitReached
}
sc.maxPushPromiseID += 2
@@ -2713,31 +2842,6 @@ var badTrailer = map[string]bool{
"Www-Authenticate": true,
}
-// h1ServerShutdownChan returns a channel that will be closed when the
-// provided *http.Server wants to shut down.
-//
-// This is a somewhat hacky way to get at http1 innards. It works
-// when the http2 code is bundled into the net/http package in the
-// standard library. The alternatives ended up making the cmd/go tool
-// depend on http Servers. This is the lightest option for now.
-// This is tested via the TestServeShutdown* tests in net/http.
-func h1ServerShutdownChan(hs *http.Server) <-chan struct{} {
- if fn := testh1ServerShutdownChan; fn != nil {
- return fn(hs)
- }
- var x interface{} = hs
- type I interface {
- getDoneChan() <-chan struct{}
- }
- if hs, ok := x.(I); ok {
- return hs.getDoneChan()
- }
- return nil
-}
-
-// optional test hook for h1ServerShutdownChan.
-var testh1ServerShutdownChan func(hs *http.Server) <-chan struct{}
-
// h1ServerKeepAlivesDisabled reports whether hs has its keep-alives
// disabled. See comments on h1ServerShutdownChan above for why
// the code is written this way.
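From the caller's side the new server knobs are plain fields, and ConfigureServer now also wires in the go1.9 hook alongside the existing go1.8 one. A hedged usage sketch (the address and certificate paths are placeholders):

package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443"}
	h2 := &http2.Server{
		// Initial flow-control windows. Per the accessors above, a
		// connection value <= 65535 or a stream value <= 0 falls back
		// to the 1 MiB default.
		MaxUploadBufferPerConnection: 1 << 20,
		MaxUploadBufferPerStream:     1 << 20,
	}
	if err := http2.ConfigureServer(srv, h2); err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}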
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/transport.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/transport.go
index 0c7e859d..e0dfe9f6 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/transport.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/transport.go
@@ -18,6 +18,7 @@ import (
"io/ioutil"
"log"
"math"
+ mathrand "math/rand"
"net"
"net/http"
"sort"
@@ -164,6 +165,7 @@ type ClientConn struct {
goAwayDebug string // goAway frame's debug data, retained as a string
streams map[uint32]*clientStream // client-initiated
nextStreamID uint32
+ pendingRequests int // requests blocked and waiting to be sent because len(streams) == maxConcurrentStreams
pings map[[8]byte]chan struct{} // in flight ping data to notification channel
bw *bufio.Writer
br *bufio.Reader
@@ -216,35 +218,45 @@ type clientStream struct {
resTrailer *http.Header // client's Response.Trailer
}
-// awaitRequestCancel runs in its own goroutine and waits for the user
-// to cancel a RoundTrip request, its context to expire, or for the
-// request to be done (any way it might be removed from the cc.streams
-// map: peer reset, successful completion, TCP connection breakage,
-// etc)
-func (cs *clientStream) awaitRequestCancel(req *http.Request) {
+// awaitRequestCancel waits for the user to cancel a request or for the done
+// channel to be signaled. A non-nil error is returned only if the request was
+// canceled.
+func awaitRequestCancel(req *http.Request, done <-chan struct{}) error {
ctx := reqContext(req)
if req.Cancel == nil && ctx.Done() == nil {
- return
+ return nil
}
select {
case <-req.Cancel:
- cs.cancelStream()
- cs.bufPipe.CloseWithError(errRequestCanceled)
+ return errRequestCanceled
case <-ctx.Done():
+ return ctx.Err()
+ case <-done:
+ return nil
+ }
+}
+
+// awaitRequestCancel waits for the user to cancel a request, its context to
+// expire, or for the request to be done (any way it might be removed from the
+// cc.streams map: peer reset, successful completion, TCP connection breakage,
+// etc). If the request is canceled, then cs will be canceled and closed.
+func (cs *clientStream) awaitRequestCancel(req *http.Request) {
+ if err := awaitRequestCancel(req, cs.done); err != nil {
cs.cancelStream()
- cs.bufPipe.CloseWithError(ctx.Err())
- case <-cs.done:
+ cs.bufPipe.CloseWithError(err)
}
}
func (cs *clientStream) cancelStream() {
- cs.cc.mu.Lock()
+ cc := cs.cc
+ cc.mu.Lock()
didReset := cs.didReset
cs.didReset = true
- cs.cc.mu.Unlock()
+ cc.mu.Unlock()
if !didReset {
- cs.cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
+ cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
+ cc.forgetStreamID(cs.ID)
}
}
@@ -329,7 +341,7 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res
}
addr := authorityAddr(req.URL.Scheme, req.URL.Host)
- for {
+ for retry := 0; ; retry++ {
cc, err := t.connPool().GetClientConn(req, addr)
if err != nil {
t.vlogf("http2: Transport failed to get client conn for %s: %v", addr, err)
@@ -337,9 +349,25 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res
}
traceGotConn(req, cc)
res, err := cc.RoundTrip(req)
- if err != nil {
- if req, err = shouldRetryRequest(req, err); err == nil {
- continue
+ if err != nil && retry <= 6 {
+ afterBodyWrite := false
+ if e, ok := err.(afterReqBodyWriteError); ok {
+ err = e
+ afterBodyWrite = true
+ }
+ if req, err = shouldRetryRequest(req, err, afterBodyWrite); err == nil {
+ // After the first retry, do exponential backoff with 10% jitter.
+ if retry == 0 {
+ continue
+ }
+ backoff := float64(uint(1) << (uint(retry) - 1))
+ backoff += backoff * (0.1 * mathrand.Float64())
+ select {
+ case <-time.After(time.Second * time.Duration(backoff)):
+ continue
+ case <-reqContext(req).Done():
+ return nil, reqContext(req).Err()
+ }
}
}
if err != nil {
@@ -360,43 +388,60 @@ func (t *Transport) CloseIdleConnections() {
}
var (
- errClientConnClosed = errors.New("http2: client conn is closed")
- errClientConnUnusable = errors.New("http2: client conn not usable")
-
- errClientConnGotGoAway = errors.New("http2: Transport received Server's graceful shutdown GOAWAY")
- errClientConnGotGoAwayAfterSomeReqBody = errors.New("http2: Transport received Server's graceful shutdown GOAWAY; some request body already written")
+ errClientConnClosed = errors.New("http2: client conn is closed")
+ errClientConnUnusable = errors.New("http2: client conn not usable")
+ errClientConnGotGoAway = errors.New("http2: Transport received Server's graceful shutdown GOAWAY")
)
+// afterReqBodyWriteError is a wrapper around errors returned by ClientConn.RoundTrip.
+// It is used to signal that err happened after part of Request.Body was sent to the server.
+type afterReqBodyWriteError struct {
+ err error
+}
+
+func (e afterReqBodyWriteError) Error() string {
+ return e.err.Error() + "; some request body already written"
+}
+
// shouldRetryRequest is called by RoundTrip when a request fails to get
// response headers. It is always called with a non-nil error.
// It returns either a request to retry (either the same request, or a
// modified clone), or an error if the request can't be replayed.
-func shouldRetryRequest(req *http.Request, err error) (*http.Request, error) {
- switch err {
- default:
+func shouldRetryRequest(req *http.Request, err error, afterBodyWrite bool) (*http.Request, error) {
+ if !canRetryError(err) {
return nil, err
- case errClientConnUnusable, errClientConnGotGoAway:
+ }
+ if !afterBodyWrite {
return req, nil
- case errClientConnGotGoAwayAfterSomeReqBody:
- // If the Body is nil (or http.NoBody), it's safe to reuse
- // this request and its Body.
- if req.Body == nil || reqBodyIsNoBody(req.Body) {
- return req, nil
- }
- // Otherwise we depend on the Request having its GetBody
- // func defined.
- getBody := reqGetBody(req) // Go 1.8: getBody = req.GetBody
- if getBody == nil {
- return nil, errors.New("http2: Transport: peer server initiated graceful shutdown after some of Request.Body was written; define Request.GetBody to avoid this error")
- }
- body, err := getBody()
- if err != nil {
- return nil, err
- }
- newReq := *req
- newReq.Body = body
- return &newReq, nil
}
+ // If the Body is nil (or http.NoBody), it's safe to reuse
+ // this request and its Body.
+ if req.Body == nil || reqBodyIsNoBody(req.Body) {
+ return req, nil
+ }
+ // Otherwise we depend on the Request having its GetBody
+ // func defined.
+ getBody := reqGetBody(req) // Go 1.8: getBody = req.GetBody
+ if getBody == nil {
+ return nil, fmt.Errorf("http2: Transport: cannot retry err [%v] after Request.Body was written; define Request.GetBody to avoid this error", err)
+ }
+ body, err := getBody()
+ if err != nil {
+ return nil, err
+ }
+ newReq := *req
+ newReq.Body = body
+ return &newReq, nil
+}
+
+func canRetryError(err error) bool {
+ if err == errClientConnUnusable || err == errClientConnGotGoAway {
+ return true
+ }
+ if se, ok := err.(StreamError); ok {
+ return se.Code == ErrCodeRefusedStream
+ }
+ return false
}
func (t *Transport) dialClientConn(addr string, singleUse bool) (*ClientConn, error) {
@@ -560,6 +605,8 @@ func (cc *ClientConn) setGoAway(f *GoAwayFrame) {
}
}
+// CanTakeNewRequest reports whether the connection can take a new request,
+// meaning it has not been closed and has neither received nor sent a GOAWAY.
func (cc *ClientConn) CanTakeNewRequest() bool {
cc.mu.Lock()
defer cc.mu.Unlock()
@@ -571,11 +618,10 @@ func (cc *ClientConn) canTakeNewRequestLocked() bool {
return false
}
return cc.goAway == nil && !cc.closed &&
- int64(len(cc.streams)+1) < int64(cc.maxConcurrentStreams) &&
- cc.nextStreamID < math.MaxInt32
+ int64(cc.nextStreamID)+int64(cc.pendingRequests) < math.MaxInt32
}
-// onIdleTimeout is called from a time.AfterFunc goroutine. It will
+// onIdleTimeout is called from a time.AfterFunc goroutine. It will
// only be called when we're idle, but because we're coming from a new
// goroutine, there could be a new request coming in at the same time,
// so this simply calls the synchronized closeIfIdle to shut down this
@@ -694,7 +740,7 @@ func checkConnHeaders(req *http.Request) error {
// req.ContentLength, where 0 actually means zero (not unknown) and -1
// means unknown.
func actualContentLength(req *http.Request) int64 {
- if req.Body == nil {
+ if req.Body == nil || reqBodyIsNoBody(req.Body) {
return 0
}
if req.ContentLength != 0 {
@@ -718,15 +764,14 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
hasTrailers := trailers != ""
cc.mu.Lock()
- cc.lastActive = time.Now()
- if cc.closed || !cc.canTakeNewRequestLocked() {
+ if err := cc.awaitOpenSlotForRequest(req); err != nil {
cc.mu.Unlock()
- return nil, errClientConnUnusable
+ return nil, err
}
body := req.Body
- hasBody := body != nil
contentLen := actualContentLength(req)
+ hasBody := contentLen != 0
// TODO(bradfitz): this is a copy of the logic in net/http. Unify somewhere?
var requestedGzip bool
@@ -809,21 +854,20 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
// 2xx, however, then assume the server DOES potentially
// want our body (e.g. full-duplex streaming:
// golang.org/issue/13444). If it turns out the server
- // doesn't, they'll RST_STREAM us soon enough. This is a
- // heuristic to avoid adding knobs to Transport. Hopefully
+ // doesn't, they'll RST_STREAM us soon enough. This is a
+ // heuristic to avoid adding knobs to Transport. Hopefully
// we can keep it.
bodyWriter.cancel()
cs.abortRequestBodyWrite(errStopReqBodyWrite)
}
if re.err != nil {
- if re.err == errClientConnGotGoAway {
- cc.mu.Lock()
- if cs.startedWrite {
- re.err = errClientConnGotGoAwayAfterSomeReqBody
- }
- cc.mu.Unlock()
- }
+ cc.mu.Lock()
+ afterBodyWrite := cs.startedWrite
+ cc.mu.Unlock()
cc.forgetStreamID(cs.ID)
+ if afterBodyWrite {
+ return nil, afterReqBodyWriteError{re.err}
+ }
return nil, re.err
}
res.Request = req
@@ -836,31 +880,31 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
case re := <-readLoopResCh:
return handleReadLoopResponse(re)
case <-respHeaderTimer:
- cc.forgetStreamID(cs.ID)
if !hasBody || bodyWritten {
cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
} else {
bodyWriter.cancel()
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
+ cc.forgetStreamID(cs.ID)
return nil, errTimeout
case <-ctx.Done():
- cc.forgetStreamID(cs.ID)
if !hasBody || bodyWritten {
cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
} else {
bodyWriter.cancel()
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
+ cc.forgetStreamID(cs.ID)
return nil, ctx.Err()
case <-req.Cancel:
- cc.forgetStreamID(cs.ID)
if !hasBody || bodyWritten {
cc.writeStreamReset(cs.ID, ErrCodeCancel, nil)
} else {
bodyWriter.cancel()
cs.abortRequestBodyWrite(errStopReqBodyWriteAndCancel)
}
+ cc.forgetStreamID(cs.ID)
return nil, errRequestCanceled
case <-cs.peerReset:
// processResetStream already removed the
@@ -887,6 +931,45 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) {
}
}
+// awaitOpenSlotForRequest waits until len(streams) < maxConcurrentStreams.
+// Must hold cc.mu.
+func (cc *ClientConn) awaitOpenSlotForRequest(req *http.Request) error {
+ var waitingForConn chan struct{}
+ var waitingForConnErr error // guarded by cc.mu
+ for {
+ cc.lastActive = time.Now()
+ if cc.closed || !cc.canTakeNewRequestLocked() {
+ return errClientConnUnusable
+ }
+ if int64(len(cc.streams))+1 <= int64(cc.maxConcurrentStreams) {
+ if waitingForConn != nil {
+ close(waitingForConn)
+ }
+ return nil
+ }
+ // Unfortunately, we cannot wait on a condition variable and channel at
+ // the same time, so instead, we spin up a goroutine to check if the
+ // request is canceled while we wait for a slot to open in the connection.
+ if waitingForConn == nil {
+ waitingForConn = make(chan struct{})
+ go func() {
+ if err := awaitRequestCancel(req, waitingForConn); err != nil {
+ cc.mu.Lock()
+ waitingForConnErr = err
+ cc.cond.Broadcast()
+ cc.mu.Unlock()
+ }
+ }()
+ }
+ cc.pendingRequests++
+ cc.cond.Wait()
+ cc.pendingRequests--
+ if waitingForConnErr != nil {
+ return waitingForConnErr
+ }
+ }
+}
+
// requires cc.wmu be held
func (cc *ClientConn) writeHeaders(streamID uint32, endStream bool, hdrs []byte) error {
first := true // first frame written (HEADERS is first, then CONTINUATION)
@@ -1246,7 +1329,9 @@ func (cc *ClientConn) streamByID(id uint32, andRemove bool) *clientStream {
cc.idleTimer.Reset(cc.idleTimeout)
}
close(cs.done)
- cc.cond.Broadcast() // wake up checkResetOrDone via clientStream.awaitFlowControl
+ // Wake up checkResetOrDone via clientStream.awaitFlowControl and
+ // wake up RoundTrip if there is a pending request.
+ cc.cond.Broadcast()
}
return cs
}
@@ -1345,8 +1430,9 @@ func (rl *clientConnReadLoop) run() error {
cc.vlogf("http2: Transport readFrame error on conn %p: (%T) %v", cc, err, err)
}
if se, ok := err.(StreamError); ok {
- if cs := cc.streamByID(se.StreamID, true /*ended; remove it*/); cs != nil {
+ if cs := cc.streamByID(se.StreamID, false); cs != nil {
cs.cc.writeStreamReset(cs.ID, se.Code, err)
+ cs.cc.forgetStreamID(cs.ID)
if se.Cause == nil {
se.Cause = cc.fr.errDetail
}
@@ -1528,8 +1614,7 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
return res, nil
}
- buf := new(bytes.Buffer) // TODO(bradfitz): recycle this garbage
- cs.bufPipe = pipe{b: buf}
+ cs.bufPipe = pipe{b: &dataBuffer{expected: res.ContentLength}}
cs.bytesRemain = res.ContentLength
res.Body = transportResponseBody{cs}
go cs.awaitRequestCancel(cs.req)
@@ -1656,6 +1741,7 @@ func (b transportResponseBody) Close() error {
cc.wmu.Lock()
if !serverSentStreamEnd {
cc.fr.WriteRSTStream(cs.ID, ErrCodeCancel)
+ cs.didReset = true
}
// Return connection-level flow control.
if unread > 0 {
@@ -1668,6 +1754,7 @@ func (b transportResponseBody) Close() error {
}
cs.bufPipe.BreakWithError(errClosedResponseBody)
+ cc.forgetStreamID(cs.ID)
return nil
}
@@ -1703,12 +1790,6 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error {
return nil
}
if f.Length > 0 {
- if len(data) > 0 && cs.bufPipe.b == nil {
- // Data frame after it's already closed?
- cc.logf("http2: Transport received DATA frame for closed stream; closing connection")
- return ConnectionError(ErrCodeProtocol)
- }
-
// Check connection-level flow control.
cc.mu.Lock()
if cs.inflow.available() >= int32(f.Length) {
@@ -1719,16 +1800,27 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error {
}
// Return any padded flow control now, since we won't
// refund it later on body reads.
- if pad := int32(f.Length) - int32(len(data)); pad > 0 {
- cs.inflow.add(pad)
- cc.inflow.add(pad)
+ var refund int
+ if pad := int(f.Length) - len(data); pad > 0 {
+ refund += pad
+ }
+ // Return len(data) now if the stream is already closed,
+ // since data will never be read.
+ didReset := cs.didReset
+ if didReset {
+ refund += len(data)
+ }
+ if refund > 0 {
+ cc.inflow.add(int32(refund))
cc.wmu.Lock()
- cc.fr.WriteWindowUpdate(0, uint32(pad))
- cc.fr.WriteWindowUpdate(cs.ID, uint32(pad))
+ cc.fr.WriteWindowUpdate(0, uint32(refund))
+ if !didReset {
+ cs.inflow.add(int32(refund))
+ cc.fr.WriteWindowUpdate(cs.ID, uint32(refund))
+ }
cc.bw.Flush()
cc.wmu.Unlock()
}
- didReset := cs.didReset
cc.mu.Unlock()
if len(data) > 0 && !didReset {
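The rewritten refund logic in processData above distinguishes two cases: frame padding is always returned to both the connection-level and stream-level flow-control windows, while the data bytes of a frame arriving on an already-reset stream are returned to the connection window only, because the stream will never read them. A hedged restatement of that arithmetic, with frameLen, dataLen, and streamReset standing in for f.Length, len(data), and cs.didReset:

```go
package main

import "fmt"

// flowControlRefund mirrors the refund computation added above.
func flowControlRefund(frameLen, dataLen int, streamReset bool) (connRefund, streamRefund int) {
	if pad := frameLen - dataLen; pad > 0 {
		connRefund += pad   // padding never reaches the body reader
		streamRefund += pad // so both windows get it back immediately
	}
	if streamReset {
		connRefund += dataLen // body bytes will never be consumed
		streamRefund = 0      // no WINDOW_UPDATE for a reset stream
	}
	return connRefund, streamRefund
}

func main() {
	conn, stream := flowControlRefund(1024, 1000, true)
	fmt.Println(conn, stream) // 1024 0: pad plus data on the conn, nothing on the stream
}
```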
diff --git a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/writesched_priority.go b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/writesched_priority.go
index 01132721..848fed6e 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/writesched_priority.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/golang.org/x/net/http2/writesched_priority.go
@@ -53,7 +53,7 @@ type PriorityWriteSchedulerConfig struct {
}
// NewPriorityWriteScheduler constructs a WriteScheduler that schedules
-// frames by following HTTP/2 priorities as described in RFC 7340 Section 5.3.
+// frames by following HTTP/2 priorities as described in RFC 7540 Section 5.3.
// If cfg is nil, default options are used.
func NewPriorityWriteScheduler(cfg *PriorityWriteSchedulerConfig) WriteScheduler {
if cfg == nil {
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/LICENSE
new file mode 100644
index 00000000..d6456956
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go
new file mode 100644
index 00000000..40e79375
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/genproto/googleapis/rpc/status/status.pb.go
@@ -0,0 +1,143 @@
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// source: google/rpc/status.proto
+
+/*
+Package status is a generated protocol buffer package.
+
+It is generated from these files:
+ google/rpc/status.proto
+
+It has these top-level messages:
+ Status
+*/
+package status
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+import google_protobuf "github.com/golang/protobuf/ptypes/any"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+// The `Status` type defines a logical error model that is suitable for different
+// programming environments, including REST APIs and RPC APIs. It is used by
+// [gRPC](https://github.com/grpc). The error model is designed to be:
+//
+// - Simple to use and understand for most users
+// - Flexible enough to meet unexpected needs
+//
+// # Overview
+//
+// The `Status` message contains three pieces of data: error code, error message,
+// and error details. The error code should be an enum value of
+// [google.rpc.Code][google.rpc.Code], but it may accept additional error codes if needed. The
+// error message should be a developer-facing English message that helps
+// developers *understand* and *resolve* the error. If a localized user-facing
+// error message is needed, put the localized message in the error details or
+// localize it in the client. The optional error details may contain arbitrary
+// information about the error. There is a predefined set of error detail types
+// in the package `google.rpc` which can be used for common error conditions.
+//
+// # Language mapping
+//
+// The `Status` message is the logical representation of the error model, but it
+// is not necessarily the actual wire format. When the `Status` message is
+// exposed in different client libraries and different wire protocols, it can be
+// mapped differently. For example, it will likely be mapped to some exceptions
+// in Java, but more likely mapped to some error codes in C.
+//
+// # Other uses
+//
+// The error model and the `Status` message can be used in a variety of
+// environments, either with or without APIs, to provide a
+// consistent developer experience across different environments.
+//
+// Example uses of this error model include:
+//
+// - Partial errors. If a service needs to return partial errors to the client,
+// it may embed the `Status` in the normal response to indicate the partial
+// errors.
+//
+// - Workflow errors. A typical workflow has multiple steps. Each step may
+// have a `Status` message for error reporting purpose.
+//
+// - Batch operations. If a client uses batch request and batch response, the
+// `Status` message should be used directly inside batch response, one for
+// each error sub-response.
+//
+// - Asynchronous operations. If an API call embeds asynchronous operation
+// results in its response, the status of those operations should be
+// represented directly using the `Status` message.
+//
+// - Logging. If some API errors are stored in logs, the message `Status` could
+// be used directly after any stripping needed for security/privacy reasons.
+type Status struct {
+ // The status code, which should be an enum value of [google.rpc.Code][google.rpc.Code].
+ Code int32 `protobuf:"varint,1,opt,name=code" json:"code,omitempty"`
+ // A developer-facing error message, which should be in English. Any
+ // user-facing error message should be localized and sent in the
+ // [google.rpc.Status.details][google.rpc.Status.details] field, or localized by the client.
+ Message string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"`
+ // A list of messages that carry the error details. There will be a
+ // common set of message types for APIs to use.
+ Details []*google_protobuf.Any `protobuf:"bytes,3,rep,name=details" json:"details,omitempty"`
+}
+
+func (m *Status) Reset() { *m = Status{} }
+func (m *Status) String() string { return proto.CompactTextString(m) }
+func (*Status) ProtoMessage() {}
+func (*Status) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+func (m *Status) GetCode() int32 {
+ if m != nil {
+ return m.Code
+ }
+ return 0
+}
+
+func (m *Status) GetMessage() string {
+ if m != nil {
+ return m.Message
+ }
+ return ""
+}
+
+func (m *Status) GetDetails() []*google_protobuf.Any {
+ if m != nil {
+ return m.Details
+ }
+ return nil
+}
+
+func init() {
+ proto.RegisterType((*Status)(nil), "google.rpc.Status")
+}
+
+func init() { proto.RegisterFile("google/rpc/status.proto", fileDescriptor0) }
+
+var fileDescriptor0 = []byte{
+ // 209 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0xcf, 0xcf, 0x4f,
+ 0xcf, 0x49, 0xd5, 0x2f, 0x2a, 0x48, 0xd6, 0x2f, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xd6, 0x2b, 0x28,
+ 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x82, 0x48, 0xe8, 0x15, 0x15, 0x24, 0x4b, 0x49, 0x42, 0x15, 0x81,
+ 0x65, 0x92, 0x4a, 0xd3, 0xf4, 0x13, 0xf3, 0x2a, 0x21, 0xca, 0x94, 0xd2, 0xb8, 0xd8, 0x82, 0xc1,
+ 0xda, 0x84, 0x84, 0xb8, 0x58, 0x92, 0xf3, 0x53, 0x52, 0x25, 0x18, 0x15, 0x18, 0x35, 0x58, 0x83,
+ 0xc0, 0x6c, 0x21, 0x09, 0x2e, 0xf6, 0xdc, 0xd4, 0xe2, 0xe2, 0xc4, 0xf4, 0x54, 0x09, 0x26, 0x05,
+ 0x46, 0x0d, 0xce, 0x20, 0x18, 0x57, 0x48, 0x8f, 0x8b, 0x3d, 0x25, 0xb5, 0x24, 0x31, 0x33, 0xa7,
+ 0x58, 0x82, 0x59, 0x81, 0x59, 0x83, 0xdb, 0x48, 0x44, 0x0f, 0x6a, 0x21, 0xcc, 0x12, 0x3d, 0xc7,
+ 0xbc, 0xca, 0x20, 0x98, 0x22, 0xa7, 0x38, 0x2e, 0xbe, 0xe4, 0xfc, 0x5c, 0x3d, 0x84, 0xa3, 0x9c,
+ 0xb8, 0x21, 0xf6, 0x06, 0x80, 0x94, 0x07, 0x30, 0x46, 0x99, 0x43, 0xa5, 0xd2, 0xf3, 0x73, 0x12,
+ 0xf3, 0xd2, 0xf5, 0xf2, 0x8b, 0xd2, 0xf5, 0xd3, 0x53, 0xf3, 0xc0, 0x86, 0xe9, 0x43, 0xa4, 0x12,
+ 0x0b, 0x32, 0x8b, 0x91, 0xfc, 0x69, 0x0d, 0xa1, 0x16, 0x31, 0x31, 0x07, 0x05, 0x38, 0x27, 0xb1,
+ 0x81, 0x55, 0x1a, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0xa4, 0x53, 0xf0, 0x7c, 0x10, 0x01, 0x00,
+ 0x00,
+}
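The generated Status type above is plain protobuf, so it can be built and round-tripped like any other message. A small usage sketch, separate from the diff, using the same vendored import paths; the code value 5 is assumed here to correspond to google.rpc.Code NOT_FOUND:

```go
package main

import (
	"fmt"

	proto "github.com/golang/protobuf/proto"
	spb "google.golang.org/genproto/googleapis/rpc/status"
)

func main() {
	st := &spb.Status{
		Code:    5, // assumed: google.rpc.Code NOT_FOUND
		Message: "resource not found",
	}
	// Marshal to the wire format and back, as a gRPC peer would.
	b, err := proto.Marshal(st)
	if err != nil {
		panic(err)
	}
	var got spb.Status
	if err := proto.Unmarshal(b, &got); err != nil {
		panic(err)
	}
	fmt.Println(got.GetCode(), got.GetMessage()) // 5 resource not found
}
```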
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/AUTHORS b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/AUTHORS
new file mode 100644
index 00000000..e491a9e7
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/AUTHORS
@@ -0,0 +1 @@
+Google Inc.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/CONTRIBUTING.md b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/CONTRIBUTING.md
index 36cd6f75..a5c6e06e 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/CONTRIBUTING.md
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/CONTRIBUTING.md
@@ -1,46 +1,32 @@
# How to contribute
-We definitely welcome patches and contribution to grpc! Here are some guidelines
-and information about how to do so.
+We definitely welcome your patches and contributions to gRPC!
-## Sending patches
-
-### Getting started
-
-1. Check out the code:
-
- $ go get google.golang.org/grpc
- $ cd $GOPATH/src/google.golang.org/grpc
-
-1. Create a fork of the grpc-go repository.
-1. Add your fork as a remote:
-
- $ git remote add fork git@github.com:$YOURGITHUBUSERNAME/grpc-go.git
-
-1. Make changes, commit them.
-1. Run the test suite:
-
- $ make test
-
-1. Push your changes to your fork:
-
- $ git push fork ...
-
-1. Open a pull request.
+If you are new to GitHub, please start by reading the [Pull Request howto](https://help.github.com/articles/about-pull-requests/)
## Legal requirements
In order to protect both you and ourselves, you will need to sign the
[Contributor License Agreement](https://cla.developers.google.com/clas).
-## Filing Issues
-When filing an issue, make sure to answer these five questions:
-
-1. What version of Go are you using (`go version`)?
-2. What operating system and processor architecture are you using?
-3. What did you do?
-4. What did you expect to see?
-5. What did you see instead?
-
-### Contributing code
-Unless otherwise noted, the Go source files are distributed under the BSD-style license found in the LICENSE file.
+## Guidelines for Pull Requests
+How to get your contributions merged smoothly and quickly.
+
+- Create **small PRs** that are narrowly focused on **addressing a single concern**. We oftentimes receive PRs that try to fix several things at a time; when only one of those fixes is considered acceptable, nothing gets merged, and both the author's and the reviewer's time is wasted. Create more PRs to address different concerns and everyone will be happy.
+
+- For speculative changes, consider opening an issue and discussing it first. If you are suggesting a behavioral or API change, consider starting with a [gRFC proposal](https://github.com/grpc/proposal).
+
+- Provide a good **PR description** as a record of **what** change is being made and **why** it was made. Link to a github issue if it exists.
+
+- Don't fix code style and formatting unless you are already changing that line to address an issue. PRs with irrelevant changes won't be merged. If you do want to fix formatting or style, do that in a separate PR.
+
+- Unless your PR is trivial, you should expect there will be reviewer comments that you'll need to address before merging. We expect you to be reasonably responsive to those comments, otherwise the PR will be closed after 2-3 weeks of inactivity.
+
+- Maintain **clean commit history** and use **meaningful commit messages**. PRs with messy commit history are difficult to review and won't be merged. Use `rebase -i upstream/master` to curate your commit history and/or to bring in latest changes from master (but avoid rebasing in the middle of a code review).
+
+- Keep your PR up to date with upstream/master (if there are merge conflicts, we can't really merge your change).
+
+- **All tests need to be passing** before your change can be merged. We recommend you **run tests locally** before creating your PR to catch breakages early on.
+
+- Exceptions to the rules can be made if there's a compelling reason for doing so.
+
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/LICENSE b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/LICENSE
index f4988b45..d6456956 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/LICENSE
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/LICENSE
@@ -1,28 +1,202 @@
-Copyright 2014, Google Inc.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
- * Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following disclaimer
-in the documentation and/or other materials provided with the
-distribution.
- * Neither the name of Google Inc. nor the names of its
-contributors may be used to endorse or promote products derived from
-this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/PATENTS b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/PATENTS
deleted file mode 100644
index 69b47959..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/PATENTS
+++ /dev/null
@@ -1,22 +0,0 @@
-Additional IP Rights Grant (Patents)
-
-"This implementation" means the copyrightable works distributed by
-Google as part of the gRPC project.
-
-Google hereby grants to You a perpetual, worldwide, non-exclusive,
-no-charge, royalty-free, irrevocable (except as stated in this section)
-patent license to make, have made, use, offer to sell, sell, import,
-transfer and otherwise run, modify and propagate the contents of this
-implementation of gRPC, where such license applies only to those patent
-claims, both currently owned or controlled by Google and acquired in
-the future, licensable by Google that are necessarily infringed by this
-implementation of gRPC. This grant does not include claims that would be
-infringed only as a consequence of further modification of this
-implementation. If you or your agent or exclusive licensee institute or
-order or agree to the institution of patent litigation against any
-entity (including a cross-claim or counterclaim in a lawsuit) alleging
-that this implementation of gRPC or any code incorporated within this
-implementation of gRPC constitutes direct or contributory patent
-infringement, or inducement of patent infringement, then any patent
-rights granted to you under this License for this implementation of gRPC
-shall terminate as of the date such litigation is filed.
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/README.md b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/README.md
index 39120c20..72c7325c 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/README.md
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/README.md
@@ -1,8 +1,8 @@
-#gRPC-Go
+# gRPC-Go
[![Build Status](https://travis-ci.org/grpc/grpc-go.svg)](https://travis-ci.org/grpc/grpc-go) [![GoDoc](https://godoc.org/google.golang.org/grpc?status.svg)](https://godoc.org/google.golang.org/grpc)
-The Go implementation of [gRPC](http://www.grpc.io/): A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the [gRPC Quick Start](http://www.grpc.io/docs/) guide.
+The Go implementation of [gRPC](https://grpc.io/): A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information see the [gRPC Quick Start: Go](https://grpc.io/docs/quickstart/go.html) guide.
Installation
------------
@@ -16,23 +16,7 @@ $ go get google.golang.org/grpc
Prerequisites
-------------
-This requires Go 1.5 or later.
-
-A note on the version used: significant performance improvements in benchmarks
-of grpc-go have been seen by upgrading the go version from 1.5 to the latest
-1.7.1.
-
-From https://golang.org/doc/install, one way to install the latest version of go is:
-```
-$ GO_VERSION=1.7.1
-$ OS=linux
-$ ARCH=amd64
-$ curl -O https://storage.googleapis.com/golang/go${GO_VERSION}.${OS}-${ARCH}.tar.gz
-$ sudo tar -C /usr/local -xzf go$GO_VERSION.$OS-$ARCH.tar.gz
-$ # Put go on the PATH, keep the usual installation dir
-$ sudo ln -s /usr/local/go/bin/go /usr/bin/go
-$ rm go$GO_VERSION.$OS-$ARCH.tar.gz
-```
+This requires Go 1.6 or later.
Constraints
-----------
@@ -42,9 +26,13 @@ Documentation
-------------
See [API documentation](https://godoc.org/google.golang.org/grpc) for package and API descriptions and find examples in the [examples directory](examples/).
+Performance
+-----------
+See the current benchmarks for some of the languages supported in [this dashboard](https://performance-dot-grpc-testing.appspot.com/explore?dashboard=5652536396611584&widget=490377658&container=1286539696).
+
Status
------
-GA
+General Availability; see [Google Cloud Platform Launch Stages](https://cloud.google.com/terms/launch-stages).
FAQ
---
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/backoff.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/backoff.go
index c99024ee..090fbe87 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/backoff.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/backoff.go
@@ -1,3 +1,21 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
package grpc
import (
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/balancer.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/balancer.go
index 9d943fba..cde472c8 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/balancer.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/balancer.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -35,6 +20,7 @@ package grpc
import (
"fmt"
+ "net"
"sync"
"golang.org/x/net/context"
@@ -60,6 +46,10 @@ type BalancerConfig struct {
// use to dial to a remote load balancer server. The Balancer implementations
// can ignore this if it does not need to talk to another party securely.
DialCreds credentials.TransportCredentials
+ // Dialer is the custom dialer the Balancer implementation can use to dial
+	// to a remote load balancer server. A Balancer implementation
+	// can ignore this if it does not need to talk to a remote balancer.
+ Dialer func(context.Context, string) (net.Conn, error)
}
// BalancerGetOptions configures a Get call.
@@ -167,7 +157,7 @@ type roundRobin struct {
func (rr *roundRobin) watchAddrUpdates() error {
updates, err := rr.w.Next()
if err != nil {
- grpclog.Printf("grpc: the naming watcher stops working due to %v.\n", err)
+ grpclog.Warningf("grpc: the naming watcher stops working due to %v.", err)
return err
}
rr.mu.Lock()
@@ -183,7 +173,7 @@ func (rr *roundRobin) watchAddrUpdates() error {
for _, v := range rr.addrs {
if addr == v.addr {
exist = true
- grpclog.Println("grpc: The name resolver wanted to add an existing address: ", addr)
+ grpclog.Infoln("grpc: The name resolver wanted to add an existing address: ", addr)
break
}
}
@@ -200,7 +190,7 @@ func (rr *roundRobin) watchAddrUpdates() error {
}
}
default:
- grpclog.Println("Unknown update.Op ", update.Op)
+ grpclog.Errorln("Unknown update.Op ", update.Op)
}
}
// Make a copy of rr.addrs and write it onto rr.addrCh so that gRPC internals gets notified.
@@ -211,6 +201,10 @@ func (rr *roundRobin) watchAddrUpdates() error {
if rr.done {
return ErrClientConnClosing
}
+ select {
+ case <-rr.addrCh:
+ default:
+ }
rr.addrCh <- open
return nil
}
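The select added to watchAddrUpdates above, together with the capacity-1 channel created in Start below, implements a drain-then-send idiom: any stale, unconsumed address list is discarded before the newest one is published, so the producer never blocks on a slow consumer and the consumer only ever sees the latest update. An illustrative sketch with invented names:

```go
package main

import "fmt"

// publishLatest sends latest on ch, first dropping any update the
// consumer never read. With capacity 1 and a single sender, the final
// send cannot block because the slot was just drained.
func publishLatest(ch chan []string, latest []string) {
	select {
	case <-ch: // discard a stale, unconsumed update
	default:
	}
	ch <- latest
}

func main() {
	ch := make(chan []string, 1)
	publishLatest(ch, []string{"10.0.0.1:50051"})
	publishLatest(ch, []string{"10.0.0.2:50051"}) // overwrites the stale list
	fmt.Println(<-ch)                             // [10.0.0.2:50051]
}
```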
@@ -233,7 +227,7 @@ func (rr *roundRobin) Start(target string, config BalancerConfig) error {
return err
}
rr.w = w
- rr.addrCh = make(chan []Address)
+ rr.addrCh = make(chan []Address, 1)
go func() {
for {
if err := rr.watchAddrUpdates(); err != nil {
@@ -385,6 +379,9 @@ func (rr *roundRobin) Notify() <-chan []Address {
func (rr *roundRobin) Close() error {
rr.mu.Lock()
defer rr.mu.Unlock()
+ if rr.done {
+ return errBalancerClosed
+ }
rr.done = true
if rr.w != nil {
rr.w.Close()
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/call.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/call.go
index ba177219..797190f1 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/call.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/call.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -36,13 +21,14 @@ package grpc
import (
"bytes"
"io"
- "math"
"time"
"golang.org/x/net/context"
"golang.org/x/net/trace"
"google.golang.org/grpc/codes"
+ "google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
@@ -72,14 +58,17 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
}
}
for {
- if err = recv(p, dopts.codec, stream, dopts.dc, reply, math.MaxInt32, inPayload); err != nil {
+ if c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ if err = recv(p, dopts.codec, stream, dopts.dc, reply, *c.maxReceiveMessageSize, inPayload); err != nil {
if err == io.EOF {
break
}
return
}
}
- if inPayload != nil && err == io.EOF && stream.StatusCode() == codes.OK {
+ if inPayload != nil && err == io.EOF && stream.Status().Code() == codes.OK {
// TODO in the current implementation, inTrailer may be handled before inPayload in some cases.
// Fix the order if necessary.
dopts.copts.StatsHandler.HandleRPC(ctx, inPayload)
@@ -89,11 +78,7 @@ func recvResponse(ctx context.Context, dopts dialOptions, t transport.ClientTran
}
// sendRequest writes out various information of an RPC such as Context and Message.
-func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, callHdr *transport.CallHdr, t transport.ClientTransport, args interface{}, opts *transport.Options) (_ *transport.Stream, err error) {
- stream, err := t.NewStream(ctx, callHdr)
- if err != nil {
- return nil, err
- }
+func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor, c *callInfo, callHdr *transport.CallHdr, stream *transport.Stream, t transport.ClientTransport, args interface{}, opts *transport.Options) (err error) {
defer func() {
if err != nil {
// If err is connection error, t will be closed, no need to close stream here.
@@ -116,7 +101,13 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
}
outBuf, err := encode(dopts.codec, args, compressor, cbuf, outPayload)
if err != nil {
- return nil, Errorf(codes.Internal, "grpc: %v", err)
+ return err
+ }
+ if c.maxSendMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)")
+ }
+ if len(outBuf) > *c.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(outBuf), *c.maxSendMessageSize)
}
err = t.Write(stream, outBuf, opts)
if err == nil && outPayload != nil {
@@ -127,10 +118,10 @@ func sendRequest(ctx context.Context, dopts dialOptions, compressor Compressor,
// does not exist.) so that t.Write could get io.EOF from wait(...). Leave the following
// recvResponse to get the final status.
if err != nil && err != io.EOF {
- return nil, err
+ return err
}
// Sent successfully.
- return stream, nil
+ return nil
}
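sendRequest now encodes the message first and rejects it when the wire size exceeds the effective maxSendMessageSize, returning a ResourceExhausted-style error instead of writing a doomed frame. A tiny sketch of that guard; enforceSendLimit is an invented helper, not the package's API:

```go
package main

import "fmt"

// enforceSendLimit compares an already-encoded payload against the
// effective send limit, mirroring the check added to sendRequest above.
func enforceSendLimit(encoded []byte, maxSendMessageSize int) error {
	if len(encoded) > maxSendMessageSize {
		return fmt.Errorf("trying to send message larger than max (%d vs. %d)",
			len(encoded), maxSendMessageSize)
	}
	return nil
}

func main() {
	err := enforceSendLimit(make([]byte, 4<<20), 1<<20) // 4 MiB payload, 1 MiB limit
	fmt.Println(err)
}
```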
// Invoke sends the RPC request on the wire and returns after response is received.
@@ -145,14 +136,18 @@ func Invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
func invoke(ctx context.Context, method string, args, reply interface{}, cc *ClientConn, opts ...CallOption) (e error) {
c := defaultCallInfo
- if mc, ok := cc.getMethodConfig(method); ok {
- c.failFast = !mc.WaitForReady
- if mc.Timeout > 0 {
- var cancel context.CancelFunc
- ctx, cancel = context.WithTimeout(ctx, mc.Timeout)
- defer cancel()
- }
+ mc := cc.GetMethodConfig(method)
+ if mc.WaitForReady != nil {
+ c.failFast = !*mc.WaitForReady
+ }
+
+ if mc.Timeout != nil && *mc.Timeout >= 0 {
+ var cancel context.CancelFunc
+ ctx, cancel = context.WithTimeout(ctx, *mc.Timeout)
+ defer cancel()
}
+
+ opts = append(cc.dopts.callOptions, opts...)
for _, o := range opts {
if err := o.before(&c); err != nil {
return toRPCErr(err)
@@ -163,6 +158,10 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
o.after(&c)
}
}()
+
+ c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize)
+ c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize)
+
if EnableTracing {
c.traceInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method)
defer c.traceInfo.tr.Finish()
@@ -179,26 +178,25 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
}
}()
}
+ ctx = newContextWithRPCInfo(ctx)
sh := cc.dopts.copts.StatsHandler
if sh != nil {
- ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
+ ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast})
begin := &stats.Begin{
Client: true,
BeginTime: time.Now(),
FailFast: c.failFast,
}
sh.HandleRPC(ctx, begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
Client: true,
EndTime: time.Now(),
Error: e,
}
sh.HandleRPC(ctx, end)
- }
- }()
+ }()
+ }
topts := &transport.Options{
Last: true,
Delay: false,
@@ -220,6 +218,9 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
if cc.dopts.cp != nil {
callHdr.SendCompress = cc.dopts.cp.Type()
}
+ if c.creds != nil {
+ callHdr.Creds = c.creds
+ }
gopts := BalancerGetOptions{
BlockingWait: !c.failFast,
@@ -227,7 +228,7 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
t, put, err = cc.getTransport(ctx, gopts)
if err != nil {
// TODO(zhaoq): Probably revisit the error handling.
- if _, ok := err.(*rpcError); ok {
+ if _, ok := status.FromError(err); ok {
return err
}
if err == errConnClosing || err == errConnUnavailable {
@@ -242,19 +243,38 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
if c.traceInfo.tr != nil {
c.traceInfo.tr.LazyLog(&payload{sent: true, msg: args}, true)
}
- stream, err = sendRequest(ctx, cc.dopts, cc.dopts.cp, callHdr, t, args, topts)
+ stream, err = t.NewStream(ctx, callHdr)
if err != nil {
if put != nil {
+ if _, ok := err.(transport.ConnectionError); ok {
+ // If error is connection error, transport was sending data on wire,
+ // and we are not sure if anything has been sent on wire.
+ // If error is not connection error, we are sure nothing has been sent.
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false})
+ }
+ put()
+ }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
+ continue
+ }
+ return toRPCErr(err)
+ }
+ if peer, ok := peer.FromContext(stream.Context()); ok {
+ c.peer = peer
+ }
+ err = sendRequest(ctx, cc.dopts, cc.dopts.cp, &c, callHdr, stream, t, args, topts)
+ if err != nil {
+ if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
// Retry a non-failfast RPC when
// i) there is a connection error; or
// ii) the server started to drain before this RPC was initiated.
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return toRPCErr(err)
@@ -262,13 +282,13 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
err = recvResponse(ctx, cc.dopts, t, &c, stream, reply)
if err != nil {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return toRPCErr(err)
@@ -278,9 +298,12 @@ func invoke(ctx context.Context, method string, args, reply interface{}, cc *Cli
}
t.CloseStream(stream, nil)
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{
+ bytesSent: stream.BytesSent(),
+ bytesReceived: stream.BytesReceived(),
+ })
put()
- put = nil
}
- return Errorf(stream.StatusCode(), "%s", stream.StatusDesc())
+ return stream.Status().Err()
}
}
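The restructured invoke above applies one retry predicate at three points (NewStream, sendRequest, recvResponse): an RPC is retried only when it is not fail-fast and the failure is a transport.ConnectionError or transport.ErrStreamDrain, i.e. the server began draining before the RPC started. A hedged restatement with stand-in names for those transport symbols:

```go
package main

import (
	"errors"
	"fmt"
)

// errStreamDrain stands in for transport.ErrStreamDrain.
var errStreamDrain = errors.New("stream drain")

// shouldRetry reports whether invoke's loop would continue after err;
// isConnError stands in for the transport.ConnectionError type check.
func shouldRetry(err error, failFast bool, isConnError func(error) bool) bool {
	if failFast {
		return false // fail-fast RPCs surface the error immediately
	}
	return isConnError(err) || err == errStreamDrain
}

func main() {
	notConn := func(error) bool { return false }
	fmt.Println(shouldRetry(errStreamDrain, false, notConn)) // true: retry
	fmt.Println(shouldRetry(errStreamDrain, true, notConn))  // false: fail fast
}
```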
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/clientconn.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/clientconn.go
index 146166a7..e3f6cb19 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/clientconn.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/clientconn.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -35,7 +20,6 @@ package grpc
import (
"errors"
- "fmt"
"net"
"strings"
"sync"
@@ -43,8 +27,10 @@ import (
"golang.org/x/net/context"
"golang.org/x/net/trace"
+ "google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/stats"
"google.golang.org/grpc/transport"
)
@@ -55,8 +41,7 @@ var (
ErrClientConnClosing = errors.New("grpc: the client connection is closing")
// ErrClientConnTimeout indicates that the ClientConn cannot establish the
// underlying connections within the specified timeout.
- // DEPRECATED: Please use context.DeadlineExceeded instead. This error will be
- // removed in Q1 2017.
+ // DEPRECATED: Please use context.DeadlineExceeded instead.
ErrClientConnTimeout = errors.New("grpc: timed out when dialing")
// errNoTransportSecurity indicates that there is no transport security
@@ -78,7 +63,8 @@ var (
errConnClosing = errors.New("grpc: the connection is closing")
// errConnUnavailable indicates that the connection is unavailable.
errConnUnavailable = errors.New("grpc: the connection is unavailable")
- errNoAddr = errors.New("grpc: there is no address available to dial")
+ // errBalancerClosed indicates that the balancer is closed.
+ errBalancerClosed = errors.New("grpc: balancer is closed")
// minimum time to give a connection to complete
minConnectTimeout = 20 * time.Second
)
@@ -86,23 +72,57 @@ var (
// dialOptions configure a Dial call. dialOptions are set by the DialOption
// values passed to Dial.
type dialOptions struct {
- unaryInt UnaryClientInterceptor
- streamInt StreamClientInterceptor
- codec Codec
- cp Compressor
- dc Decompressor
- bs backoffStrategy
- balancer Balancer
- block bool
- insecure bool
- timeout time.Duration
- scChan <-chan ServiceConfig
- copts transport.ConnectOptions
+ unaryInt UnaryClientInterceptor
+ streamInt StreamClientInterceptor
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ bs backoffStrategy
+ balancer Balancer
+ block bool
+ insecure bool
+ timeout time.Duration
+ scChan <-chan ServiceConfig
+ copts transport.ConnectOptions
+ callOptions []CallOption
}
+const (
+ defaultClientMaxReceiveMessageSize = 1024 * 1024 * 4
+ defaultClientMaxSendMessageSize = 1024 * 1024 * 4
+)
+
// DialOption configures how we set up the connection.
type DialOption func(*dialOptions)
+// WithInitialWindowSize returns a DialOption which sets the value for initial window size on a stream.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func WithInitialWindowSize(s int32) DialOption {
+ return func(o *dialOptions) {
+ o.copts.InitialWindowSize = s
+ }
+}
+
+// WithInitialConnWindowSize returns a DialOption which sets the value for initial window size on a connection.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func WithInitialConnWindowSize(s int32) DialOption {
+ return func(o *dialOptions) {
+ o.copts.InitialConnWindowSize = s
+ }
+}
+
+// WithMaxMsgSize returns a DialOption which sets the maximum message size the client can receive. Deprecated: use WithDefaultCallOptions(MaxCallRecvMsgSize(s)) instead.
+func WithMaxMsgSize(s int) DialOption {
+ return WithDefaultCallOptions(MaxCallRecvMsgSize(s))
+}
+
+// WithDefaultCallOptions returns a DialOption which sets the default CallOptions for calls over the connection.
+func WithDefaultCallOptions(cos ...CallOption) DialOption {
+ return func(o *dialOptions) {
+ o.callOptions = append(o.callOptions, cos...)
+ }
+}
+
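
A minimal usage sketch of the options added above (not part of the patch; the address and sizes are placeholders):

package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:50051", // placeholder address
		grpc.WithInsecure(),
		// Preferred replacement for the deprecated WithMaxMsgSize:
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(1<<24)),
		grpc.WithInitialWindowSize(1<<20),     // per-stream flow-control window
		grpc.WithInitialConnWindowSize(1<<20), // per-connection window
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
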
// WithCodec returns a DialOption which sets a codec for message marshaling and unmarshaling.
func WithCodec(c Codec) DialOption {
return func(o *dialOptions) {
@@ -194,7 +214,7 @@ func WithTransportCredentials(creds credentials.TransportCredentials) DialOption
}
// WithPerRPCCredentials returns a DialOption which sets
-// credentials which will place auth state on each outbound RPC.
+// credentials and places auth state on each outbound RPC.
func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption {
return func(o *dialOptions) {
o.copts.PerRPCCredentials = append(o.copts.PerRPCCredentials, creds)
@@ -203,6 +223,7 @@ func WithPerRPCCredentials(creds credentials.PerRPCCredentials) DialOption {
// WithTimeout returns a DialOption that configures a timeout for dialing a ClientConn
// initially. This is valid if and only if WithBlock() is present.
+// Deprecated: use DialContext and context.WithTimeout instead.
func WithTimeout(d time.Duration) DialOption {
return func(o *dialOptions) {
o.timeout = d
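
Since WithTimeout is now deprecated, a hedged sketch of the recommended DialContext pattern (placeholder address):

package main

import (
	"log"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
)

func main() {
	// Bound the dial with a context instead of WithTimeout.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithInsecure(),
		grpc.WithBlock(), // the deadline only matters for a blocking dial
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
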
@@ -231,7 +252,7 @@ func WithStatsHandler(h stats.Handler) DialOption {
}
}
-// FailOnNonTempDialError returns a DialOption that specified if gRPC fails on non-temporary dial errors.
+// FailOnNonTempDialError returns a DialOption that specifies if gRPC fails on non-temporary dial errors.
// If f is true, and dialer returns a non-temporary error, gRPC will fail the connection to the network
// address and won't try to reconnect.
// The default value of FailOnNonTempDialError is false.
@@ -249,6 +270,13 @@ func WithUserAgent(s string) DialOption {
}
}
+// WithKeepaliveParams returns a DialOption that specifies keepalive parameters for the client transport.
+func WithKeepaliveParams(kp keepalive.ClientParameters) DialOption {
+ return func(o *dialOptions) {
+ o.copts.KeepaliveParams = kp
+ }
+}
+
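
A sketch of configuring the new keepalive option; the durations below are illustrative values, not defaults:

package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second, // ping after 30s of inactivity
			Timeout:             10 * time.Second, // wait 10s for the ping ack
			PermitWithoutStream: true,             // ping even with no active RPCs
		}),
	)
}
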
// WithUnaryInterceptor returns a DialOption that specifies the interceptor for unary RPCs.
func WithUnaryInterceptor(f UnaryClientInterceptor) DialOption {
return func(o *dialOptions) {
@@ -263,25 +291,52 @@ func WithStreamInterceptor(f StreamClientInterceptor) DialOption {
}
}
+// WithAuthority returns a DialOption that specifies the value to be used as
+// the :authority pseudo-header. This value only works with WithInsecure and
+// has no effect if TransportCredentials are present.
+func WithAuthority(a string) DialOption {
+ return func(o *dialOptions) {
+ o.copts.Authority = a
+ }
+}
+
// Dial creates a client connection to the given target.
func Dial(target string, opts ...DialOption) (*ClientConn, error) {
return DialContext(context.Background(), target, opts...)
}
// DialContext creates a client connection to the given target. ctx can be used to
-// cancel or expire the pending connecting. Once this function returns, the
+// cancel or expire the pending connection. Once this function returns, the
// cancellation and expiration of ctx will be noop. Users should call ClientConn.Close
// to terminate all the pending operations after this function returns.
-// This is the EXPERIMENTAL API.
func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *ClientConn, err error) {
cc := &ClientConn{
target: target,
+ csMgr: &connectivityStateManager{},
conns: make(map[Address]*addrConn),
}
+ cc.csEvltr = &connectivityStateEvaluator{csMgr: cc.csMgr}
cc.ctx, cc.cancel = context.WithCancel(context.Background())
+
for _, opt := range opts {
opt(&cc.dopts)
}
+ cc.mkp = cc.dopts.copts.KeepaliveParams
+
+ if cc.dopts.copts.Dialer == nil {
+ cc.dopts.copts.Dialer = newProxyDialer(
+ func(ctx context.Context, addr string) (net.Conn, error) {
+ return dialContext(ctx, "tcp", addr)
+ },
+ )
+ }
+
+ if cc.dopts.copts.UserAgent != "" {
+ cc.dopts.copts.UserAgent += " " + grpcUA
+ } else {
+ cc.dopts.copts.UserAgent = grpcUA
+ }
+
if cc.dopts.timeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, cc.dopts.timeout)
@@ -300,15 +355,16 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
}
}()
+ scSet := false
if cc.dopts.scChan != nil {
- // Wait for the initial service config.
+ // Try to get an initial service config.
select {
case sc, ok := <-cc.dopts.scChan:
if ok {
cc.sc = sc
+ scSet = true
}
- case <-ctx.Done():
- return nil, ctx.Err()
+ default:
}
}
// Set defaults.
@@ -321,54 +377,47 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
creds := cc.dopts.copts.TransportCredentials
if creds != nil && creds.Info().ServerName != "" {
cc.authority = creds.Info().ServerName
+ } else if cc.dopts.insecure && cc.dopts.copts.Authority != "" {
+ cc.authority = cc.dopts.copts.Authority
} else {
- colonPos := strings.LastIndex(target, ":")
- if colonPos == -1 {
- colonPos = len(target)
- }
- cc.authority = target[:colonPos]
+ cc.authority = target
}
- var ok bool
waitC := make(chan error, 1)
go func() {
- var addrs []Address
+ defer close(waitC)
if cc.dopts.balancer == nil && cc.sc.LB != nil {
cc.dopts.balancer = cc.sc.LB
}
- if cc.dopts.balancer == nil {
- // Connect to target directly if balancer is nil.
- addrs = append(addrs, Address{Addr: target})
- } else {
+ if cc.dopts.balancer != nil {
var credsClone credentials.TransportCredentials
if creds != nil {
credsClone = creds.Clone()
}
config := BalancerConfig{
DialCreds: credsClone,
+ Dialer: cc.dopts.copts.Dialer,
}
if err := cc.dopts.balancer.Start(target, config); err != nil {
waitC <- err
return
}
ch := cc.dopts.balancer.Notify()
- if ch == nil {
- // There is no name resolver installed.
- addrs = append(addrs, Address{Addr: target})
- } else {
- addrs, ok = <-ch
- if !ok || len(addrs) == 0 {
- waitC <- errNoAddr
- return
+ if ch != nil {
+ if cc.dopts.block {
+ doneChan := make(chan struct{})
+ go cc.lbWatcher(doneChan)
+ <-doneChan
+ } else {
+ go cc.lbWatcher(nil)
}
- }
- }
- for _, a := range addrs {
- if err := cc.resetAddrConn(a, false, nil); err != nil {
- waitC <- err
return
}
}
- close(waitC)
+ // No balancer, or no resolver within the balancer. Connect directly.
+ if err := cc.resetAddrConn(Address{Addr: target}, cc.dopts.block, nil); err != nil {
+ waitC <- err
+ return
+ }
}()
select {
case <-ctx.Done():
@@ -378,50 +427,113 @@ func DialContext(ctx context.Context, target string, opts ...DialOption) (conn *
return nil, err
}
}
-
- // If balancer is nil or balancer.Notify() is nil, ok will be false here.
- // The lbWatcher goroutine will not be created.
- if ok {
- go cc.lbWatcher()
+ if cc.dopts.scChan != nil && !scSet {
+ // Blocking wait for the initial service config.
+ select {
+ case sc, ok := <-cc.dopts.scChan:
+ if ok {
+ cc.sc = sc
+ }
+ case <-ctx.Done():
+ return nil, ctx.Err()
+ }
}
-
if cc.dopts.scChan != nil {
go cc.scWatcher()
}
+
return cc, nil
}
-// ConnectivityState indicates the state of a client connection.
-type ConnectivityState int
+// connectivityStateEvaluator gets updated by addrConns when their
+// states transition, based on which it evaluates the state of
+// ClientConn.
+// Note: This code will eventually sit in the balancer in the new design.
+type connectivityStateEvaluator struct {
+ csMgr *connectivityStateManager
+ mu sync.Mutex
+ numReady uint64 // Number of addrConns in ready state.
+ numConnecting uint64 // Number of addrConns in connecting state.
+ numTransientFailure uint64 // Number of addrConns in transientFailure.
+}
-const (
- // Idle indicates the ClientConn is idle.
- Idle ConnectivityState = iota
- // Connecting indicates the ClienConn is connecting.
- Connecting
- // Ready indicates the ClientConn is ready for work.
- Ready
- // TransientFailure indicates the ClientConn has seen a failure but expects to recover.
- TransientFailure
- // Shutdown indicates the ClientConn has started shutting down.
- Shutdown
-)
+// recordTransition records a state change in an addrConn and, based on that,
+// evaluates what state the ClientConn is in.
+// It can only transition between connectivity.Ready, connectivity.Connecting and
+// connectivity.TransientFailure. The other states, Idle and connectivity.Shutdown,
+// are entered by ClientConn itself: at the beginning of the connection, before
+// any addrConn is created, ClientConn is in the Idle state; when ClientConn
+// closes, it ends in the connectivity.Shutdown state.
+// TODO Note that in later releases, a ClientConn with no activity will be put into an Idle state.
+func (cse *connectivityStateEvaluator) recordTransition(oldState, newState connectivity.State) {
+ cse.mu.Lock()
+ defer cse.mu.Unlock()
-func (s ConnectivityState) String() string {
- switch s {
- case Idle:
- return "IDLE"
- case Connecting:
- return "CONNECTING"
- case Ready:
- return "READY"
- case TransientFailure:
- return "TRANSIENT_FAILURE"
- case Shutdown:
- return "SHUTDOWN"
- default:
- panic(fmt.Sprintf("unknown connectivity state: %d", s))
+ // Update counters.
+ for idx, state := range []connectivity.State{oldState, newState} {
+ updateVal := 2*uint64(idx) - 1 // -1 for oldState and +1 for new.
+ switch state {
+ case connectivity.Ready:
+ cse.numReady += updateVal
+ case connectivity.Connecting:
+ cse.numConnecting += updateVal
+ case connectivity.TransientFailure:
+ cse.numTransientFailure += updateVal
+ }
}
+
+ // Evaluate.
+ if cse.numReady > 0 {
+ cse.csMgr.updateState(connectivity.Ready)
+ return
+ }
+ if cse.numConnecting > 0 {
+ cse.csMgr.updateState(connectivity.Connecting)
+ return
+ }
+ cse.csMgr.updateState(connectivity.TransientFailure)
+}
+
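
The expression 2*uint64(idx) - 1 in recordTransition relies on unsigned wraparound: for idx == 0 it produces 2^64-1, which acts as -1 under modular addition. A standalone sketch of the trick:

package main

import "fmt"

func main() {
	var numReady uint64 = 3
	// idx 0 is the old state, idx 1 the new state, as in recordTransition.
	for idx := 0; idx < 2; idx++ {
		updateVal := 2*uint64(idx) - 1 // idx 0 -> 2^64-1 (acts as -1), idx 1 -> +1
		numReady += updateVal
	}
	fmt.Println(numReady) // 3: one modular decrement cancelled by one increment
}
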
+// connectivityStateManager keeps the connectivity.State of ClientConn.
+// This struct will eventually be exported so the balancers can access it.
+type connectivityStateManager struct {
+ mu sync.Mutex
+ state connectivity.State
+ notifyChan chan struct{}
+}
+
+// updateState updates the connectivity.State of ClientConn.
+// If the state changes, it notifies any goroutines waiting for a state
+// change to happen.
+func (csm *connectivityStateManager) updateState(state connectivity.State) {
+ csm.mu.Lock()
+ defer csm.mu.Unlock()
+ if csm.state == connectivity.Shutdown {
+ return
+ }
+ if csm.state == state {
+ return
+ }
+ csm.state = state
+ if csm.notifyChan != nil {
+ // There are other goroutines waiting on this channel.
+ close(csm.notifyChan)
+ csm.notifyChan = nil
+ }
+}
+
+func (csm *connectivityStateManager) getState() connectivity.State {
+ csm.mu.Lock()
+ defer csm.mu.Unlock()
+ return csm.state
+}
+
+func (csm *connectivityStateManager) getNotifyChan() <-chan struct{} {
+ csm.mu.Lock()
+ defer csm.mu.Unlock()
+ if csm.notifyChan == nil {
+ csm.notifyChan = make(chan struct{})
+ }
+ return csm.notifyChan
}
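
getNotifyChan and updateState together implement a close-to-broadcast pattern: all waiters block on one shared channel, and a single close wakes them all. A minimal standalone sketch of the same pattern (the notifier type is illustrative, not part of gRPC):

package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	mu sync.Mutex
	ch chan struct{}
}

// wait returns the channel for the current round, creating it lazily.
func (n *notifier) wait() <-chan struct{} {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.ch == nil {
		n.ch = make(chan struct{})
	}
	return n.ch
}

// notify closes the current channel, waking every waiter at once.
func (n *notifier) notify() {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.ch != nil {
		close(n.ch)
		n.ch = nil // the next wait() starts a fresh round
	}
}

func main() {
	n := &notifier{}
	ch := n.wait() // all three goroutines share this round's channel
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			<-ch
			fmt.Println("waiter", i, "woken")
		}(i)
	}
	n.notify()
	wg.Wait()
}
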
// ClientConn represents a client connection to an RPC server.
@@ -432,13 +544,51 @@ type ClientConn struct {
target string
authority string
dopts dialOptions
+ csMgr *connectivityStateManager
+ csEvltr *connectivityStateEvaluator // This will eventually be part of balancer.
mu sync.RWMutex
sc ServiceConfig
conns map[Address]*addrConn
+ // Keepalive parameter can be updated if a GoAway is received.
+ mkp keepalive.ClientParameters
}
-func (cc *ClientConn) lbWatcher() {
+// WaitForStateChange waits until the connectivity.State of ClientConn changes from sourceState or
+// ctx expires. A true value is returned in the former case and false in the latter.
+// This is an EXPERIMENTAL API.
+func (cc *ClientConn) WaitForStateChange(ctx context.Context, sourceState connectivity.State) bool {
+ ch := cc.csMgr.getNotifyChan()
+ if cc.csMgr.getState() != sourceState {
+ return true
+ }
+ select {
+ case <-ctx.Done():
+ return false
+ case <-ch:
+ return true
+ }
+}
+
+// GetState returns the connectivity.State of ClientConn.
+// This is an EXPERIMENTAL API.
+func (cc *ClientConn) GetState() connectivity.State {
+ return cc.csMgr.getState()
+}
+
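
A hedged sketch of how the two experimental APIs above compose into a wait-until-ready helper (the helper name and package are mine, not part of gRPC):

package grpcutil

import (
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
)

// waitUntilReady blocks until cc reaches Ready, or ctx is done.
func waitUntilReady(ctx context.Context, cc *grpc.ClientConn) bool {
	for {
		s := cc.GetState()
		if s == connectivity.Ready {
			return true
		}
		if !cc.WaitForStateChange(ctx, s) {
			return false // ctx expired or was canceled
		}
	}
}
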
+// lbWatcher watches the Notify channel of the balancer in cc and manages
+// connections accordingly. If doneChan is not nil, it is closed after the
+// first successful connection is made.
+func (cc *ClientConn) lbWatcher(doneChan chan struct{}) {
+ defer func() {
+ // If the channel from cc.dopts.balancer.Notify() is closed before a
+ // successful connection is established, still notify the caller.
+ if doneChan != nil {
+ close(doneChan)
+ }
+ }()
+
for addrs := range cc.dopts.balancer.Notify() {
var (
add []Address // Addresses need to setup connections.
@@ -465,7 +615,19 @@ func (cc *ClientConn) lbWatcher() {
}
cc.mu.Unlock()
for _, a := range add {
- cc.resetAddrConn(a, true, nil)
+ var err error
+ if doneChan != nil {
+ err = cc.resetAddrConn(a, true, nil)
+ if err == nil {
+ close(doneChan)
+ doneChan = nil
+ }
+ } else {
+ err = cc.resetAddrConn(a, false, nil)
+ }
+ if err != nil {
+ grpclog.Warningf("Error creating connection to %v. Err: %v", a, err)
+ }
}
for _, c := range del {
c.tearDown(errConnDrain)
@@ -494,14 +656,18 @@ func (cc *ClientConn) scWatcher() {
// resetAddrConn creates an addrConn for addr and adds it to cc.conns.
// If there is an old addrConn for addr, it will be torn down, using tearDownErr as the reason.
// If tearDownErr is nil, errConnDrain will be used instead.
-func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr error) error {
+//
+// We should never need to replace an addrConn with a new one. In practice this
+// function now acts as newAddrConn, creating a new addrConn for addr.
+// TODO rename this function and clean up the code.
+func (cc *ClientConn) resetAddrConn(addr Address, block bool, tearDownErr error) error {
ac := &addrConn{
cc: cc,
addr: addr,
dopts: cc.dopts,
}
ac.ctx, ac.cancel = context.WithCancel(cc.ctx)
- ac.stateCV = sync.NewCond(&ac.mu)
+ ac.csEvltr = cc.csEvltr
if EnableTracing {
ac.events = trace.NewEventLog("grpc.ClientConn", ac.addr.Addr)
}
@@ -530,10 +696,7 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
cc.mu.Unlock()
if stale != nil {
// There is an addrConn alive on ac.addr already. This could be due to
- // 1) a buggy Balancer notifies duplicated Addresses;
- // 2) goaway was received, a new ac will replace the old ac.
- // The old ac should be deleted from cc.conns, but the
- // underlying transport should drain rather than close.
+ // a buggy Balancer that reports duplicated Addresses.
if tearDownErr == nil {
// tearDownErr is nil if resetAddrConn is called by
// 1) Dial
@@ -544,8 +707,7 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
stale.tearDown(tearDownErr)
}
}
- // skipWait may overwrite the decision in ac.dopts.block.
- if ac.dopts.block && !skipWait {
+ if block {
if err := ac.resetTransport(false); err != nil {
if err != errConnClosing {
// Tear down ac and delete it from cc.conns.
@@ -565,7 +727,7 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
// Start a goroutine connecting to the server asynchronously.
go func() {
if err := ac.resetTransport(false); err != nil {
- grpclog.Printf("Failed to dial %s: %v; please retry.", ac.addr.Addr, err)
+ grpclog.Warningf("Failed to dial %s: %v; please retry.", ac.addr.Addr, err)
if err != errConnClosing {
// Keep this ac in cc.conns, to get the reason it's torn down.
ac.tearDown(err)
@@ -578,12 +740,23 @@ func (cc *ClientConn) resetAddrConn(addr Address, skipWait bool, tearDownErr err
return nil
}
-// TODO: Avoid the locking here.
-func (cc *ClientConn) getMethodConfig(method string) (m MethodConfig, ok bool) {
+// GetMethodConfig gets the method config of the input method.
+// If there's an exact match for the input method (i.e. /service/method), we
+// return the corresponding MethodConfig.
+// If there isn't an exact match for the input method, we look for the default config
+// under the service (i.e. /service/). If there is a default MethodConfig for
+// the service, we return it.
+// Otherwise, we return an empty MethodConfig.
+func (cc *ClientConn) GetMethodConfig(method string) MethodConfig {
+ // TODO: Avoid the locking here.
cc.mu.RLock()
defer cc.mu.RUnlock()
- m, ok = cc.sc.Methods[method]
- return
+ m, ok := cc.sc.Methods[method]
+ if !ok {
+ i := strings.LastIndex(method, "/")
+ m, _ = cc.sc.Methods[method[:i+1]]
+ }
+ return m
}
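
A self-contained illustration of the lookup order GetMethodConfig performs (the service and entries are hypothetical):

package main

import (
	"fmt"
	"strings"
)

func main() {
	methods := map[string]string{
		"/acme.Greeter/SayHello": "per-method config",
		"/acme.Greeter/":         "service default",
	}
	lookup := func(method string) string {
		if m, ok := methods[method]; ok {
			return m // exact /service/method match wins
		}
		i := strings.LastIndex(method, "/")
		return methods[method[:i+1]] // fall back to the /service/ default
	}
	fmt.Println(lookup("/acme.Greeter/SayHello"))   // per-method config
	fmt.Println(lookup("/acme.Greeter/SayGoodbye")) // service default
}
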
func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions) (transport.ClientTransport, func(), error) {
@@ -624,6 +797,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
}
if !ok {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
put()
}
return nil, nil, errConnClosing
@@ -631,6 +805,7 @@ func (cc *ClientConn) getTransport(ctx context.Context, opts BalancerGetOptions)
t, err := ac.wait(ctx, cc.dopts.balancer != nil, !opts.BlockingWait)
if err != nil {
if put != nil {
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: false, bytesReceived: false})
put()
}
return nil, nil, err
@@ -649,6 +824,7 @@ func (cc *ClientConn) Close() error {
}
conns := cc.conns
cc.conns = nil
+ cc.csMgr.updateState(connectivity.Shutdown)
cc.mu.Unlock()
if cc.dopts.balancer != nil {
cc.dopts.balancer.Close()
@@ -669,10 +845,11 @@ type addrConn struct {
dopts dialOptions
events trace.EventLog
- mu sync.Mutex
- state ConnectivityState
- stateCV *sync.Cond
- down func(error) // the handler called when a connection is down.
+ csEvltr *connectivityStateEvaluator
+
+ mu sync.Mutex
+ state connectivity.State
+ down func(error) // the handler called when a connection is down.
// ready is closed and becomes nil when a new transport is up or failed
// due to timeout.
ready chan struct{}
@@ -682,6 +859,20 @@ type addrConn struct {
tearDownErr error
}
+// adjustParams updates parameters used to create transports upon
+// receiving a GoAway.
+func (ac *addrConn) adjustParams(r transport.GoAwayReason) {
+ switch r {
+ case transport.TooManyPings:
+ v := 2 * ac.dopts.copts.KeepaliveParams.Time
+ ac.cc.mu.Lock()
+ if v > ac.cc.mkp.Time {
+ ac.cc.mkp.Time = v
+ }
+ ac.cc.mu.Unlock()
+ }
+}
+
// printf records an event in ac's event log, unless ac has been closed.
// REQUIRES ac.mu is held.
func (ac *addrConn) printf(format string, a ...interface{}) {
@@ -698,62 +889,41 @@ func (ac *addrConn) errorf(format string, a ...interface{}) {
}
}
-// getState returns the connectivity state of the Conn
-func (ac *addrConn) getState() ConnectivityState {
- ac.mu.Lock()
- defer ac.mu.Unlock()
- return ac.state
-}
-
-// waitForStateChange blocks until the state changes to something other than the sourceState.
-func (ac *addrConn) waitForStateChange(ctx context.Context, sourceState ConnectivityState) (ConnectivityState, error) {
+// resetTransport recreates a transport to the address for ac.
+// For the old transport:
+// - if drain is true, it will be gracefully closed.
+// - otherwise, it will be closed.
+func (ac *addrConn) resetTransport(drain bool) error {
ac.mu.Lock()
- defer ac.mu.Unlock()
- if sourceState != ac.state {
- return ac.state, nil
+ if ac.state == connectivity.Shutdown {
+ ac.mu.Unlock()
+ return errConnClosing
}
- done := make(chan struct{})
- var err error
- go func() {
- select {
- case <-ctx.Done():
- ac.mu.Lock()
- err = ctx.Err()
- ac.stateCV.Broadcast()
- ac.mu.Unlock()
- case <-done:
- }
- }()
- defer close(done)
- for sourceState == ac.state {
- ac.stateCV.Wait()
- if err != nil {
- return ac.state, err
- }
+ ac.printf("connecting")
+ if ac.down != nil {
+ ac.down(downErrorf(false, true, "%v", errNetworkIO))
+ ac.down = nil
}
- return ac.state, nil
-}
-
-func (ac *addrConn) resetTransport(closeTransport bool) error {
+ oldState := ac.state
+ ac.state = connectivity.Connecting
+ ac.csEvltr.recordTransition(oldState, ac.state)
+ t := ac.transport
+ ac.transport = nil
+ ac.mu.Unlock()
+ if t != nil && !drain {
+ t.Close()
+ }
+ ac.cc.mu.RLock()
+ ac.dopts.copts.KeepaliveParams = ac.cc.mkp
+ ac.cc.mu.RUnlock()
for retries := 0; ; retries++ {
ac.mu.Lock()
- ac.printf("connecting")
- if ac.state == Shutdown {
+ if ac.state == connectivity.Shutdown {
// ac.tearDown(...) has been invoked.
ac.mu.Unlock()
return errConnClosing
}
- if ac.down != nil {
- ac.down(downErrorf(false, true, "%v", errNetworkIO))
- ac.down = nil
- }
- ac.state = Connecting
- ac.stateCV.Broadcast()
- t := ac.transport
ac.mu.Unlock()
- if closeTransport && t != nil {
- t.Close()
- }
sleepTime := ac.dopts.bs.backoff(retries)
timeout := minConnectTimeout
if timeout < sleepTime {
@@ -766,45 +936,51 @@ func (ac *addrConn) resetTransport(closeTransport bool) error {
Metadata: ac.addr.Metadata,
}
newTransport, err := transport.NewClientTransport(ctx, sinfo, ac.dopts.copts)
+ // Don't call cancel in success path due to a race in Go 1.6:
+ // https://github.com/golang/go/issues/15078.
if err != nil {
cancel()
if e, ok := err.(transport.ConnectionError); ok && !e.Temporary() {
return err
}
- grpclog.Printf("grpc: addrConn.resetTransport failed to create client transport: %v; Reconnecting to %v", err, ac.addr)
+ grpclog.Warningf("grpc: addrConn.resetTransport failed to create client transport: %v; Reconnecting to %v", err, ac.addr)
ac.mu.Lock()
- if ac.state == Shutdown {
+ if ac.state == connectivity.Shutdown {
// ac.tearDown(...) has been invoked.
ac.mu.Unlock()
return errConnClosing
}
ac.errorf("transient failure: %v", err)
- ac.state = TransientFailure
- ac.stateCV.Broadcast()
+ oldState = ac.state
+ ac.state = connectivity.TransientFailure
+ ac.csEvltr.recordTransition(oldState, ac.state)
if ac.ready != nil {
close(ac.ready)
ac.ready = nil
}
ac.mu.Unlock()
- closeTransport = false
+ timer := time.NewTimer(sleepTime - time.Since(connectTime))
select {
- case <-time.After(sleepTime - time.Since(connectTime)):
+ case <-timer.C:
case <-ac.ctx.Done():
+ timer.Stop()
return ac.ctx.Err()
}
+ timer.Stop()
continue
}
ac.mu.Lock()
ac.printf("ready")
- if ac.state == Shutdown {
+ if ac.state == connectivity.Shutdown {
// ac.tearDown(...) has been invoked.
ac.mu.Unlock()
newTransport.Close()
return errConnClosing
}
- ac.state = Ready
- ac.stateCV.Broadcast()
+ oldState = ac.state
+ ac.state = connectivity.Ready
+ ac.csEvltr.recordTransition(oldState, ac.state)
ac.transport = newTransport
if ac.ready != nil {
close(ac.ready)
@@ -836,43 +1012,59 @@ func (ac *addrConn) transportMonitor() {
}
return
case <-t.GoAway():
- // If GoAway happens without any network I/O error, ac is closed without shutting down the
- // underlying transport (the transport will be closed when all the pending RPCs finished or
- // failed.).
- // If GoAway and some network I/O error happen concurrently, ac and its underlying transport
- // are closed.
- // In both cases, a new ac is created.
+ ac.adjustParams(t.GetGoAwayReason())
+ // If GoAway happens without any network I/O error, the underlying transport
+ // will be gracefully closed, and a new transport will be created.
+ // (The transport will be closed when all the pending RPCs have finished or failed.)
+ // If GoAway and some network I/O error happen concurrently, the underlying transport
+ // will be closed, and a new transport will be created.
+ var drain bool
select {
case <-t.Error():
- ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
default:
- ac.cc.resetAddrConn(ac.addr, true, errConnDrain)
+ drain = true
+ }
+ if err := ac.resetTransport(drain); err != nil {
+ grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err)
+ if err != errConnClosing {
+ // Keep this ac in cc.conns, to get the reason it's torn down.
+ ac.tearDown(err)
+ }
+ return
}
- return
case <-t.Error():
select {
case <-ac.ctx.Done():
t.Close()
return
case <-t.GoAway():
- ac.cc.resetAddrConn(ac.addr, true, errNetworkIO)
- return
+ ac.adjustParams(t.GetGoAwayReason())
+ if err := ac.resetTransport(false); err != nil {
+ grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err)
+ if err != errConnClosing {
+ // Keep this ac in cc.conns, to get the reason it's torn down.
+ ac.tearDown(err)
+ }
+ return
+ }
default:
}
ac.mu.Lock()
- if ac.state == Shutdown {
+ if ac.state == connectivity.Shutdown {
// ac has been shutdown.
ac.mu.Unlock()
return
}
- ac.state = TransientFailure
- ac.stateCV.Broadcast()
+ oldState := ac.state
+ ac.state = connectivity.TransientFailure
+ ac.csEvltr.recordTransition(oldState, ac.state)
ac.mu.Unlock()
- if err := ac.resetTransport(true); err != nil {
+ if err := ac.resetTransport(false); err != nil {
+ grpclog.Infof("get error from resetTransport %v, transportMonitor returning", err)
ac.mu.Lock()
ac.printf("transport exiting: %v", err)
ac.mu.Unlock()
- grpclog.Printf("grpc: addrConn.transportMonitor exits due to: %v", err)
+ grpclog.Warningf("grpc: addrConn.transportMonitor exits due to: %v", err)
if err != errConnClosing {
// Keep this ac in cc.conns, to get the reason it's torn down.
ac.tearDown(err)
@@ -884,12 +1076,12 @@ func (ac *addrConn) transportMonitor() {
}
// wait blocks until i) the new transport is up or ii) ctx is done or iii) ac is closed or
-// iv) transport is in TransientFailure and there is a balancer/failfast is true.
+// iv) transport is in connectivity.TransientFailure and either there is a balancer or failfast is true.
func (ac *addrConn) wait(ctx context.Context, hasBalancer, failfast bool) (transport.ClientTransport, error) {
for {
ac.mu.Lock()
switch {
- case ac.state == Shutdown:
+ case ac.state == connectivity.Shutdown:
if failfast || !hasBalancer {
// RPC is failfast or balancer is nil. This RPC should fail with ac.tearDownErr.
err := ac.tearDownErr
@@ -898,11 +1090,11 @@ func (ac *addrConn) wait(ctx context.Context, hasBalancer, failfast bool) (trans
}
ac.mu.Unlock()
return nil, errConnClosing
- case ac.state == Ready:
+ case ac.state == connectivity.Ready:
ct := ac.transport
ac.mu.Unlock()
return ct, nil
- case ac.state == TransientFailure:
+ case ac.state == connectivity.TransientFailure:
if failfast || hasBalancer {
ac.mu.Unlock()
return nil, errConnUnavailable
@@ -944,12 +1136,13 @@ func (ac *addrConn) tearDown(err error) {
// address removal and GoAway.
ac.transport.GracefulClose()
}
- if ac.state == Shutdown {
+ if ac.state == connectivity.Shutdown {
return
}
- ac.state = Shutdown
+ oldState := ac.state
+ ac.state = connectivity.Shutdown
ac.tearDownErr = err
- ac.stateCV.Broadcast()
+ ac.csEvltr.recordTransition(oldState, ac.state)
if ac.events != nil {
ac.events.Finish()
ac.events = nil
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codec.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codec.go
new file mode 100644
index 00000000..905b048e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codec.go
@@ -0,0 +1,104 @@
+/*
+ *
+ * Copyright 2014 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpc
+
+import (
+ "math"
+ "sync"
+
+ "github.com/golang/protobuf/proto"
+)
+
+// Codec defines the interface gRPC uses to encode and decode messages.
+// Note that implementations of this interface must be thread safe;
+// a Codec's methods can be called from concurrent goroutines.
+type Codec interface {
+ // Marshal returns the wire format of v.
+ Marshal(v interface{}) ([]byte, error)
+ // Unmarshal parses the wire format into v.
+ Unmarshal(data []byte, v interface{}) error
+ // String returns the name of the Codec implementation. The returned
+ // string will be used as part of the content type in transmission.
+ String() string
+}
+
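
A hedged sketch of a custom Codec satisfying the interface above; jsonCodec is an illustrative name, not part of the gRPC API. It would be installed on a client with grpc.WithCodec(jsonCodec{}); the thread-safety requirement is met because encoding/json is safe for concurrent use.

package main

import (
	"encoding/json"
	"fmt"
)

type jsonCodec struct{}

func (jsonCodec) Marshal(v interface{}) ([]byte, error)      { return json.Marshal(v) }
func (jsonCodec) Unmarshal(data []byte, v interface{}) error { return json.Unmarshal(data, v) }
func (jsonCodec) String() string                             { return "json" } // used in the content type

func main() {
	var c jsonCodec
	b, _ := c.Marshal(map[string]int{"n": 1})
	fmt.Println(string(b), c.String()) // {"n":1} json
}
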
+// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
+type protoCodec struct {
+}
+
+type cachedProtoBuffer struct {
+ lastMarshaledSize uint32
+ proto.Buffer
+}
+
+func capToMaxInt32(val int) uint32 {
+ if val > math.MaxInt32 {
+ return uint32(math.MaxInt32)
+ }
+ return uint32(val)
+}
+
+func (p protoCodec) marshal(v interface{}, cb *cachedProtoBuffer) ([]byte, error) {
+ protoMsg := v.(proto.Message)
+ newSlice := make([]byte, 0, cb.lastMarshaledSize)
+
+ cb.SetBuf(newSlice)
+ cb.Reset()
+ if err := cb.Marshal(protoMsg); err != nil {
+ return nil, err
+ }
+ out := cb.Bytes()
+ cb.lastMarshaledSize = capToMaxInt32(len(out))
+ return out, nil
+}
+
+func (p protoCodec) Marshal(v interface{}) ([]byte, error) {
+ cb := protoBufferPool.Get().(*cachedProtoBuffer)
+ out, err := p.marshal(v, cb)
+
+ // put back buffer and lose the ref to the slice
+ cb.SetBuf(nil)
+ protoBufferPool.Put(cb)
+ return out, err
+}
+
+func (p protoCodec) Unmarshal(data []byte, v interface{}) error {
+ cb := protoBufferPool.Get().(*cachedProtoBuffer)
+ cb.SetBuf(data)
+ v.(proto.Message).Reset()
+ err := cb.Unmarshal(v.(proto.Message))
+ cb.SetBuf(nil)
+ protoBufferPool.Put(cb)
+ return err
+}
+
+func (protoCodec) String() string {
+ return "proto"
+}
+
+var (
+ protoBufferPool = &sync.Pool{
+ New: func() interface{} {
+ return &cachedProtoBuffer{
+ Buffer: proto.Buffer{},
+ lastMarshaledSize: 16,
+ }
+ },
+ }
+)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codes/codes.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codes/codes.go
index e14b464a..21e7733a 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codes/codes.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/codes/codes.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -44,7 +29,7 @@ const (
// OK is returned on success.
OK Code = 0
- // Canceled indicates the operation was cancelled (typically by the caller).
+ // Canceled indicates the operation was canceled (typically by the caller).
Canceled Code = 1
// Unknown error. An example of where this error may be returned is
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/connectivity/connectivity.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/connectivity/connectivity.go
new file mode 100644
index 00000000..568ef5dc
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/connectivity/connectivity.go
@@ -0,0 +1,72 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Package connectivity defines connectivity semantics.
+// For details, see https://github.com/grpc/grpc/blob/master/doc/connectivity-semantics-and-api.md.
+// All APIs in this package are experimental.
+package connectivity
+
+import (
+ "golang.org/x/net/context"
+ "google.golang.org/grpc/grpclog"
+)
+
+// State indicates the state of connectivity.
+// It can be the state of a ClientConn or SubConn.
+type State int
+
+func (s State) String() string {
+ switch s {
+ case Idle:
+ return "IDLE"
+ case Connecting:
+ return "CONNECTING"
+ case Ready:
+ return "READY"
+ case TransientFailure:
+ return "TRANSIENT_FAILURE"
+ case Shutdown:
+ return "SHUTDOWN"
+ default:
+ grpclog.Errorf("unknown connectivity state: %d", s)
+ return "Invalid-State"
+ }
+}
+
+const (
+ // Idle indicates the ClientConn is idle.
+ Idle State = iota
+ // Connecting indicates the ClientConn is connecting.
+ Connecting
+ // Ready indicates the ClientConn is ready for work.
+ Ready
+ // TransientFailure indicates the ClientConn has seen a failure but expects to recover.
+ TransientFailure
+ // Shutdown indicates the ClientConn has started shutting down.
+ Shutdown
+)
+
+// Reporter reports the connectivity states.
+type Reporter interface {
+ // CurrentState returns the current state of the reporter.
+ CurrentState() State
+ // WaitForStateChange blocks until the reporter's state is different from the given state,
+ // and returns true.
+ // It returns false if <-ctx.Done() can proceed (ctx timed out or was canceled).
+ WaitForStateChange(context.Context, State) bool
+}
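
A tiny sketch exercising the State values and their String form:

package main

import (
	"fmt"

	"google.golang.org/grpc/connectivity"
)

func main() {
	for s := connectivity.Idle; s <= connectivity.Shutdown; s++ {
		fmt.Printf("%d %s\n", int(s), s) // 0 IDLE ... 4 SHUTDOWN
	}
}
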
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials.go
index 4d45c3e3..2475fe83 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -102,6 +87,10 @@ type TransportCredentials interface {
// authentication protocol on rawConn for clients. It returns the authenticated
// connection and the corresponding auth information about the connection.
// Implementations must use the provided context to implement timely cancellation.
+ // gRPC will try to reconnect if the error returned is a temporary error
+ // (io.EOF, context.DeadlineExceeded or err.Temporary() == true).
+ // If the returned error is a wrapper error, implementations should make sure that
+ // the error implements Temporary() to have the correct retry behaviors.
ClientHandshake(context.Context, string, net.Conn) (net.Conn, AuthInfo, error)
// ServerHandshake does the authentication handshake for servers. It returns
// the authenticated connection and the corresponding auth information about
@@ -192,14 +181,14 @@ func NewTLS(c *tls.Config) TransportCredentials {
return tc
}
-// NewClientTLSFromCert constructs a TLS from the input certificate for client.
+// NewClientTLSFromCert constructs TLS credentials from the input certificate for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromCert(cp *x509.CertPool, serverNameOverride string) TransportCredentials {
return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp})
}
-// NewClientTLSFromFile constructs a TLS from the input certificate file for client.
+// NewClientTLSFromFile constructs TLS credentials from the input certificate file for client.
// serverNameOverride is for testing only. If set to a non empty string,
// it will override the virtual host name of authority (e.g. :authority header field) in requests.
func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredentials, error) {
@@ -214,12 +203,12 @@ func NewClientTLSFromFile(certFile, serverNameOverride string) (TransportCredent
return NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp}), nil
}
-// NewServerTLSFromCert constructs a TLS from the input certificate for server.
+// NewServerTLSFromCert constructs TLS credentials from the input certificate for server.
func NewServerTLSFromCert(cert *tls.Certificate) TransportCredentials {
return NewTLS(&tls.Config{Certificates: []tls.Certificate{*cert}})
}
-// NewServerTLSFromFile constructs a TLS from the input certificate file and key
+// NewServerTLSFromFile constructs TLS credentials from the input certificate file and key
// file for server.
func NewServerTLSFromFile(certFile, keyFile string) (TransportCredentials, error) {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go17.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go17.go
index 9647b9ec..60409aac 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go17.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go17.go
@@ -1,35 +1,21 @@
// +build go1.7
+// +build !go1.8
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -44,8 +30,6 @@ import (
// contains a mutex and must not be copied.
//
// If cfg is nil, a new zero tls.Config is returned.
-//
-// TODO replace this function with official clone function.
func cloneTLSConfig(cfg *tls.Config) *tls.Config {
if cfg == nil {
return &tls.Config{}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go18.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go18.go
new file mode 100644
index 00000000..93f0e1d8
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_go18.go
@@ -0,0 +1,38 @@
+// +build go1.8
+
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package credentials
+
+import (
+ "crypto/tls"
+)
+
+// cloneTLSConfig returns a shallow clone of the exported
+// fields of cfg, ignoring the unexported sync.Once, which
+// contains a mutex and must not be copied.
+//
+// If cfg is nil, a new zero tls.Config is returned.
+func cloneTLSConfig(cfg *tls.Config) *tls.Config {
+ if cfg == nil {
+ return &tls.Config{}
+ }
+
+ return cfg.Clone()
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_pre_go17.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_pre_go17.go
index 09b8d12c..d6bbcc9f 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_pre_go17.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/credentials/credentials_util_pre_go17.go
@@ -2,34 +2,19 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -44,8 +29,6 @@ import (
// contains a mutex and must not be copied.
//
// If cfg is nil, a new zero tls.Config is returned.
-//
-// TODO replace this function with official clone function.
func cloneTLSConfig(cfg *tls.Config) *tls.Config {
if cfg == nil {
return &tls.Config{}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/doc.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/doc.go
index a35f2188..187adbb1 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/doc.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/doc.go
@@ -1,6 +1,24 @@
/*
+ *
+ * Copyright 2015 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+/*
Package grpc implements an RPC system called gRPC.
-See www.grpc.io for more information about gRPC.
+See grpc.io for more information about gRPC.
*/
package grpc // import "google.golang.org/grpc"
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go16.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go16.go
new file mode 100644
index 00000000..f3dbf217
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go16.go
@@ -0,0 +1,98 @@
+// +build go1.6,!go1.7
+
+/*
+ *
+ * Copyright 2016 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpc
+
+import (
+ "fmt"
+ "io"
+ "net"
+ "net/http"
+ "os"
+
+ "golang.org/x/net/context"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "google.golang.org/grpc/transport"
+)
+
+// dialContext connects to the address on the named network.
+func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
+ return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address)
+}
+
+func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
+ req.Cancel = ctx.Done()
+ if err := req.Write(conn); err != nil {
+ return fmt.Errorf("failed to write the HTTP request: %v", err)
+ }
+ return nil
+}
+
+// toRPCErr converts an error into an error from the status package.
+func toRPCErr(err error) error {
+ if _, ok := status.FromError(err); ok {
+ return err
+ }
+ switch e := err.(type) {
+ case transport.StreamError:
+ return status.Error(e.Code, e.Desc)
+ case transport.ConnectionError:
+ return status.Error(codes.Unavailable, e.Desc)
+ default:
+ switch err {
+ case context.DeadlineExceeded:
+ return status.Error(codes.DeadlineExceeded, err.Error())
+ case context.Canceled:
+ return status.Error(codes.Canceled, err.Error())
+ case ErrClientConnClosing:
+ return status.Error(codes.FailedPrecondition, err.Error())
+ }
+ }
+ return status.Error(codes.Unknown, err.Error())
+}
+
+// convertCode converts a standard Go error into its canonical code. Note that
+// this is only used to translate the error returned by the server applications.
+func convertCode(err error) codes.Code {
+ switch err {
+ case nil:
+ return codes.OK
+ case io.EOF:
+ return codes.OutOfRange
+ case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
+ return codes.FailedPrecondition
+ case os.ErrInvalid:
+ return codes.InvalidArgument
+ case context.Canceled:
+ return codes.Canceled
+ case context.DeadlineExceeded:
+ return codes.DeadlineExceeded
+ }
+ switch {
+ case os.IsExist(err):
+ return codes.AlreadyExists
+ case os.IsNotExist(err):
+ return codes.NotFound
+ case os.IsPermission(err):
+ return codes.PermissionDenied
+ }
+ return codes.Unknown
+}
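
convertCode above maps common Go errors onto canonical gRPC status codes so that server-side application errors surface to clients with a meaningful code. The standalone sketch below reproduces the core of that mapping; the helper name classify is hypothetical, used only because convertCode is unexported:

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"google.golang.org/grpc/codes"
)

// classify mirrors the convertCode mapping above; it is redefined here
// only for illustration since convertCode is unexported.
func classify(err error) codes.Code {
	switch err {
	case nil:
		return codes.OK
	case io.EOF:
		return codes.OutOfRange
	case context.Canceled:
		return codes.Canceled
	case context.DeadlineExceeded:
		return codes.DeadlineExceeded
	}
	switch {
	case os.IsNotExist(err):
		return codes.NotFound
	case os.IsPermission(err):
		return codes.PermissionDenied
	}
	return codes.Unknown
}

func main() {
	fmt.Println(classify(io.EOF))                   // OutOfRange
	fmt.Println(classify(context.DeadlineExceeded)) // DeadlineExceeded
}
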
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go17.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go17.go
new file mode 100644
index 00000000..a3421d99
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/go17.go
@@ -0,0 +1,98 @@
+// +build go1.7
+
+/*
+ *
+ * Copyright 2016 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpc
+
+import (
+ "context"
+ "io"
+ "net"
+ "net/http"
+ "os"
+
+ netctx "golang.org/x/net/context"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
+ "google.golang.org/grpc/transport"
+)
+
+// dialContext connects to the address on the named network.
+func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
+ return (&net.Dialer{}).DialContext(ctx, network, address)
+}
+
+func sendHTTPRequest(ctx context.Context, req *http.Request, conn net.Conn) error {
+ req = req.WithContext(ctx)
+ if err := req.Write(conn); err != nil {
+ return err
+ }
+ return nil
+}
+
+// toRPCErr converts an error into an error from the status package.
+func toRPCErr(err error) error {
+ if _, ok := status.FromError(err); ok {
+ return err
+ }
+ switch e := err.(type) {
+ case transport.StreamError:
+ return status.Error(e.Code, e.Desc)
+ case transport.ConnectionError:
+ return status.Error(codes.Unavailable, e.Desc)
+ default:
+ switch err {
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return status.Error(codes.DeadlineExceeded, err.Error())
+ case context.Canceled, netctx.Canceled:
+ return status.Error(codes.Canceled, err.Error())
+ case ErrClientConnClosing:
+ return status.Error(codes.FailedPrecondition, err.Error())
+ }
+ }
+ return status.Error(codes.Unknown, err.Error())
+}
+
+// convertCode converts a standard Go error into its canonical code. Note that
+// this is only used to translate errors returned by server applications.
+func convertCode(err error) codes.Code {
+ switch err {
+ case nil:
+ return codes.OK
+ case io.EOF:
+ return codes.OutOfRange
+ case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
+ return codes.FailedPrecondition
+ case os.ErrInvalid:
+ return codes.InvalidArgument
+ case context.Canceled, netctx.Canceled:
+ return codes.Canceled
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return codes.DeadlineExceeded
+ }
+ switch {
+ case os.IsExist(err):
+ return codes.AlreadyExists
+ case os.IsNotExist(err):
+ return codes.NotFound
+ case os.IsPermission(err):
+ return codes.PermissionDenied
+ }
+ return codes.Unknown
+}
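
go16.go and go17.go form a build-tag shim: both define the same helpers (dialContext, sendHTTPRequest, toRPCErr, convertCode) under mutually exclusive tags, so the rest of the package compiles against one API while request cancellation is wired through net.Dialer.Cancel before Go 1.7 and through DialContext from Go 1.7 on. A minimal sketch of the same pattern, using a hypothetical package shim split across two files:

// file shim_go16.go
// +build go1.6,!go1.7

package shim

import (
	"net"

	"golang.org/x/net/context"
)

// dial cancels via Dialer.Cancel, the only cancellation hook before Go 1.7.
func dial(ctx context.Context, network, addr string) (net.Conn, error) {
	return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, addr)
}

// file shim_go17.go
// +build go1.7

package shim

import (
	"context"
	"net"
)

// dial uses DialContext, available from Go 1.7 onward.
func dial(ctx context.Context, network, addr string) (net.Conn, error) {
	return (&net.Dialer{}).DialContext(ctx, network, addr)
}
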
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb.go
new file mode 100644
index 00000000..f7b6b7da
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb.go
@@ -0,0 +1,737 @@
+/*
+ *
+ * Copyright 2016 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpc
+
+import (
+ "errors"
+ "fmt"
+ "math/rand"
+ "net"
+ "sync"
+ "time"
+
+ "golang.org/x/net/context"
+ "google.golang.org/grpc/codes"
+ lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
+ "google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/metadata"
+ "google.golang.org/grpc/naming"
+)
+
+// Client API for the LoadBalancer service.
+// Mostly copied from the generated pb.go file to avoid a circular
+// dependency.
+type loadBalancerClient struct {
+ cc *ClientConn
+}
+
+func (c *loadBalancerClient) BalanceLoad(ctx context.Context, opts ...CallOption) (*balanceLoadClientStream, error) {
+ desc := &StreamDesc{
+ StreamName: "BalanceLoad",
+ ServerStreams: true,
+ ClientStreams: true,
+ }
+ stream, err := NewClientStream(ctx, desc, c.cc, "/grpc.lb.v1.LoadBalancer/BalanceLoad", opts...)
+ if err != nil {
+ return nil, err
+ }
+ x := &balanceLoadClientStream{stream}
+ return x, nil
+}
+
+type balanceLoadClientStream struct {
+ ClientStream
+}
+
+func (x *balanceLoadClientStream) Send(m *lbpb.LoadBalanceRequest) error {
+ return x.ClientStream.SendMsg(m)
+}
+
+func (x *balanceLoadClientStream) Recv() (*lbpb.LoadBalanceResponse, error) {
+ m := new(lbpb.LoadBalanceResponse)
+ if err := x.ClientStream.RecvMsg(m); err != nil {
+ return nil, err
+ }
+ return m, nil
+}
+
+// NewGRPCLBBalancer creates a grpclb load balancer.
+func NewGRPCLBBalancer(r naming.Resolver) Balancer {
+ return &balancer{
+ r: r,
+ }
+}
+
+type remoteBalancerInfo struct {
+ addr string
+ // the server name used for authentication with the remote LB server.
+ name string
+}
+
+// grpclbAddrInfo holds the information about a backend server.
+type grpclbAddrInfo struct {
+ addr Address
+ connected bool
+ // dropForRateLimiting indicates whether this particular request should be
+ // dropped by the client for rate limiting.
+ dropForRateLimiting bool
+ // dropForLoadBalancing indicates whether this particular request should be
+ // dropped by the client for load balancing.
+ dropForLoadBalancing bool
+}
+
+type balancer struct {
+ r naming.Resolver
+ target string
+ mu sync.Mutex
+ seq int // a sequence number to make sure addrCh does not get stale addresses.
+ w naming.Watcher
+ addrCh chan []Address
+ rbs []remoteBalancerInfo
+ addrs []*grpclbAddrInfo
+ next int
+ waitCh chan struct{}
+ done bool
+ expTimer *time.Timer
+ rand *rand.Rand
+
+ clientStats lbpb.ClientStats
+}
+
+func (b *balancer) watchAddrUpdates(w naming.Watcher, ch chan []remoteBalancerInfo) error {
+ updates, err := w.Next()
+ if err != nil {
+ grpclog.Warningf("grpclb: failed to get next addr update from watcher: %v", err)
+ return err
+ }
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done {
+ return ErrClientConnClosing
+ }
+ for _, update := range updates {
+ switch update.Op {
+ case naming.Add:
+ var exist bool
+ for _, v := range b.rbs {
+ // TODO: Is the same addr with different server name a different balancer?
+ if update.Addr == v.addr {
+ exist = true
+ break
+ }
+ }
+ if exist {
+ continue
+ }
+ md, ok := update.Metadata.(*naming.AddrMetadataGRPCLB)
+ if !ok {
+				// TODO: Revisit the handling here and possibly introduce a fallback mechanism.
+ grpclog.Errorf("The name resolution contains unexpected metadata %v", update.Metadata)
+ continue
+ }
+ switch md.AddrType {
+ case naming.Backend:
+				// TODO: Revisit the handling here and possibly introduce a fallback mechanism.
+ grpclog.Errorf("The name resolution does not give grpclb addresses")
+ continue
+ case naming.GRPCLB:
+ b.rbs = append(b.rbs, remoteBalancerInfo{
+ addr: update.Addr,
+ name: md.ServerName,
+ })
+ default:
+				grpclog.Errorf("Received unknown address type %d", md.AddrType)
+ continue
+ }
+ case naming.Delete:
+ for i, v := range b.rbs {
+ if update.Addr == v.addr {
+ copy(b.rbs[i:], b.rbs[i+1:])
+ b.rbs = b.rbs[:len(b.rbs)-1]
+ break
+ }
+ }
+ default:
+ grpclog.Errorf("Unknown update.Op %v", update.Op)
+ }
+ }
+ // TODO: Fall back to the basic round-robin load balancing if the resulting address is
+ // not a load balancer.
+ select {
+ case <-ch:
+ default:
+ }
+ ch <- b.rbs
+ return nil
+}
+
+func (b *balancer) serverListExpire(seq int) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+	// TODO: gRPC internals do not clear the connections when the server list is stale.
+	// This means RPCs will keep using the existing server list until b receives a new
+	// server list, even though the current one has expired. Revisit this behavior later.
+ if b.done || seq < b.seq {
+ return
+ }
+ b.next = 0
+ b.addrs = nil
+ // Ask grpc internals to close all the corresponding connections.
+ b.addrCh <- nil
+}
+
+func convertDuration(d *lbpb.Duration) time.Duration {
+ if d == nil {
+ return 0
+ }
+ return time.Duration(d.Seconds)*time.Second + time.Duration(d.Nanos)*time.Nanosecond
+}
+
+func (b *balancer) processServerList(l *lbpb.ServerList, seq int) {
+ if l == nil {
+ return
+ }
+ servers := l.GetServers()
+ expiration := convertDuration(l.GetExpirationInterval())
+ var (
+ sl []*grpclbAddrInfo
+ addrs []Address
+ )
+ for _, s := range servers {
+ md := metadata.Pairs("lb-token", s.LoadBalanceToken)
+ ip := net.IP(s.IpAddress)
+ ipStr := ip.String()
+ if ip.To4() == nil {
+			// Add square brackets to IPv6 addresses, otherwise net.Dial() and
+			// net.SplitHostPort() will return a "too many colons" error.
+ ipStr = fmt.Sprintf("[%s]", ipStr)
+ }
+ addr := Address{
+ Addr: fmt.Sprintf("%s:%d", ipStr, s.Port),
+ Metadata: &md,
+ }
+ sl = append(sl, &grpclbAddrInfo{
+ addr: addr,
+ dropForRateLimiting: s.DropForRateLimiting,
+ dropForLoadBalancing: s.DropForLoadBalancing,
+ })
+ addrs = append(addrs, addr)
+ }
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done || seq < b.seq {
+ return
+ }
+ if len(sl) > 0 {
+ // reset b.next to 0 when replacing the server list.
+ b.next = 0
+ b.addrs = sl
+ b.addrCh <- addrs
+ if b.expTimer != nil {
+ b.expTimer.Stop()
+ b.expTimer = nil
+ }
+ if expiration > 0 {
+ b.expTimer = time.AfterFunc(expiration, func() {
+ b.serverListExpire(seq)
+ })
+ }
+ }
+ return
+}
+
+func (b *balancer) sendLoadReport(s *balanceLoadClientStream, interval time.Duration, done <-chan struct{}) {
+ ticker := time.NewTicker(interval)
+ defer ticker.Stop()
+ for {
+ select {
+ case <-ticker.C:
+ case <-done:
+ return
+ }
+ b.mu.Lock()
+ stats := b.clientStats
+ b.clientStats = lbpb.ClientStats{} // Clear the stats.
+ b.mu.Unlock()
+ t := time.Now()
+ stats.Timestamp = &lbpb.Timestamp{
+ Seconds: t.Unix(),
+ Nanos: int32(t.Nanosecond()),
+ }
+ if err := s.Send(&lbpb.LoadBalanceRequest{
+ LoadBalanceRequestType: &lbpb.LoadBalanceRequest_ClientStats{
+ ClientStats: &stats,
+ },
+ }); err != nil {
+ grpclog.Errorf("grpclb: failed to send load report: %v", err)
+ return
+ }
+ }
+}
+
+func (b *balancer) callRemoteBalancer(lbc *loadBalancerClient, seq int) (retry bool) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ stream, err := lbc.BalanceLoad(ctx)
+ if err != nil {
+		grpclog.Errorf("grpclb: failed to perform RPC to the remote balancer: %v", err)
+ return
+ }
+ b.mu.Lock()
+ if b.done {
+ b.mu.Unlock()
+ return
+ }
+ b.mu.Unlock()
+ initReq := &lbpb.LoadBalanceRequest{
+ LoadBalanceRequestType: &lbpb.LoadBalanceRequest_InitialRequest{
+ InitialRequest: &lbpb.InitialLoadBalanceRequest{
+ Name: b.target,
+ },
+ },
+ }
+ if err := stream.Send(initReq); err != nil {
+ grpclog.Errorf("grpclb: failed to send init request: %v", err)
+ // TODO: backoff on retry?
+ return true
+ }
+ reply, err := stream.Recv()
+ if err != nil {
+ grpclog.Errorf("grpclb: failed to recv init response: %v", err)
+ // TODO: backoff on retry?
+ return true
+ }
+ initResp := reply.GetInitialResponse()
+ if initResp == nil {
+ grpclog.Errorf("grpclb: reply from remote balancer did not include initial response.")
+ return
+ }
+ // TODO: Support delegation.
+ if initResp.LoadBalancerDelegate != "" {
+ // delegation
+ grpclog.Errorf("TODO: Delegation is not supported yet.")
+ return
+ }
+ streamDone := make(chan struct{})
+ defer close(streamDone)
+ b.mu.Lock()
+ b.clientStats = lbpb.ClientStats{} // Clear client stats.
+ b.mu.Unlock()
+ if d := convertDuration(initResp.ClientStatsReportInterval); d > 0 {
+ go b.sendLoadReport(stream, d, streamDone)
+ }
+ // Retrieve the server list.
+ for {
+ reply, err := stream.Recv()
+ if err != nil {
+ grpclog.Errorf("grpclb: failed to recv server list: %v", err)
+ break
+ }
+ b.mu.Lock()
+ if b.done || seq < b.seq {
+ b.mu.Unlock()
+ return
+ }
+ b.seq++ // tick when receiving a new list of servers.
+ seq = b.seq
+ b.mu.Unlock()
+ if serverList := reply.GetServerList(); serverList != nil {
+ b.processServerList(serverList, seq)
+ }
+ }
+ return true
+}
+
+func (b *balancer) Start(target string, config BalancerConfig) error {
+ b.rand = rand.New(rand.NewSource(time.Now().Unix()))
+ // TODO: Fall back to the basic direct connection if there is no name resolver.
+ if b.r == nil {
+ return errors.New("there is no name resolver installed")
+ }
+ b.target = target
+ b.mu.Lock()
+ if b.done {
+ b.mu.Unlock()
+ return ErrClientConnClosing
+ }
+ b.addrCh = make(chan []Address)
+ w, err := b.r.Resolve(target)
+ if err != nil {
+ b.mu.Unlock()
+ grpclog.Errorf("grpclb: failed to resolve address: %v, err: %v", target, err)
+ return err
+ }
+ b.w = w
+ b.mu.Unlock()
+ balancerAddrsCh := make(chan []remoteBalancerInfo, 1)
+	// Spawn a goroutine to monitor the name resolution of the remote load balancer.
+ go func() {
+ for {
+ if err := b.watchAddrUpdates(w, balancerAddrsCh); err != nil {
+ grpclog.Warningf("grpclb: the naming watcher stops working due to %v.\n", err)
+ close(balancerAddrsCh)
+ return
+ }
+ }
+ }()
+ // Spawn a goroutine to talk to the remote load balancer.
+ go func() {
+ var (
+ cc *ClientConn
+ // ccError is closed when there is an error in the current cc.
+ // A new rb should be picked from rbs and connected.
+ ccError chan struct{}
+ rb *remoteBalancerInfo
+ rbs []remoteBalancerInfo
+ rbIdx int
+ )
+
+ defer func() {
+ if ccError != nil {
+ select {
+ case <-ccError:
+ default:
+ close(ccError)
+ }
+ }
+ if cc != nil {
+ cc.Close()
+ }
+ }()
+
+ for {
+ var ok bool
+ select {
+ case rbs, ok = <-balancerAddrsCh:
+ if !ok {
+ return
+ }
+ foundIdx := -1
+ if rb != nil {
+ for i, trb := range rbs {
+ if trb == *rb {
+ foundIdx = i
+ break
+ }
+ }
+ }
+ if foundIdx >= 0 {
+ if foundIdx >= 1 {
+ // Move the address in use to the beginning of the list.
+ b.rbs[0], b.rbs[foundIdx] = b.rbs[foundIdx], b.rbs[0]
+ rbIdx = 0
+ }
+ continue // If found, don't dial new cc.
+ } else if len(rbs) > 0 {
+ // Pick a random one from the list, instead of always using the first one.
+ if l := len(rbs); l > 1 && rb != nil {
+ tmpIdx := b.rand.Intn(l - 1)
+ b.rbs[0], b.rbs[tmpIdx] = b.rbs[tmpIdx], b.rbs[0]
+ }
+ rbIdx = 0
+ rb = &rbs[0]
+ } else {
+ // foundIdx < 0 && len(rbs) <= 0.
+ rb = nil
+ }
+ case <-ccError:
+ ccError = nil
+ if rbIdx < len(rbs)-1 {
+ rbIdx++
+ rb = &rbs[rbIdx]
+ } else {
+ rb = nil
+ }
+ }
+
+ if rb == nil {
+ continue
+ }
+
+ if cc != nil {
+ cc.Close()
+ }
+ // Talk to the remote load balancer to get the server list.
+ var (
+ err error
+ dopts []DialOption
+ )
+ if creds := config.DialCreds; creds != nil {
+ if rb.name != "" {
+ if err := creds.OverrideServerName(rb.name); err != nil {
+ grpclog.Warningf("grpclb: failed to override the server name in the credentials: %v", err)
+ continue
+ }
+ }
+ dopts = append(dopts, WithTransportCredentials(creds))
+ } else {
+ dopts = append(dopts, WithInsecure())
+ }
+ if dialer := config.Dialer; dialer != nil {
+ // WithDialer takes a different type of function, so we instead use a special DialOption here.
+ dopts = append(dopts, func(o *dialOptions) { o.copts.Dialer = dialer })
+ }
+ ccError = make(chan struct{})
+ cc, err = Dial(rb.addr, dopts...)
+ if err != nil {
+ grpclog.Warningf("grpclb: failed to setup a connection to the remote balancer %v: %v", rb.addr, err)
+ close(ccError)
+ continue
+ }
+ b.mu.Lock()
+ b.seq++ // tick when getting a new balancer address
+ seq := b.seq
+ b.next = 0
+ b.mu.Unlock()
+ go func(cc *ClientConn, ccError chan struct{}) {
+ lbc := &loadBalancerClient{cc}
+ b.callRemoteBalancer(lbc, seq)
+ cc.Close()
+ select {
+ case <-ccError:
+ default:
+ close(ccError)
+ }
+ }(cc, ccError)
+ }
+ }()
+ return nil
+}
+
+func (b *balancer) down(addr Address, err error) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ for _, a := range b.addrs {
+ if addr == a.addr {
+ a.connected = false
+ break
+ }
+ }
+}
+
+func (b *balancer) Up(addr Address) func(error) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done {
+ return nil
+ }
+ var cnt int
+ for _, a := range b.addrs {
+ if a.addr == addr {
+ if a.connected {
+ return nil
+ }
+ a.connected = true
+ }
+ if a.connected && !a.dropForRateLimiting && !a.dropForLoadBalancing {
+ cnt++
+ }
+ }
+	// addr is the only address that is connected. Notify the Get() callers who are blocking.
+ if cnt == 1 && b.waitCh != nil {
+ close(b.waitCh)
+ b.waitCh = nil
+ }
+ return func(err error) {
+ b.down(addr, err)
+ }
+}
+
+func (b *balancer) Get(ctx context.Context, opts BalancerGetOptions) (addr Address, put func(), err error) {
+ var ch chan struct{}
+ b.mu.Lock()
+ if b.done {
+ b.mu.Unlock()
+ err = ErrClientConnClosing
+ return
+ }
+ seq := b.seq
+
+ defer func() {
+ if err != nil {
+ return
+ }
+ put = func() {
+ s, ok := rpcInfoFromContext(ctx)
+ if !ok {
+ return
+ }
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done || seq < b.seq {
+ return
+ }
+ b.clientStats.NumCallsFinished++
+ if !s.bytesSent {
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ } else if s.bytesReceived {
+ b.clientStats.NumCallsFinishedKnownReceived++
+ }
+ }
+ }()
+
+ b.clientStats.NumCallsStarted++
+ if len(b.addrs) > 0 {
+ if b.next >= len(b.addrs) {
+ b.next = 0
+ }
+ next := b.next
+ for {
+ a := b.addrs[next]
+ next = (next + 1) % len(b.addrs)
+ if a.connected {
+ if !a.dropForRateLimiting && !a.dropForLoadBalancing {
+ addr = a.addr
+ b.next = next
+ b.mu.Unlock()
+ return
+ }
+ if !opts.BlockingWait {
+ b.next = next
+ if a.dropForLoadBalancing {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
+ } else if a.dropForRateLimiting {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForRateLimiting++
+ }
+ b.mu.Unlock()
+ err = Errorf(codes.Unavailable, "%s drops requests", a.addr.Addr)
+ return
+ }
+ }
+ if next == b.next {
+				// Iterated over all possible addresses, but none is connected.
+ break
+ }
+ }
+ }
+ if !opts.BlockingWait {
+ if len(b.addrs) == 0 {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ b.mu.Unlock()
+ err = Errorf(codes.Unavailable, "there is no address available")
+ return
+ }
+		// Returns the next addr in b.addrs for a failfast RPC.
+ addr = b.addrs[b.next].addr
+ b.next++
+ b.mu.Unlock()
+ return
+ }
+ // Wait on b.waitCh for non-failfast RPCs.
+ if b.waitCh == nil {
+ ch = make(chan struct{})
+ b.waitCh = ch
+ } else {
+ ch = b.waitCh
+ }
+ b.mu.Unlock()
+ for {
+ select {
+ case <-ctx.Done():
+ b.mu.Lock()
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ b.mu.Unlock()
+ err = ctx.Err()
+ return
+ case <-ch:
+ b.mu.Lock()
+ if b.done {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithClientFailedToSend++
+ b.mu.Unlock()
+ err = ErrClientConnClosing
+ return
+ }
+
+ if len(b.addrs) > 0 {
+ if b.next >= len(b.addrs) {
+ b.next = 0
+ }
+ next := b.next
+ for {
+ a := b.addrs[next]
+ next = (next + 1) % len(b.addrs)
+ if a.connected {
+ if !a.dropForRateLimiting && !a.dropForLoadBalancing {
+ addr = a.addr
+ b.next = next
+ b.mu.Unlock()
+ return
+ }
+ if !opts.BlockingWait {
+ b.next = next
+ if a.dropForLoadBalancing {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForLoadBalancing++
+ } else if a.dropForRateLimiting {
+ b.clientStats.NumCallsFinished++
+ b.clientStats.NumCallsFinishedWithDropForRateLimiting++
+ }
+ b.mu.Unlock()
+							err = Errorf(codes.Unavailable, "drop requests for the address %s", a.addr.Addr)
+ return
+ }
+ }
+ if next == b.next {
+						// Iterated over all possible addresses, but none is connected.
+ break
+ }
+ }
+ }
+ // The newly added addr got removed by Down() again.
+ if b.waitCh == nil {
+ ch = make(chan struct{})
+ b.waitCh = ch
+ } else {
+ ch = b.waitCh
+ }
+ b.mu.Unlock()
+ }
+ }
+}
+
+func (b *balancer) Notify() <-chan []Address {
+ return b.addrCh
+}
+
+func (b *balancer) Close() error {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.done {
+ return errBalancerClosed
+ }
+ b.done = true
+ if b.expTimer != nil {
+ b.expTimer.Stop()
+ }
+ if b.waitCh != nil {
+ close(b.waitCh)
+ }
+ if b.addrCh != nil {
+ close(b.addrCh)
+ }
+ if b.w != nil {
+ b.w.Close()
+ }
+ return nil
+}
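
The balancer above implements the (since-deprecated) grpc.Balancer interface. A hedged usage sketch — the target name is a placeholder, and a real naming.Resolver that yields naming.GRPCLB addresses must be supplied or Dial will fail:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/naming"
)

func main() {
	// r must be a real resolver returning grpclb (naming.GRPCLB)
	// addresses; constructing one is out of scope for this sketch,
	// and a nil resolver makes the balancer's Start return an error.
	var r naming.Resolver

	conn, err := grpc.Dial(
		"my-service.example.com", // hypothetical target
		grpc.WithInsecure(),
		grpc.WithBalancer(grpc.NewGRPCLBBalancer(r)),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}
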
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
new file mode 100644
index 00000000..f63941bd
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.pb.go
@@ -0,0 +1,629 @@
+// Code generated by protoc-gen-go.
+// source: grpclb.proto
+// DO NOT EDIT!
+
+/*
+Package grpc_lb_v1 is a generated protocol buffer package.
+
+It is generated from these files:
+ grpclb.proto
+
+It has these top-level messages:
+ Duration
+ Timestamp
+ LoadBalanceRequest
+ InitialLoadBalanceRequest
+ ClientStats
+ LoadBalanceResponse
+ InitialLoadBalanceResponse
+ ServerList
+ Server
+*/
+package grpc_lb_v1
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+type Duration struct {
+ // Signed seconds of the span of time. Must be from -315,576,000,000
+ // to +315,576,000,000 inclusive.
+ Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
+ // Signed fractions of a second at nanosecond resolution of the span
+ // of time. Durations less than one second are represented with a 0
+ // `seconds` field and a positive or negative `nanos` field. For durations
+ // of one second or more, a non-zero value for the `nanos` field must be
+ // of the same sign as the `seconds` field. Must be from -999,999,999
+ // to +999,999,999 inclusive.
+ Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
+}
+
+func (m *Duration) Reset() { *m = Duration{} }
+func (m *Duration) String() string { return proto.CompactTextString(m) }
+func (*Duration) ProtoMessage() {}
+func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+func (m *Duration) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Duration) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
+type Timestamp struct {
+ // Represents seconds of UTC time since Unix epoch
+ // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
+}
+
+func (m *Timestamp) Reset() { *m = Timestamp{} }
+func (m *Timestamp) String() string { return proto.CompactTextString(m) }
+func (*Timestamp) ProtoMessage() {}
+func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+func (m *Timestamp) GetSeconds() int64 {
+ if m != nil {
+ return m.Seconds
+ }
+ return 0
+}
+
+func (m *Timestamp) GetNanos() int32 {
+ if m != nil {
+ return m.Nanos
+ }
+ return 0
+}
+
+type LoadBalanceRequest struct {
+ // Types that are valid to be assigned to LoadBalanceRequestType:
+ // *LoadBalanceRequest_InitialRequest
+ // *LoadBalanceRequest_ClientStats
+ LoadBalanceRequestType isLoadBalanceRequest_LoadBalanceRequestType `protobuf_oneof:"load_balance_request_type"`
+}
+
+func (m *LoadBalanceRequest) Reset() { *m = LoadBalanceRequest{} }
+func (m *LoadBalanceRequest) String() string { return proto.CompactTextString(m) }
+func (*LoadBalanceRequest) ProtoMessage() {}
+func (*LoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+
+type isLoadBalanceRequest_LoadBalanceRequestType interface {
+ isLoadBalanceRequest_LoadBalanceRequestType()
+}
+
+type LoadBalanceRequest_InitialRequest struct {
+ InitialRequest *InitialLoadBalanceRequest `protobuf:"bytes,1,opt,name=initial_request,json=initialRequest,oneof"`
+}
+type LoadBalanceRequest_ClientStats struct {
+ ClientStats *ClientStats `protobuf:"bytes,2,opt,name=client_stats,json=clientStats,oneof"`
+}
+
+func (*LoadBalanceRequest_InitialRequest) isLoadBalanceRequest_LoadBalanceRequestType() {}
+func (*LoadBalanceRequest_ClientStats) isLoadBalanceRequest_LoadBalanceRequestType() {}
+
+func (m *LoadBalanceRequest) GetLoadBalanceRequestType() isLoadBalanceRequest_LoadBalanceRequestType {
+ if m != nil {
+ return m.LoadBalanceRequestType
+ }
+ return nil
+}
+
+func (m *LoadBalanceRequest) GetInitialRequest() *InitialLoadBalanceRequest {
+ if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_InitialRequest); ok {
+ return x.InitialRequest
+ }
+ return nil
+}
+
+func (m *LoadBalanceRequest) GetClientStats() *ClientStats {
+ if x, ok := m.GetLoadBalanceRequestType().(*LoadBalanceRequest_ClientStats); ok {
+ return x.ClientStats
+ }
+ return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*LoadBalanceRequest) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+ return _LoadBalanceRequest_OneofMarshaler, _LoadBalanceRequest_OneofUnmarshaler, _LoadBalanceRequest_OneofSizer, []interface{}{
+ (*LoadBalanceRequest_InitialRequest)(nil),
+ (*LoadBalanceRequest_ClientStats)(nil),
+ }
+}
+
+func _LoadBalanceRequest_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+ m := msg.(*LoadBalanceRequest)
+ // load_balance_request_type
+ switch x := m.LoadBalanceRequestType.(type) {
+ case *LoadBalanceRequest_InitialRequest:
+ b.EncodeVarint(1<<3 | proto.WireBytes)
+ if err := b.EncodeMessage(x.InitialRequest); err != nil {
+ return err
+ }
+ case *LoadBalanceRequest_ClientStats:
+ b.EncodeVarint(2<<3 | proto.WireBytes)
+ if err := b.EncodeMessage(x.ClientStats); err != nil {
+ return err
+ }
+ case nil:
+ default:
+ return fmt.Errorf("LoadBalanceRequest.LoadBalanceRequestType has unexpected type %T", x)
+ }
+ return nil
+}
+
+func _LoadBalanceRequest_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+ m := msg.(*LoadBalanceRequest)
+ switch tag {
+ case 1: // load_balance_request_type.initial_request
+ if wire != proto.WireBytes {
+ return true, proto.ErrInternalBadWireType
+ }
+ msg := new(InitialLoadBalanceRequest)
+ err := b.DecodeMessage(msg)
+ m.LoadBalanceRequestType = &LoadBalanceRequest_InitialRequest{msg}
+ return true, err
+ case 2: // load_balance_request_type.client_stats
+ if wire != proto.WireBytes {
+ return true, proto.ErrInternalBadWireType
+ }
+ msg := new(ClientStats)
+ err := b.DecodeMessage(msg)
+ m.LoadBalanceRequestType = &LoadBalanceRequest_ClientStats{msg}
+ return true, err
+ default:
+ return false, nil
+ }
+}
+
+func _LoadBalanceRequest_OneofSizer(msg proto.Message) (n int) {
+ m := msg.(*LoadBalanceRequest)
+ // load_balance_request_type
+ switch x := m.LoadBalanceRequestType.(type) {
+ case *LoadBalanceRequest_InitialRequest:
+ s := proto.Size(x.InitialRequest)
+ n += proto.SizeVarint(1<<3 | proto.WireBytes)
+ n += proto.SizeVarint(uint64(s))
+ n += s
+ case *LoadBalanceRequest_ClientStats:
+ s := proto.Size(x.ClientStats)
+ n += proto.SizeVarint(2<<3 | proto.WireBytes)
+ n += proto.SizeVarint(uint64(s))
+ n += s
+ case nil:
+ default:
+ panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+ }
+ return n
+}
+
+type InitialLoadBalanceRequest struct {
+	// Name of the load balanced service (e.g., balancer.service.com).
+	// Its length should be less than 256 bytes.
+ Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+}
+
+func (m *InitialLoadBalanceRequest) Reset() { *m = InitialLoadBalanceRequest{} }
+func (m *InitialLoadBalanceRequest) String() string { return proto.CompactTextString(m) }
+func (*InitialLoadBalanceRequest) ProtoMessage() {}
+func (*InitialLoadBalanceRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
+
+func (m *InitialLoadBalanceRequest) GetName() string {
+ if m != nil {
+ return m.Name
+ }
+ return ""
+}
+
+// Contains client-level statistics that are useful for load balancing. Each
+// count except the timestamp should be reset to zero after reporting the stats.
+type ClientStats struct {
+ // The timestamp of generating the report.
+ Timestamp *Timestamp `protobuf:"bytes,1,opt,name=timestamp" json:"timestamp,omitempty"`
+ // The total number of RPCs that started.
+ NumCallsStarted int64 `protobuf:"varint,2,opt,name=num_calls_started,json=numCallsStarted" json:"num_calls_started,omitempty"`
+ // The total number of RPCs that finished.
+ NumCallsFinished int64 `protobuf:"varint,3,opt,name=num_calls_finished,json=numCallsFinished" json:"num_calls_finished,omitempty"`
+ // The total number of RPCs that were dropped by the client because of rate
+ // limiting.
+ NumCallsFinishedWithDropForRateLimiting int64 `protobuf:"varint,4,opt,name=num_calls_finished_with_drop_for_rate_limiting,json=numCallsFinishedWithDropForRateLimiting" json:"num_calls_finished_with_drop_for_rate_limiting,omitempty"`
+ // The total number of RPCs that were dropped by the client because of load
+ // balancing.
+ NumCallsFinishedWithDropForLoadBalancing int64 `protobuf:"varint,5,opt,name=num_calls_finished_with_drop_for_load_balancing,json=numCallsFinishedWithDropForLoadBalancing" json:"num_calls_finished_with_drop_for_load_balancing,omitempty"`
+ // The total number of RPCs that failed to reach a server except dropped RPCs.
+ NumCallsFinishedWithClientFailedToSend int64 `protobuf:"varint,6,opt,name=num_calls_finished_with_client_failed_to_send,json=numCallsFinishedWithClientFailedToSend" json:"num_calls_finished_with_client_failed_to_send,omitempty"`
+ // The total number of RPCs that finished and are known to have been received
+ // by a server.
+ NumCallsFinishedKnownReceived int64 `protobuf:"varint,7,opt,name=num_calls_finished_known_received,json=numCallsFinishedKnownReceived" json:"num_calls_finished_known_received,omitempty"`
+}
+
+func (m *ClientStats) Reset() { *m = ClientStats{} }
+func (m *ClientStats) String() string { return proto.CompactTextString(m) }
+func (*ClientStats) ProtoMessage() {}
+func (*ClientStats) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
+
+func (m *ClientStats) GetTimestamp() *Timestamp {
+ if m != nil {
+ return m.Timestamp
+ }
+ return nil
+}
+
+func (m *ClientStats) GetNumCallsStarted() int64 {
+ if m != nil {
+ return m.NumCallsStarted
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinished() int64 {
+ if m != nil {
+ return m.NumCallsFinished
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithDropForRateLimiting() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithDropForRateLimiting
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithDropForLoadBalancing() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithDropForLoadBalancing
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedWithClientFailedToSend() int64 {
+ if m != nil {
+ return m.NumCallsFinishedWithClientFailedToSend
+ }
+ return 0
+}
+
+func (m *ClientStats) GetNumCallsFinishedKnownReceived() int64 {
+ if m != nil {
+ return m.NumCallsFinishedKnownReceived
+ }
+ return 0
+}
+
+type LoadBalanceResponse struct {
+ // Types that are valid to be assigned to LoadBalanceResponseType:
+ // *LoadBalanceResponse_InitialResponse
+ // *LoadBalanceResponse_ServerList
+ LoadBalanceResponseType isLoadBalanceResponse_LoadBalanceResponseType `protobuf_oneof:"load_balance_response_type"`
+}
+
+func (m *LoadBalanceResponse) Reset() { *m = LoadBalanceResponse{} }
+func (m *LoadBalanceResponse) String() string { return proto.CompactTextString(m) }
+func (*LoadBalanceResponse) ProtoMessage() {}
+func (*LoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+
+type isLoadBalanceResponse_LoadBalanceResponseType interface {
+ isLoadBalanceResponse_LoadBalanceResponseType()
+}
+
+type LoadBalanceResponse_InitialResponse struct {
+ InitialResponse *InitialLoadBalanceResponse `protobuf:"bytes,1,opt,name=initial_response,json=initialResponse,oneof"`
+}
+type LoadBalanceResponse_ServerList struct {
+ ServerList *ServerList `protobuf:"bytes,2,opt,name=server_list,json=serverList,oneof"`
+}
+
+func (*LoadBalanceResponse_InitialResponse) isLoadBalanceResponse_LoadBalanceResponseType() {}
+func (*LoadBalanceResponse_ServerList) isLoadBalanceResponse_LoadBalanceResponseType() {}
+
+func (m *LoadBalanceResponse) GetLoadBalanceResponseType() isLoadBalanceResponse_LoadBalanceResponseType {
+ if m != nil {
+ return m.LoadBalanceResponseType
+ }
+ return nil
+}
+
+func (m *LoadBalanceResponse) GetInitialResponse() *InitialLoadBalanceResponse {
+ if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_InitialResponse); ok {
+ return x.InitialResponse
+ }
+ return nil
+}
+
+func (m *LoadBalanceResponse) GetServerList() *ServerList {
+ if x, ok := m.GetLoadBalanceResponseType().(*LoadBalanceResponse_ServerList); ok {
+ return x.ServerList
+ }
+ return nil
+}
+
+// XXX_OneofFuncs is for the internal use of the proto package.
+func (*LoadBalanceResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
+ return _LoadBalanceResponse_OneofMarshaler, _LoadBalanceResponse_OneofUnmarshaler, _LoadBalanceResponse_OneofSizer, []interface{}{
+ (*LoadBalanceResponse_InitialResponse)(nil),
+ (*LoadBalanceResponse_ServerList)(nil),
+ }
+}
+
+func _LoadBalanceResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
+ m := msg.(*LoadBalanceResponse)
+ // load_balance_response_type
+ switch x := m.LoadBalanceResponseType.(type) {
+ case *LoadBalanceResponse_InitialResponse:
+ b.EncodeVarint(1<<3 | proto.WireBytes)
+ if err := b.EncodeMessage(x.InitialResponse); err != nil {
+ return err
+ }
+ case *LoadBalanceResponse_ServerList:
+ b.EncodeVarint(2<<3 | proto.WireBytes)
+ if err := b.EncodeMessage(x.ServerList); err != nil {
+ return err
+ }
+ case nil:
+ default:
+ return fmt.Errorf("LoadBalanceResponse.LoadBalanceResponseType has unexpected type %T", x)
+ }
+ return nil
+}
+
+func _LoadBalanceResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
+ m := msg.(*LoadBalanceResponse)
+ switch tag {
+ case 1: // load_balance_response_type.initial_response
+ if wire != proto.WireBytes {
+ return true, proto.ErrInternalBadWireType
+ }
+ msg := new(InitialLoadBalanceResponse)
+ err := b.DecodeMessage(msg)
+ m.LoadBalanceResponseType = &LoadBalanceResponse_InitialResponse{msg}
+ return true, err
+ case 2: // load_balance_response_type.server_list
+ if wire != proto.WireBytes {
+ return true, proto.ErrInternalBadWireType
+ }
+ msg := new(ServerList)
+ err := b.DecodeMessage(msg)
+ m.LoadBalanceResponseType = &LoadBalanceResponse_ServerList{msg}
+ return true, err
+ default:
+ return false, nil
+ }
+}
+
+func _LoadBalanceResponse_OneofSizer(msg proto.Message) (n int) {
+ m := msg.(*LoadBalanceResponse)
+ // load_balance_response_type
+ switch x := m.LoadBalanceResponseType.(type) {
+ case *LoadBalanceResponse_InitialResponse:
+ s := proto.Size(x.InitialResponse)
+ n += proto.SizeVarint(1<<3 | proto.WireBytes)
+ n += proto.SizeVarint(uint64(s))
+ n += s
+ case *LoadBalanceResponse_ServerList:
+ s := proto.Size(x.ServerList)
+ n += proto.SizeVarint(2<<3 | proto.WireBytes)
+ n += proto.SizeVarint(uint64(s))
+ n += s
+ case nil:
+ default:
+ panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
+ }
+ return n
+}
+
+type InitialLoadBalanceResponse struct {
+ // This is an application layer redirect that indicates the client should use
+ // the specified server for load balancing. When this field is non-empty in
+ // the response, the client should open a separate connection to the
+ // load_balancer_delegate and call the BalanceLoad method. Its length should
+ // be less than 64 bytes.
+ LoadBalancerDelegate string `protobuf:"bytes,1,opt,name=load_balancer_delegate,json=loadBalancerDelegate" json:"load_balancer_delegate,omitempty"`
+ // This interval defines how often the client should send the client stats
+ // to the load balancer. Stats should only be reported when the duration is
+ // positive.
+ ClientStatsReportInterval *Duration `protobuf:"bytes,2,opt,name=client_stats_report_interval,json=clientStatsReportInterval" json:"client_stats_report_interval,omitempty"`
+}
+
+func (m *InitialLoadBalanceResponse) Reset() { *m = InitialLoadBalanceResponse{} }
+func (m *InitialLoadBalanceResponse) String() string { return proto.CompactTextString(m) }
+func (*InitialLoadBalanceResponse) ProtoMessage() {}
+func (*InitialLoadBalanceResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+
+func (m *InitialLoadBalanceResponse) GetLoadBalancerDelegate() string {
+ if m != nil {
+ return m.LoadBalancerDelegate
+ }
+ return ""
+}
+
+func (m *InitialLoadBalanceResponse) GetClientStatsReportInterval() *Duration {
+ if m != nil {
+ return m.ClientStatsReportInterval
+ }
+ return nil
+}
+
+type ServerList struct {
+ // Contains a list of servers selected by the load balancer. The list will
+ // be updated when server resolutions change or as needed to balance load
+ // across more servers. The client should consume the server list in order
+ // unless instructed otherwise via the client_config.
+ Servers []*Server `protobuf:"bytes,1,rep,name=servers" json:"servers,omitempty"`
+ // Indicates the amount of time that the client should consider this server
+ // list as valid. It may be considered stale after waiting this interval of
+ // time after receiving the list. If the interval is not positive, the
+ // client can assume the list is valid until the next list is received.
+ ExpirationInterval *Duration `protobuf:"bytes,3,opt,name=expiration_interval,json=expirationInterval" json:"expiration_interval,omitempty"`
+}
+
+func (m *ServerList) Reset() { *m = ServerList{} }
+func (m *ServerList) String() string { return proto.CompactTextString(m) }
+func (*ServerList) ProtoMessage() {}
+func (*ServerList) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+
+func (m *ServerList) GetServers() []*Server {
+ if m != nil {
+ return m.Servers
+ }
+ return nil
+}
+
+func (m *ServerList) GetExpirationInterval() *Duration {
+ if m != nil {
+ return m.ExpirationInterval
+ }
+ return nil
+}
+
+// Contains server information. When none of the [drop_for_*] fields are true,
+// use the other fields. When drop_for_rate_limiting is true, ignore all other
+// fields. Use drop_for_load_balancing only when it is true and
+// drop_for_rate_limiting is false.
+type Server struct {
+ // A resolved address for the server, serialized in network-byte-order. It may
+ // either be an IPv4 or IPv6 address.
+ IpAddress []byte `protobuf:"bytes,1,opt,name=ip_address,json=ipAddress,proto3" json:"ip_address,omitempty"`
+ // A resolved port number for the server.
+ Port int32 `protobuf:"varint,2,opt,name=port" json:"port,omitempty"`
+ // An opaque but printable token given to the frontend for each pick. All
+ // frontend requests for that pick must include the token in its initial
+ // metadata. The token is used by the backend to verify the request and to
+ // allow the backend to report load to the gRPC LB system.
+ //
+ // Its length is variable but less than 50 bytes.
+ LoadBalanceToken string `protobuf:"bytes,3,opt,name=load_balance_token,json=loadBalanceToken" json:"load_balance_token,omitempty"`
+ // Indicates whether this particular request should be dropped by the client
+ // for rate limiting.
+ DropForRateLimiting bool `protobuf:"varint,4,opt,name=drop_for_rate_limiting,json=dropForRateLimiting" json:"drop_for_rate_limiting,omitempty"`
+ // Indicates whether this particular request should be dropped by the client
+ // for load balancing.
+ DropForLoadBalancing bool `protobuf:"varint,5,opt,name=drop_for_load_balancing,json=dropForLoadBalancing" json:"drop_for_load_balancing,omitempty"`
+}
+
+func (m *Server) Reset() { *m = Server{} }
+func (m *Server) String() string { return proto.CompactTextString(m) }
+func (*Server) ProtoMessage() {}
+func (*Server) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
+
+func (m *Server) GetIpAddress() []byte {
+ if m != nil {
+ return m.IpAddress
+ }
+ return nil
+}
+
+func (m *Server) GetPort() int32 {
+ if m != nil {
+ return m.Port
+ }
+ return 0
+}
+
+func (m *Server) GetLoadBalanceToken() string {
+ if m != nil {
+ return m.LoadBalanceToken
+ }
+ return ""
+}
+
+func (m *Server) GetDropForRateLimiting() bool {
+ if m != nil {
+ return m.DropForRateLimiting
+ }
+ return false
+}
+
+func (m *Server) GetDropForLoadBalancing() bool {
+ if m != nil {
+ return m.DropForLoadBalancing
+ }
+ return false
+}
+
+func init() {
+ proto.RegisterType((*Duration)(nil), "grpc.lb.v1.Duration")
+ proto.RegisterType((*Timestamp)(nil), "grpc.lb.v1.Timestamp")
+ proto.RegisterType((*LoadBalanceRequest)(nil), "grpc.lb.v1.LoadBalanceRequest")
+ proto.RegisterType((*InitialLoadBalanceRequest)(nil), "grpc.lb.v1.InitialLoadBalanceRequest")
+ proto.RegisterType((*ClientStats)(nil), "grpc.lb.v1.ClientStats")
+ proto.RegisterType((*LoadBalanceResponse)(nil), "grpc.lb.v1.LoadBalanceResponse")
+ proto.RegisterType((*InitialLoadBalanceResponse)(nil), "grpc.lb.v1.InitialLoadBalanceResponse")
+ proto.RegisterType((*ServerList)(nil), "grpc.lb.v1.ServerList")
+ proto.RegisterType((*Server)(nil), "grpc.lb.v1.Server")
+}
+
+func init() { proto.RegisterFile("grpclb.proto", fileDescriptor0) }
+
+var fileDescriptor0 = []byte{
+ // 733 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x55, 0xdd, 0x4e, 0x1b, 0x39,
+ 0x14, 0x66, 0x36, 0xfc, 0xe5, 0x24, 0x5a, 0x58, 0x93, 0x85, 0xc0, 0xc2, 0x2e, 0x1b, 0xa9, 0x34,
+ 0xaa, 0x68, 0x68, 0x43, 0x7b, 0xd1, 0x9f, 0x9b, 0x02, 0x45, 0x41, 0xe5, 0xa2, 0x72, 0xa8, 0x7a,
+ 0x55, 0x59, 0x4e, 0xc6, 0x80, 0xc5, 0xc4, 0x9e, 0xda, 0x4e, 0x68, 0x2f, 0x7b, 0xd9, 0x47, 0xe9,
+ 0x63, 0x54, 0x7d, 0x86, 0xbe, 0x4f, 0x65, 0x7b, 0x26, 0x33, 0x90, 0x1f, 0xd4, 0xbb, 0xf1, 0xf1,
+ 0x77, 0xbe, 0xf3, 0xf9, 0xd8, 0xdf, 0x19, 0x28, 0x5f, 0xa8, 0xb8, 0x1b, 0x75, 0x1a, 0xb1, 0x92,
+ 0x46, 0x22, 0xb0, 0xab, 0x46, 0xd4, 0x69, 0x0c, 0x1e, 0xd7, 0x9e, 0xc3, 0xe2, 0x51, 0x5f, 0x51,
+ 0xc3, 0xa5, 0x40, 0x55, 0x58, 0xd0, 0xac, 0x2b, 0x45, 0xa8, 0xab, 0xc1, 0x76, 0x50, 0x2f, 0xe0,
+ 0x74, 0x89, 0x2a, 0x30, 0x27, 0xa8, 0x90, 0xba, 0xfa, 0xc7, 0x76, 0x50, 0x9f, 0xc3, 0x7e, 0x51,
+ 0x7b, 0x01, 0xc5, 0x33, 0xde, 0x63, 0xda, 0xd0, 0x5e, 0xfc, 0xdb, 0xc9, 0xdf, 0x03, 0x40, 0xa7,
+ 0x92, 0x86, 0x07, 0x34, 0xa2, 0xa2, 0xcb, 0x30, 0xfb, 0xd8, 0x67, 0xda, 0xa0, 0xb7, 0xb0, 0xc4,
+ 0x05, 0x37, 0x9c, 0x46, 0x44, 0xf9, 0x90, 0xa3, 0x2b, 0x35, 0xef, 0x35, 0x32, 0xd5, 0x8d, 0x13,
+ 0x0f, 0x19, 0xcd, 0x6f, 0xcd, 0xe0, 0x3f, 0x93, 0xfc, 0x94, 0xf1, 0x25, 0x94, 0xbb, 0x11, 0x67,
+ 0xc2, 0x10, 0x6d, 0xa8, 0xf1, 0x2a, 0x4a, 0xcd, 0xb5, 0x3c, 0xdd, 0xa1, 0xdb, 0x6f, 0xdb, 0xed,
+ 0xd6, 0x0c, 0x2e, 0x75, 0xb3, 0xe5, 0xc1, 0x3f, 0xb0, 0x1e, 0x49, 0x1a, 0x92, 0x8e, 0x2f, 0x93,
+ 0x8a, 0x22, 0xe6, 0x73, 0xcc, 0x6a, 0x7b, 0xb0, 0x3e, 0x51, 0x09, 0x42, 0x30, 0x2b, 0x68, 0x8f,
+ 0x39, 0xf9, 0x45, 0xec, 0xbe, 0x6b, 0x5f, 0x67, 0xa1, 0x94, 0x2b, 0x86, 0xf6, 0xa1, 0x68, 0xd2,
+ 0x0e, 0x26, 0xe7, 0xfc, 0x3b, 0x2f, 0x6c, 0xd8, 0x5e, 0x9c, 0xe1, 0xd0, 0x03, 0xf8, 0x4b, 0xf4,
+ 0x7b, 0xa4, 0x4b, 0xa3, 0x48, 0xdb, 0x33, 0x29, 0xc3, 0x42, 0x77, 0xaa, 0x02, 0x5e, 0x12, 0xfd,
+ 0xde, 0xa1, 0x8d, 0xb7, 0x7d, 0x18, 0xed, 0x02, 0xca, 0xb0, 0xe7, 0x5c, 0x70, 0x7d, 0xc9, 0xc2,
+ 0x6a, 0xc1, 0x81, 0x97, 0x53, 0xf0, 0x71, 0x12, 0x47, 0x04, 0x1a, 0xa3, 0x68, 0x72, 0xcd, 0xcd,
+ 0x25, 0x09, 0x95, 0x8c, 0xc9, 0xb9, 0x54, 0x44, 0x51, 0xc3, 0x48, 0xc4, 0x7b, 0xdc, 0x70, 0x71,
+ 0x51, 0x9d, 0x75, 0x4c, 0xf7, 0x6f, 0x33, 0xbd, 0xe7, 0xe6, 0xf2, 0x48, 0xc9, 0xf8, 0x58, 0x2a,
+ 0x4c, 0x0d, 0x3b, 0x4d, 0xe0, 0x88, 0xc2, 0xde, 0x9d, 0x05, 0x72, 0xed, 0xb6, 0x15, 0xe6, 0x5c,
+ 0x85, 0xfa, 0x94, 0x0a, 0x59, 0xef, 0x6d, 0x89, 0x0f, 0xf0, 0x70, 0x52, 0x89, 0xe4, 0x19, 0x9c,
+ 0x53, 0x1e, 0xb1, 0x90, 0x18, 0x49, 0x34, 0x13, 0x61, 0x75, 0xde, 0x15, 0xd8, 0x19, 0x57, 0xc0,
+ 0x5f, 0xd5, 0xb1, 0xc3, 0x9f, 0xc9, 0x36, 0x13, 0x21, 0x6a, 0xc1, 0xff, 0x63, 0xe8, 0xaf, 0x84,
+ 0xbc, 0x16, 0x44, 0xb1, 0x2e, 0xe3, 0x03, 0x16, 0x56, 0x17, 0x1c, 0xe5, 0xd6, 0x6d, 0xca, 0x37,
+ 0x16, 0x85, 0x13, 0x50, 0xed, 0x47, 0x00, 0x2b, 0x37, 0x9e, 0x8d, 0x8e, 0xa5, 0xd0, 0x0c, 0xb5,
+ 0x61, 0x39, 0x73, 0x80, 0x8f, 0x25, 0x4f, 0x63, 0xe7, 0x2e, 0x0b, 0x78, 0x74, 0x6b, 0x06, 0x2f,
+ 0x0d, 0x3d, 0x90, 0x90, 0x3e, 0x83, 0x92, 0x66, 0x6a, 0xc0, 0x14, 0x89, 0xb8, 0x36, 0x89, 0x07,
+ 0x56, 0xf3, 0x7c, 0x6d, 0xb7, 0x7d, 0xca, 0x9d, 0x87, 0x40, 0x0f, 0x57, 0x07, 0x9b, 0xb0, 0x71,
+ 0xcb, 0x01, 0x9e, 0xd3, 0x5b, 0xe0, 0x5b, 0x00, 0x1b, 0x93, 0xa5, 0xa0, 0x27, 0xb0, 0x9a, 0x4f,
+ 0x56, 0x24, 0x64, 0x11, 0xbb, 0xa0, 0x26, 0xb5, 0x45, 0x25, 0xca, 0x92, 0xd4, 0x51, 0xb2, 0x87,
+ 0xde, 0xc1, 0x66, 0xde, 0xb2, 0x44, 0xb1, 0x58, 0x2a, 0x43, 0xb8, 0x30, 0x4c, 0x0d, 0x68, 0x94,
+ 0xc8, 0xaf, 0xe4, 0xe5, 0xa7, 0x43, 0x0c, 0xaf, 0xe7, 0xdc, 0x8b, 0x5d, 0xde, 0x49, 0x92, 0x56,
+ 0xfb, 0x12, 0x00, 0x64, 0xc7, 0x44, 0xbb, 0x76, 0x62, 0xd9, 0x95, 0x9d, 0x58, 0x85, 0x7a, 0xa9,
+ 0x89, 0x46, 0xfb, 0x81, 0x53, 0x08, 0x7a, 0x0d, 0x2b, 0xec, 0x53, 0xcc, 0x7d, 0x95, 0x4c, 0x4a,
+ 0x61, 0x8a, 0x14, 0x94, 0x25, 0x0c, 0x35, 0xfc, 0x0c, 0x60, 0xde, 0x53, 0xa3, 0x2d, 0x00, 0x1e,
+ 0x13, 0x1a, 0x86, 0x8a, 0x69, 0x3f, 0x34, 0xcb, 0xb8, 0xc8, 0xe3, 0x57, 0x3e, 0x60, 0xe7, 0x87,
+ 0x55, 0x9f, 0x4c, 0x4d, 0xf7, 0x6d, 0xed, 0x7c, 0xe3, 0x2e, 0x8c, 0xbc, 0x62, 0xc2, 0x69, 0x28,
+ 0xe2, 0xe5, 0x5c, 0x2b, 0xcf, 0x6c, 0x1c, 0xed, 0xc3, 0xea, 0x14, 0xdb, 0x2e, 0xe2, 0x95, 0x70,
+ 0x8c, 0x45, 0x9f, 0xc2, 0xda, 0x34, 0x2b, 0x2e, 0xe2, 0x4a, 0x38, 0xc6, 0x76, 0xcd, 0x0e, 0x94,
+ 0x73, 0xf7, 0xaf, 0x10, 0x86, 0x52, 0xf2, 0x6d, 0xc3, 0xe8, 0xdf, 0x7c, 0x83, 0x46, 0x87, 0xe5,
+ 0xc6, 0x7f, 0x13, 0xf7, 0xfd, 0x43, 0xaa, 0x07, 0x8f, 0x82, 0xce, 0xbc, 0xfb, 0x7d, 0xed, 0xff,
+ 0x0a, 0x00, 0x00, 0xff, 0xff, 0x64, 0xbf, 0xda, 0x5e, 0xce, 0x06, 0x00, 0x00,
+}
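
The oneof pattern generated above is easy to misread: LoadBalanceRequestType holds exactly one wrapper struct, and each Get* accessor returns nil when a different member is set. A short sketch against this generated package:

package main

import (
	"fmt"

	lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
)

func main() {
	// Populate the oneof by choosing one wrapper type.
	req := &lbpb.LoadBalanceRequest{
		LoadBalanceRequestType: &lbpb.LoadBalanceRequest_InitialRequest{
			InitialRequest: &lbpb.InitialLoadBalanceRequest{Name: "my-service"},
		},
	}

	// The accessors return nil for the members that are not set.
	if ir := req.GetInitialRequest(); ir != nil {
		fmt.Println("initial request for:", ir.GetName()) // my-service
	}
	fmt.Println("client stats set:", req.GetClientStats() != nil) // false
}
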
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.proto b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.proto
new file mode 100644
index 00000000..b13b3438
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclb/grpc_lb_v1/grpclb.proto
@@ -0,0 +1,164 @@
+// Copyright 2016 gRPC authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+syntax = "proto3";
+
+package grpc.lb.v1;
+
+message Duration {
+ // Signed seconds of the span of time. Must be from -315,576,000,000
+ // to +315,576,000,000 inclusive.
+ int64 seconds = 1;
+
+ // Signed fractions of a second at nanosecond resolution of the span
+ // of time. Durations less than one second are represented with a 0
+ // `seconds` field and a positive or negative `nanos` field. For durations
+ // of one second or more, a non-zero value for the `nanos` field must be
+ // of the same sign as the `seconds` field. Must be from -999,999,999
+ // to +999,999,999 inclusive.
+ int32 nanos = 2;
+}
+
+message Timestamp {
+
+ // Represents seconds of UTC time since Unix epoch
+ // 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
+ // 9999-12-31T23:59:59Z inclusive.
+ int64 seconds = 1;
+
+ // Non-negative fractions of a second at nanosecond resolution. Negative
+ // second values with fractions must still have non-negative nanos values
+ // that count forward in time. Must be from 0 to 999,999,999
+ // inclusive.
+ int32 nanos = 2;
+}
+
+service LoadBalancer {
+  // Bidirectional RPC to get a list of servers.
+ rpc BalanceLoad(stream LoadBalanceRequest)
+ returns (stream LoadBalanceResponse);
+}
+
+message LoadBalanceRequest {
+ oneof load_balance_request_type {
+ // This message should be sent on the first request to the load balancer.
+ InitialLoadBalanceRequest initial_request = 1;
+
+ // The client stats should be periodically reported to the load balancer
+ // based on the duration defined in the InitialLoadBalanceResponse.
+ ClientStats client_stats = 2;
+ }
+}
+
+message InitialLoadBalanceRequest {
+  // Name of the load balanced service (e.g., balancer.service.com).
+  // Its length should be less than 256 bytes.
+ string name = 1;
+}
+
+// Contains client-level statistics that are useful for load balancing. Each
+// count except the timestamp should be reset to zero after reporting the stats.
+message ClientStats {
+ // The timestamp of generating the report.
+ Timestamp timestamp = 1;
+
+ // The total number of RPCs that started.
+ int64 num_calls_started = 2;
+
+ // The total number of RPCs that finished.
+ int64 num_calls_finished = 3;
+
+ // The total number of RPCs that were dropped by the client because of rate
+ // limiting.
+ int64 num_calls_finished_with_drop_for_rate_limiting = 4;
+
+ // The total number of RPCs that were dropped by the client because of load
+ // balancing.
+ int64 num_calls_finished_with_drop_for_load_balancing = 5;
+
+ // The total number of RPCs that failed to reach a server except dropped RPCs.
+ int64 num_calls_finished_with_client_failed_to_send = 6;
+
+ // The total number of RPCs that finished and are known to have been received
+ // by a server.
+ int64 num_calls_finished_known_received = 7;
+}
+
+message LoadBalanceResponse {
+ oneof load_balance_response_type {
+ // This message should be sent on the first response to the client.
+ InitialLoadBalanceResponse initial_response = 1;
+
+ // Contains the list of servers selected by the load balancer. The client
+ // should send requests to these servers in the specified order.
+ ServerList server_list = 2;
+ }
+}
+
+message InitialLoadBalanceResponse {
+ // This is an application layer redirect that indicates the client should use
+ // the specified server for load balancing. When this field is non-empty in
+ // the response, the client should open a separate connection to the
+ // load_balancer_delegate and call the BalanceLoad method. Its length should
+ // be less than 64 bytes.
+ string load_balancer_delegate = 1;
+
+ // This interval defines how often the client should send the client stats
+ // to the load balancer. Stats should only be reported when the duration is
+ // positive.
+ Duration client_stats_report_interval = 2;
+}
+
+message ServerList {
+ // Contains a list of servers selected by the load balancer. The list will
+ // be updated when server resolutions change or as needed to balance load
+ // across more servers. The client should consume the server list in order
+ // unless instructed otherwise via the client_config.
+ repeated Server servers = 1;
+
+ // Indicates the amount of time that the client should consider this server
+ // list as valid. It may be considered stale after waiting this interval of
+ // time after receiving the list. If the interval is not positive, the
+ // client can assume the list is valid until the next list is received.
+ Duration expiration_interval = 3;
+}
+
+// Contains server information. When none of the [drop_for_*] fields are true,
+// use the other fields. When drop_for_rate_limiting is true, ignore all other
+// fields. Use drop_for_load_balancing only when it is true and
+// drop_for_rate_limiting is false.
+message Server {
+ // A resolved address for the server, serialized in network-byte-order. It may
+ // either be an IPv4 or IPv6 address.
+ bytes ip_address = 1;
+
+ // A resolved port number for the server.
+ int32 port = 2;
+
+ // An opaque but printable token given to the frontend for each pick. All
+ // frontend requests for that pick must include the token in its initial
+ // metadata. The token is used by the backend to verify the request and to
+ // allow the backend to report load to the gRPC LB system.
+ //
+ // Its length is variable but less than 50 bytes.
+ string load_balance_token = 3;
+
+ // Indicates whether this particular request should be dropped by the client
+ // for rate limiting.
+ bool drop_for_rate_limiting = 4;
+
+ // Indicates whether this particular request should be dropped by the client
+ // for load balancing.
+ bool drop_for_load_balancing = 5;
+}
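
On the Go side, expiration_interval maps naturally onto time.Duration. A sketch of the conversion — toDuration mirrors the unexported convertDuration helper in grpclb.go and is redefined here only for illustration:

package main

import (
	"fmt"
	"time"

	lbpb "google.golang.org/grpc/grpclb/grpc_lb_v1"
)

// toDuration converts the proto Duration; a nil or non-positive interval
// means the server list stays valid until the next one arrives.
func toDuration(d *lbpb.Duration) time.Duration {
	if d == nil {
		return 0
	}
	return time.Duration(d.Seconds)*time.Second + time.Duration(d.Nanos)*time.Nanosecond
}

func main() {
	sl := &lbpb.ServerList{
		ExpirationInterval: &lbpb.Duration{Seconds: 30},
	}
	if exp := toDuration(sl.GetExpirationInterval()); exp > 0 {
		fmt.Println("server list expires in", exp) // 30s
	}
}
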
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/grpclog.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/grpclog.go
new file mode 100644
index 00000000..16a7d888
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/grpclog.go
@@ -0,0 +1,123 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Package grpclog defines logging for grpc.
+//
+// All logs in the transport package go only to verbose level 2.
+// All logs in other grpc packages are logged regardless of the verbosity level.
+//
+// In the default logger,
+// severity level can be set by environment variable GRPC_GO_LOG_SEVERITY_LEVEL,
+// verbosity level can be set by GRPC_GO_LOG_VERBOSITY_LEVEL.
+package grpclog // import "google.golang.org/grpc/grpclog"
+
+import "os"
+
+var logger = newLoggerV2()
+
+// V reports whether verbosity level l is at least the requested verbose level.
+func V(l int) bool {
+ return logger.V(l)
+}
+
+// Info logs to the INFO log.
+func Info(args ...interface{}) {
+ logger.Info(args...)
+}
+
+// Infof logs to the INFO log. Arguments are handled in the manner of fmt.Printf.
+func Infof(format string, args ...interface{}) {
+ logger.Infof(format, args...)
+}
+
+// Infoln logs to the INFO log. Arguments are handled in the manner of fmt.Println.
+func Infoln(args ...interface{}) {
+ logger.Infoln(args...)
+}
+
+// Warning logs to the WARNING log.
+func Warning(args ...interface{}) {
+ logger.Warning(args...)
+}
+
+// Warningf logs to the WARNING log. Arguments are handled in the manner of fmt.Printf.
+func Warningf(format string, args ...interface{}) {
+ logger.Warningf(format, args...)
+}
+
+// Warningln logs to the WARNING log. Arguments are handled in the manner of fmt.Println.
+func Warningln(args ...interface{}) {
+ logger.Warningln(args...)
+}
+
+// Error logs to the ERROR log.
+func Error(args ...interface{}) {
+ logger.Error(args...)
+}
+
+// Errorf logs to the ERROR log. Arguments are handled in the manner of fmt.Printf.
+func Errorf(format string, args ...interface{}) {
+ logger.Errorf(format, args...)
+}
+
+// Errorln logs to the ERROR log. Arguments are handled in the manner of fmt.Println.
+func Errorln(args ...interface{}) {
+ logger.Errorln(args...)
+}
+
+// Fatal logs to the FATAL log. Arguments are handled in the manner of fmt.Print.
+// It calls os.Exit() with exit code 1.
+func Fatal(args ...interface{}) {
+ logger.Fatal(args...)
+ // Make sure fatal logs will exit.
+ os.Exit(1)
+}
+
+// Fatalf logs to the FATAL log. Arguments are handled in the manner of fmt.Printf.
+// It calls os.Exit() with exit code 1.
+func Fatalf(format string, args ...interface{}) {
+ logger.Fatalf(format, args...)
+ // Make sure fatal logs will exit.
+ os.Exit(1)
+}
+
+// Fatalln logs to the FATAL log. Arguments are handled in the manner of fmt.Println.
+// It calls os.Exit() with exit code 1.
+func Fatalln(args ...interface{}) {
+ logger.Fatalln(args...)
+ // Make sure fatal logs will exit.
+ os.Exit(1)
+}
+
+// Print prints to the logger. Arguments are handled in the manner of fmt.Print.
+// Deprecated: use Info.
+func Print(args ...interface{}) {
+ logger.Info(args...)
+}
+
+// Printf prints to the logger. Arguments are handled in the manner of fmt.Printf.
+// Deprecated: use Infof.
+func Printf(format string, args ...interface{}) {
+ logger.Infof(format, args...)
+}
+
+// Println prints to the logger. Arguments are handled in the manner of fmt.Println.
+// Deprecated: use Infoln.
+func Println(args ...interface{}) {
+ logger.Infoln(args...)
+}
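
As a rough usage sketch of the package documented above (not part of the diff): the helpers write through a default logger whose severity and verbosity are read from the environment at package init, so in a real program the variables must be exported before startup.

package main

import (
	"os"

	"google.golang.org/grpc/grpclog"
)

func main() {
	// NOTE: the default logger reads these variables once, at package
	// init time, so in practice they must be set in the process
	// environment before the program starts; calling Setenv here is
	// only to keep the sketch self-contained.
	os.Setenv("GRPC_GO_LOG_SEVERITY_LEVEL", "INFO")
	os.Setenv("GRPC_GO_LOG_VERBOSITY_LEVEL", "2")

	grpclog.Info("with severity INFO, every level reaches stderr")
	if grpclog.V(2) {
		grpclog.Infof("guarded by V(%d) to avoid building noisy output", 2)
	}
}
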
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/logger.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/logger.go
index 3b293307..d03b2397 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/logger.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/logger.go
@@ -1,52 +1,25 @@
/*
*
- * Copyright 2015, Google Inc.
- * All rights reserved.
+ * Copyright 2015 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
-/*
-Package grpclog defines logging for grpc.
-*/
-package grpclog // import "google.golang.org/grpc/grpclog"
-
-import (
- "log"
- "os"
-)
-
-// Use golang's standard logger by default.
-// Access is not mutex-protected: do not modify except in init()
-// functions.
-var logger Logger = log.New(os.Stderr, "", log.LstdFlags)
+package grpclog
// Logger mimics golang's standard Logger as an interface.
+// Deprecated: use LoggerV2.
type Logger interface {
Fatal(args ...interface{})
Fatalf(format string, args ...interface{})
@@ -58,36 +31,53 @@ type Logger interface {
// SetLogger sets the logger that is used in grpc. Call only from
// init() functions.
+// Deprecated: use SetLoggerV2.
func SetLogger(l Logger) {
- logger = l
+ logger = &loggerWrapper{Logger: l}
+}
+
+// loggerWrapper wraps Logger into a LoggerV2.
+type loggerWrapper struct {
+ Logger
+}
+
+func (g *loggerWrapper) Info(args ...interface{}) {
+ g.Logger.Print(args...)
+}
+
+func (g *loggerWrapper) Infoln(args ...interface{}) {
+ g.Logger.Println(args...)
+}
+
+func (g *loggerWrapper) Infof(format string, args ...interface{}) {
+ g.Logger.Printf(format, args...)
+}
+
+func (g *loggerWrapper) Warning(args ...interface{}) {
+ g.Logger.Print(args...)
}
-// Fatal is equivalent to Print() followed by a call to os.Exit() with a non-zero exit code.
-func Fatal(args ...interface{}) {
- logger.Fatal(args...)
+func (g *loggerWrapper) Warningln(args ...interface{}) {
+ g.Logger.Println(args...)
}
-// Fatalf is equivalent to Printf() followed by a call to os.Exit() with a non-zero exit code.
-func Fatalf(format string, args ...interface{}) {
- logger.Fatalf(format, args...)
+func (g *loggerWrapper) Warningf(format string, args ...interface{}) {
+ g.Logger.Printf(format, args...)
}
-// Fatalln is equivalent to Println() followed by a call to os.Exit()) with a non-zero exit code.
-func Fatalln(args ...interface{}) {
- logger.Fatalln(args...)
+func (g *loggerWrapper) Error(args ...interface{}) {
+ g.Logger.Print(args...)
}
-// Print prints to the logger. Arguments are handled in the manner of fmt.Print.
-func Print(args ...interface{}) {
- logger.Print(args...)
+func (g *loggerWrapper) Errorln(args ...interface{}) {
+ g.Logger.Println(args...)
}
-// Printf prints to the logger. Arguments are handled in the manner of fmt.Printf.
-func Printf(format string, args ...interface{}) {
- logger.Printf(format, args...)
+func (g *loggerWrapper) Errorf(format string, args ...interface{}) {
+ g.Logger.Printf(format, args...)
}
-// Println prints to the logger. Arguments are handled in the manner of fmt.Println.
-func Println(args ...interface{}) {
- logger.Println(args...)
+func (g *loggerWrapper) V(l int) bool {
+ // Returns true for all verbose levels.
+ return true
}
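
For illustration, a legacy Logger can still be installed through the deprecated SetLogger; the wrapper above then routes Info/Warning/Error traffic to the Print family and reports every verbosity level as enabled. A standard library *log.Logger already satisfies the old interface:

package main

import (
	"log"
	"os"

	"google.golang.org/grpc/grpclog"
)

func main() {
	// log.Logger provides the Fatal* and Print* methods required by the
	// deprecated grpclog.Logger interface, so it can be wrapped directly.
	grpclog.SetLogger(log.New(os.Stderr, "legacy: ", log.LstdFlags))

	grpclog.Warning("routed through loggerWrapper to Print")
	// loggerWrapper.V always returns true, so this branch is taken.
	if grpclog.V(99) {
		grpclog.Info("all verbosity levels pass through a wrapped Logger")
	}
}
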
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/loggerv2.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/loggerv2.go
new file mode 100644
index 00000000..d4932577
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/grpclog/loggerv2.go
@@ -0,0 +1,195 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpclog
+
+import (
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+ "strconv"
+)
+
+// LoggerV2 does underlying logging work for grpclog.
+type LoggerV2 interface {
+ // Info logs to INFO log. Arguments are handled in the manner of fmt.Print.
+ Info(args ...interface{})
+ // Infoln logs to INFO log. Arguments are handled in the manner of fmt.Println.
+ Infoln(args ...interface{})
+ // Infof logs to INFO log. Arguments are handled in the manner of fmt.Printf.
+ Infof(format string, args ...interface{})
+ // Warning logs to WARNING log. Arguments are handled in the manner of fmt.Print.
+ Warning(args ...interface{})
+ // Warningln logs to WARNING log. Arguments are handled in the manner of fmt.Println.
+ Warningln(args ...interface{})
+ // Warningf logs to WARNING log. Arguments are handled in the manner of fmt.Printf.
+ Warningf(format string, args ...interface{})
+ // Error logs to ERROR log. Arguments are handled in the manner of fmt.Print.
+ Error(args ...interface{})
+ // Errorln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
+ Errorln(args ...interface{})
+ // Errorf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
+ Errorf(format string, args ...interface{})
+ // Fatal logs to ERROR log. Arguments are handled in the manner of fmt.Print.
+ // gRPC ensures that all Fatal logs will exit with os.Exit(1).
+ // Implementations may also call os.Exit() with a non-zero exit code.
+ Fatal(args ...interface{})
+ // Fatalln logs to ERROR log. Arguments are handled in the manner of fmt.Println.
+ // gRPC ensures that all Fatal logs will exit with os.Exit(1).
+ // Implementations may also call os.Exit() with a non-zero exit code.
+ Fatalln(args ...interface{})
+ // Fatalf logs to ERROR log. Arguments are handled in the manner of fmt.Printf.
+ // gRPC ensures that all Fatal logs will exit with os.Exit(1).
+ // Implementations may also call os.Exit() with a non-zero exit code.
+ Fatalf(format string, args ...interface{})
+ // V reports whether verbosity level l is at least the requested verbose level.
+ V(l int) bool
+}
+
+// SetLoggerV2 sets the logger that is used in grpc to a V2 logger.
+// Not mutex-protected, should be called before any gRPC functions.
+func SetLoggerV2(l LoggerV2) {
+ logger = l
+}
+
+const (
+ // infoLog indicates Info severity.
+ infoLog int = iota
+ // warningLog indicates Warning severity.
+ warningLog
+ // errorLog indicates Error severity.
+ errorLog
+ // fatalLog indicates Fatal severity.
+ fatalLog
+)
+
+// severityName contains the string representation of each severity.
+var severityName = []string{
+ infoLog: "INFO",
+ warningLog: "WARNING",
+ errorLog: "ERROR",
+ fatalLog: "FATAL",
+}
+
+// loggerT is the default logger used by grpclog.
+type loggerT struct {
+ m []*log.Logger
+ v int
+}
+
+// NewLoggerV2 creates a loggerV2 with the provided writers.
+// Fatal logs will be written to errorW, warningW, infoW, followed by exit(1).
+// Error logs will be written to errorW, warningW and infoW.
+// Warning logs will be written to warningW and infoW.
+// Info logs will be written to infoW.
+func NewLoggerV2(infoW, warningW, errorW io.Writer) LoggerV2 {
+ return NewLoggerV2WithVerbosity(infoW, warningW, errorW, 0)
+}
+
+// NewLoggerV2WithVerbosity creates a loggerV2 with the provided writers and
+// verbosity level.
+func NewLoggerV2WithVerbosity(infoW, warningW, errorW io.Writer, v int) LoggerV2 {
+ var m []*log.Logger
+ m = append(m, log.New(infoW, severityName[infoLog]+": ", log.LstdFlags))
+ m = append(m, log.New(io.MultiWriter(infoW, warningW), severityName[warningLog]+": ", log.LstdFlags))
+ ew := io.MultiWriter(infoW, warningW, errorW) // ew will be used for error and fatal.
+ m = append(m, log.New(ew, severityName[errorLog]+": ", log.LstdFlags))
+ m = append(m, log.New(ew, severityName[fatalLog]+": ", log.LstdFlags))
+ return &loggerT{m: m, v: v}
+}
+
+// newLoggerV2 creates a loggerV2 to be used as the default logger.
+// All logs are written to stderr.
+func newLoggerV2() LoggerV2 {
+ errorW := ioutil.Discard
+ warningW := ioutil.Discard
+ infoW := ioutil.Discard
+
+ logLevel := os.Getenv("GRPC_GO_LOG_SEVERITY_LEVEL")
+ switch logLevel {
+ case "", "ERROR", "error": // If env is unset, set level to ERROR.
+ errorW = os.Stderr
+ case "WARNING", "warning":
+ warningW = os.Stderr
+ case "INFO", "info":
+ infoW = os.Stderr
+ }
+
+ var v int
+ vLevel := os.Getenv("GRPC_GO_LOG_VERBOSITY_LEVEL")
+ if vl, err := strconv.Atoi(vLevel); err == nil {
+ v = vl
+ }
+ return NewLoggerV2WithVerbosity(infoW, warningW, errorW, v)
+}
+
+func (g *loggerT) Info(args ...interface{}) {
+ g.m[infoLog].Print(args...)
+}
+
+func (g *loggerT) Infoln(args ...interface{}) {
+ g.m[infoLog].Println(args...)
+}
+
+func (g *loggerT) Infof(format string, args ...interface{}) {
+ g.m[infoLog].Printf(format, args...)
+}
+
+func (g *loggerT) Warning(args ...interface{}) {
+ g.m[warningLog].Print(args...)
+}
+
+func (g *loggerT) Warningln(args ...interface{}) {
+ g.m[warningLog].Println(args...)
+}
+
+func (g *loggerT) Warningf(format string, args ...interface{}) {
+ g.m[warningLog].Printf(format, args...)
+}
+
+func (g *loggerT) Error(args ...interface{}) {
+ g.m[errorLog].Print(args...)
+}
+
+func (g *loggerT) Errorln(args ...interface{}) {
+ g.m[errorLog].Println(args...)
+}
+
+func (g *loggerT) Errorf(format string, args ...interface{}) {
+ g.m[errorLog].Printf(format, args...)
+}
+
+func (g *loggerT) Fatal(args ...interface{}) {
+ g.m[fatalLog].Fatal(args...)
+ // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit().
+}
+
+func (g *loggerT) Fatalln(args ...interface{}) {
+ g.m[fatalLog].Fatalln(args...)
+ // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit().
+}
+
+func (g *loggerT) Fatalf(format string, args ...interface{}) {
+ g.m[fatalLog].Fatalf(format, args...)
+ // No need to call os.Exit() again because log.Logger.Fatal() calls os.Exit().
+}
+
+func (g *loggerT) V(l int) bool {
+ return l <= g.v
+}
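
A direct-construction sketch of the fan-out just described: because warning and error output are mirrored into the info writer, handing os.Stdout to the info slot captures every severity (the discarded writers and verbosity 2 are arbitrary choices):

package main

import (
	"io/ioutil"
	"os"

	"google.golang.org/grpc/grpclog"
)

func main() {
	// The info writer is included in the warning and error MultiWriters,
	// so os.Stdout here receives every severity; the dedicated warning
	// and error writers are discarded in this sketch.
	l := grpclog.NewLoggerV2WithVerbosity(os.Stdout, ioutil.Discard, ioutil.Discard, 2)
	grpclog.SetLoggerV2(l)

	grpclog.Info("written to stdout")
	grpclog.Error("also mirrored into the info writer")

	grpclog.Infoln("V(2):", grpclog.V(2)) // true: 2 <= configured verbosity
	grpclog.Infoln("V(3):", grpclog.V(3)) // false
}
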
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go
new file mode 100644
index 00000000..89c4d459
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.pb.go
@@ -0,0 +1,176 @@
+// Code generated by protoc-gen-go.
+// source: health.proto
+// DO NOT EDIT!
+
+/*
+Package grpc_health_v1 is a generated protocol buffer package.
+
+It is generated from these files:
+ health.proto
+
+It has these top-level messages:
+ HealthCheckRequest
+ HealthCheckResponse
+*/
+package grpc_health_v1
+
+import proto "github.com/golang/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+import (
+ context "golang.org/x/net/context"
+ grpc "google.golang.org/grpc"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+
+type HealthCheckResponse_ServingStatus int32
+
+const (
+ HealthCheckResponse_UNKNOWN HealthCheckResponse_ServingStatus = 0
+ HealthCheckResponse_SERVING HealthCheckResponse_ServingStatus = 1
+ HealthCheckResponse_NOT_SERVING HealthCheckResponse_ServingStatus = 2
+)
+
+var HealthCheckResponse_ServingStatus_name = map[int32]string{
+ 0: "UNKNOWN",
+ 1: "SERVING",
+ 2: "NOT_SERVING",
+}
+var HealthCheckResponse_ServingStatus_value = map[string]int32{
+ "UNKNOWN": 0,
+ "SERVING": 1,
+ "NOT_SERVING": 2,
+}
+
+func (x HealthCheckResponse_ServingStatus) String() string {
+ return proto.EnumName(HealthCheckResponse_ServingStatus_name, int32(x))
+}
+func (HealthCheckResponse_ServingStatus) EnumDescriptor() ([]byte, []int) {
+ return fileDescriptor0, []int{1, 0}
+}
+
+type HealthCheckRequest struct {
+ Service string `protobuf:"bytes,1,opt,name=service" json:"service,omitempty"`
+}
+
+func (m *HealthCheckRequest) Reset() { *m = HealthCheckRequest{} }
+func (m *HealthCheckRequest) String() string { return proto.CompactTextString(m) }
+func (*HealthCheckRequest) ProtoMessage() {}
+func (*HealthCheckRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+
+type HealthCheckResponse struct {
+ Status HealthCheckResponse_ServingStatus `protobuf:"varint,1,opt,name=status,enum=grpc.health.v1.HealthCheckResponse_ServingStatus" json:"status,omitempty"`
+}
+
+func (m *HealthCheckResponse) Reset() { *m = HealthCheckResponse{} }
+func (m *HealthCheckResponse) String() string { return proto.CompactTextString(m) }
+func (*HealthCheckResponse) ProtoMessage() {}
+func (*HealthCheckResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+
+func init() {
+ proto.RegisterType((*HealthCheckRequest)(nil), "grpc.health.v1.HealthCheckRequest")
+ proto.RegisterType((*HealthCheckResponse)(nil), "grpc.health.v1.HealthCheckResponse")
+ proto.RegisterEnum("grpc.health.v1.HealthCheckResponse_ServingStatus", HealthCheckResponse_ServingStatus_name, HealthCheckResponse_ServingStatus_value)
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// Client API for Health service
+
+type HealthClient interface {
+ Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error)
+}
+
+type healthClient struct {
+ cc *grpc.ClientConn
+}
+
+func NewHealthClient(cc *grpc.ClientConn) HealthClient {
+ return &healthClient{cc}
+}
+
+func (c *healthClient) Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) {
+ out := new(HealthCheckResponse)
+ err := grpc.Invoke(ctx, "/grpc.health.v1.Health/Check", in, out, c.cc, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// Server API for Health service
+
+type HealthServer interface {
+ Check(context.Context, *HealthCheckRequest) (*HealthCheckResponse, error)
+}
+
+func RegisterHealthServer(s *grpc.Server, srv HealthServer) {
+ s.RegisterService(&_Health_serviceDesc, srv)
+}
+
+func _Health_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(HealthCheckRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(HealthServer).Check(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/grpc.health.v1.Health/Check",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(HealthServer).Check(ctx, req.(*HealthCheckRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _Health_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "grpc.health.v1.Health",
+ HandlerType: (*HealthServer)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Check",
+ Handler: _Health_Check_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "health.proto",
+}
+
+func init() { proto.RegisterFile("health.proto", fileDescriptor0) }
+
+var fileDescriptor0 = []byte{
+ // 204 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xc9, 0x48, 0x4d, 0xcc,
+ 0x29, 0xc9, 0xd0, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x4b, 0x2f, 0x2a, 0x48, 0xd6, 0x83,
+ 0x0a, 0x95, 0x19, 0x2a, 0xe9, 0x71, 0x09, 0x79, 0x80, 0x39, 0xce, 0x19, 0xa9, 0xc9, 0xd9, 0x41,
+ 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25, 0x42, 0x12, 0x5c, 0xec, 0xc5, 0xa9, 0x45, 0x65, 0x99, 0xc9,
+ 0xa9, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x30, 0xae, 0xd2, 0x1c, 0x46, 0x2e, 0x61, 0x14,
+ 0x0d, 0xc5, 0x05, 0xf9, 0x79, 0xc5, 0xa9, 0x42, 0x9e, 0x5c, 0x6c, 0xc5, 0x25, 0x89, 0x25, 0xa5,
+ 0xc5, 0x60, 0x0d, 0x7c, 0x46, 0x86, 0x7a, 0xa8, 0x16, 0xe9, 0x61, 0xd1, 0xa4, 0x17, 0x0c, 0x32,
+ 0x34, 0x2f, 0x3d, 0x18, 0xac, 0x31, 0x08, 0x6a, 0x80, 0x92, 0x15, 0x17, 0x2f, 0x8a, 0x84, 0x10,
+ 0x37, 0x17, 0x7b, 0xa8, 0x9f, 0xb7, 0x9f, 0x7f, 0xb8, 0x9f, 0x00, 0x03, 0x88, 0x13, 0xec, 0x1a,
+ 0x14, 0xe6, 0xe9, 0xe7, 0x2e, 0xc0, 0x28, 0xc4, 0xcf, 0xc5, 0xed, 0xe7, 0x1f, 0x12, 0x0f, 0x13,
+ 0x60, 0x32, 0x8a, 0xe2, 0x62, 0x83, 0x58, 0x24, 0x14, 0xc0, 0xc5, 0x0a, 0xb6, 0x4c, 0x48, 0x09,
+ 0xaf, 0x4b, 0xc0, 0xfe, 0x95, 0x52, 0x26, 0xc2, 0xb5, 0x49, 0x6c, 0xe0, 0x10, 0x34, 0x06, 0x04,
+ 0x00, 0x00, 0xff, 0xff, 0xac, 0x56, 0x2a, 0xcb, 0x51, 0x01, 0x00, 0x00,
+}
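
For orientation, a hedged client-side sketch of calling the generated Check RPC (the target address is a placeholder, and grpc.WithInsecure is used only to keep the sketch self-contained):

package main

import (
	"fmt"
	"log"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := healthpb.NewHealthClient(conn)
	// An empty Service name asks about the server's overall health.
	resp, err := client.Check(context.Background(), &healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		log.Fatalf("health check: %v", err)
	}
	fmt.Println("status:", resp.Status)
}
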
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.proto b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.proto
new file mode 100644
index 00000000..6072fdc3
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/grpc_health_v1/health.proto
@@ -0,0 +1,34 @@
+// Copyright 2017 gRPC authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+syntax = "proto3";
+
+package grpc.health.v1;
+
+message HealthCheckRequest {
+ string service = 1;
+}
+
+message HealthCheckResponse {
+ enum ServingStatus {
+ UNKNOWN = 0;
+ SERVING = 1;
+ NOT_SERVING = 2;
+ }
+ ServingStatus status = 1;
+}
+
+service Health {
+ rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/health.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/health.go
new file mode 100644
index 00000000..4dccbc76
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/health/health.go
@@ -0,0 +1,70 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Package health provides some utility functions to health-check a server. The implementation
+// is based on protobuf. Users need to write their own implementations if other IDLs are used.
+package health
+
+import (
+ "sync"
+
+ "golang.org/x/net/context"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ healthpb "google.golang.org/grpc/health/grpc_health_v1"
+)
+
+// Server implements `service Health`.
+type Server struct {
+ mu sync.Mutex
+ // statusMap stores the serving status of the services this Server monitors.
+ statusMap map[string]healthpb.HealthCheckResponse_ServingStatus
+}
+
+// NewServer returns a new Server.
+func NewServer() *Server {
+ return &Server{
+ statusMap: make(map[string]healthpb.HealthCheckResponse_ServingStatus),
+ }
+}
+
+// Check implements `service Health`.
+func (s *Server) Check(ctx context.Context, in *healthpb.HealthCheckRequest) (*healthpb.HealthCheckResponse, error) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if in.Service == "" {
+ // check the server's overall health status.
+ return &healthpb.HealthCheckResponse{
+ Status: healthpb.HealthCheckResponse_SERVING,
+ }, nil
+ }
+ if status, ok := s.statusMap[in.Service]; ok {
+ return &healthpb.HealthCheckResponse{
+ Status: status,
+ }, nil
+ }
+ return nil, grpc.Errorf(codes.NotFound, "unknown service")
+}
+
+// SetServingStatus is called when the serving status of a service needs to be
+// reset, or when a new service entry is inserted into the statusMap.
+func (s *Server) SetServingStatus(service string, status healthpb.HealthCheckResponse_ServingStatus) {
+ s.mu.Lock()
+ s.statusMap[service] = status
+ s.mu.Unlock()
+}
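
And the server-side counterpart, sketched under the same assumptions (placeholder port, hypothetical service name); per the implementation above, Check returns codes.NotFound for names never passed to SetServingStatus:

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	lis, err := net.Listen("tcp", ":50051") // placeholder port
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	s := grpc.NewServer()
	hs := health.NewServer()
	healthpb.RegisterHealthServer(s, hs)

	// Mark a hypothetical service as serving; unknown names yield NotFound.
	hs.SetServingStatus("my.package.MyService", healthpb.HealthCheckResponse_SERVING)

	if err := s.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
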
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/interceptor.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/interceptor.go
index 8d932efe..06dc825b 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/interceptor.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/interceptor.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -40,17 +25,17 @@ import (
// UnaryInvoker is called by UnaryClientInterceptor to complete RPCs.
type UnaryInvoker func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error
-// UnaryClientInterceptor intercepts the execution of a unary RPC on the client. inovker is the handler to complete the RPC
+// UnaryClientInterceptor intercepts the execution of a unary RPC on the client. invoker is the handler to complete the RPC
// and it is the responsibility of the interceptor to call it.
-// This is the EXPERIMENTAL API.
+// This is an EXPERIMENTAL API.
type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error
// Streamer is called by StreamClientInterceptor to create a ClientStream.
type Streamer func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (ClientStream, error)
// StreamClientInterceptor intercepts the creation of ClientStream. It may return a custom ClientStream to intercept all I/O
-// operations. streamer is the handlder to create a ClientStream and it is the responsibility of the interceptor to call it.
-// This is the EXPERIMENTAL API.
+// operations. streamer is the handler to create a ClientStream and it is the responsibility of the interceptor to call it.
+// This is an EXPERIMENTAL API.
type StreamClientInterceptor func(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, streamer Streamer, opts ...CallOption) (ClientStream, error)
// UnaryServerInfo consists of various information about a unary RPC on
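
The client interceptor contract above (still an EXPERIMENTAL API here) is easiest to see in a sketch. This hypothetical timing interceptor forwards the call by invoking the supplied invoker, which is its responsibility per the doc comment; the dial target is a placeholder:

package main

import (
	"log"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
)

// timingInterceptor satisfies grpc.UnaryClientInterceptor: it wraps the RPC
// and must eventually call the provided invoker to complete it.
func timingInterceptor(ctx context.Context, method string, req, reply interface{},
	cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
	start := time.Now()
	err := invoker(ctx, method, req, reply, cc, opts...)
	log.Printf("%s took %v (err=%v)", method, time.Since(start), err)
	return err
}

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithInsecure(), grpc.WithUnaryInterceptor(timingInterceptor))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}
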
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/internal/internal.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/internal/internal.go
index 5489143a..07083832 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/internal/internal.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/internal/internal.go
@@ -1,32 +1,17 @@
/*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/keepalive/keepalive.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/keepalive/keepalive.go
new file mode 100644
index 00000000..f8adc7e6
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/keepalive/keepalive.go
@@ -0,0 +1,65 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Package keepalive defines configurable parameters for point-to-point health checks.
+package keepalive
+
+import (
+ "time"
+)
+
+// ClientParameters is used to set keepalive parameters on the client-side.
+// These configure how the client will actively probe to notice when a connection is broken
+// and send pings so intermediaries will be aware of the liveness of the connection.
+// Make sure these parameters are set in coordination with the keepalive policy on the server,
+// as incompatible settings can result in the connection being closed.
+type ClientParameters struct {
+ // After a duration of this time, if the client doesn't see any activity, it pings the server to see if the transport is still alive.
+ Time time.Duration // The current default value is infinity.
+ // After having pinged for a keepalive check, the client waits for a duration of Timeout; if no activity is seen even after that,
+ // the connection is closed.
+ Timeout time.Duration // The current default value is 20 seconds.
+ // If true, client runs keepalive checks even with no active RPCs.
+ PermitWithoutStream bool // false by default.
+}
+
+// ServerParameters is used to set keepalive and max-age parameters on the server-side.
+type ServerParameters struct {
+ // MaxConnectionIdle is the amount of time after which an idle connection is closed by sending a GoAway.
+ // Idleness is measured from the most recent time the number of outstanding RPCs became zero, or from connection establishment.
+ MaxConnectionIdle time.Duration // The current default value is infinity.
+ // MaxConnectionAge is the maximum amount of time a connection may exist before it is closed by sending a GoAway.
+ // A random jitter of +/-10% will be added to MaxConnectionAge to spread out connection storms.
+ MaxConnectionAge time.Duration // The current default value is infinity.
+ // MaxConnectionAgeGrace is an additive period after MaxConnectionAge after which the connection will be forcibly closed.
+ MaxConnectionAgeGrace time.Duration // The current default value is infinity.
+ // After a duration of this time, if the server doesn't see any activity, it pings the client to see if the transport is still alive.
+ Time time.Duration // The current default value is 2 hours.
+ // After having pinged for a keepalive check, the server waits for a duration of Timeout; if no activity is seen even after that,
+ // the connection is closed.
+ Timeout time.Duration // The current default value is 20 seconds.
+}
+
+// EnforcementPolicy is used to set keepalive enforcement policy on the server-side.
+// Server will close connection with a client that violates this policy.
+type EnforcementPolicy struct {
+ // MinTime is the minimum amount of time a client should wait before sending a keepalive ping.
+ MinTime time.Duration // The current default value is 5 minutes.
+ // If true, server expects keepalive pings even when there are no active streams (RPCs).
+ PermitWithoutStream bool // false by default.
+}
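
For orientation, a sketch of wiring these parameters up, assuming the dial and server options WithKeepaliveParams, KeepaliveParams, and KeepaliveEnforcementPolicy present in gRPC-Go of this vintage; all durations and the target are arbitrary:

package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// Client side: ping after 30s of inactivity, give up 10s later.
	// These must be compatible with the server's EnforcementPolicy,
	// or the server may close the connection.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure(),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second,
			Timeout:             10 * time.Second,
			PermitWithoutStream: true,
		}))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Server side: the mirror-image options.
	_ = grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionIdle: 5 * time.Minute,
			Time:              2 * time.Hour,
			Timeout:           20 * time.Second,
		}),
		grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
			MinTime:             30 * time.Second,
			PermitWithoutStream: true,
		}),
	)
}
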
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/metadata/metadata.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/metadata/metadata.go
index 65dc5af5..be4f9e73 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/metadata/metadata.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/metadata/metadata.go
@@ -1,91 +1,53 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
// Package metadata defines the structure of the metadata supported by the gRPC library.
+// Please refer to https://grpc.io/docs/guides/wire.html for more information about custom-metadata.
package metadata // import "google.golang.org/grpc/metadata"
import (
- "encoding/base64"
"fmt"
"strings"
"golang.org/x/net/context"
)
-const (
- binHdrSuffix = "-bin"
-)
-
-// encodeKeyValue encodes key and value qualified for transmission via gRPC.
-// Transmitting binary headers violates HTTP/2 spec.
-// TODO(zhaoq): Maybe check if k is ASCII also.
-func encodeKeyValue(k, v string) (string, string) {
- k = strings.ToLower(k)
- if strings.HasSuffix(k, binHdrSuffix) {
- val := base64.StdEncoding.EncodeToString([]byte(v))
- v = string(val)
- }
- return k, v
-}
-
-// DecodeKeyValue returns the original key and value corresponding to the
-// encoded data in k, v.
-// If k is a binary header and v contains comma, v is split on comma before decoded,
-// and the decoded v will be joined with comma before returned.
+// DecodeKeyValue returns k, v, nil. It is deprecated and should not be used.
func DecodeKeyValue(k, v string) (string, string, error) {
- if !strings.HasSuffix(k, binHdrSuffix) {
- return k, v, nil
- }
- vvs := strings.Split(v, ",")
- for i, vv := range vvs {
- val, err := base64.StdEncoding.DecodeString(vv)
- if err != nil {
- return "", "", err
- }
- vvs[i] = string(val)
- }
- return k, strings.Join(vvs, ","), nil
+ return k, v, nil
}
// MD is a mapping from metadata keys to values. Users should use the following
// two convenience functions New and Pairs to generate MD.
type MD map[string][]string
-// New creates a MD from given key-value map.
+// New creates an MD from a given key-value map.
+//
+// Only the following ASCII characters are allowed in keys:
+// - digits: 0-9
+// - uppercase letters: A-Z (normalized to lower)
+// - lowercase letters: a-z
+// - special characters: -_.
+// Uppercase letters are automatically converted to lowercase.
func New(m map[string]string) MD {
md := MD{}
- for k, v := range m {
- key, val := encodeKeyValue(k, v)
+ for k, val := range m {
+ key := strings.ToLower(k)
md[key] = append(md[key], val)
}
return md
@@ -93,19 +55,25 @@ func New(m map[string]string) MD {
// Pairs returns an MD formed by the mapping of key, value ...
// Pairs panics if len(kv) is odd.
+//
+// Only the following ASCII characters are allowed in keys:
+// - digits: 0-9
+// - uppercase letters: A-Z (normalized to lower)
+// - lowercase letters: a-z
+// - special characters: -_.
+// Uppercase letters are automatically converted to lowercase.
func Pairs(kv ...string) MD {
if len(kv)%2 == 1 {
panic(fmt.Sprintf("metadata: Pairs got the odd number of input pairs for metadata: %d", len(kv)))
}
md := MD{}
- var k string
+ var key string
for i, s := range kv {
if i%2 == 0 {
- k = s
+ key = strings.ToLower(s)
continue
}
- key, val := encodeKeyValue(k, s)
- md[key] = append(md[key], val)
+ md[key] = append(md[key], s)
}
return md
}
@@ -120,9 +88,9 @@ func (md MD) Copy() MD {
return Join(md)
}
-// Join joins any number of MDs into a single MD.
+// Join joins any number of mds into a single MD.
// The order of values for each key is determined by the order in which
-// the MDs containing those values are presented to Join.
+// the mds containing those values are presented to Join.
func Join(mds ...MD) MD {
out := MD{}
for _, md := range mds {
@@ -133,17 +101,41 @@ func Join(mds ...MD) MD {
return out
}
-type mdKey struct{}
+type mdIncomingKey struct{}
+type mdOutgoingKey struct{}
-// NewContext creates a new context with md attached.
+// NewContext is a wrapper for NewOutgoingContext(ctx, md). Deprecated.
func NewContext(ctx context.Context, md MD) context.Context {
- return context.WithValue(ctx, mdKey{}, md)
+ return NewOutgoingContext(ctx, md)
}
-// FromContext returns the MD in ctx if it exists.
-// The returned md should be immutable, writing to it may cause races.
-// Modification should be made to the copies of the returned md.
+// NewIncomingContext creates a new context with incoming md attached.
+func NewIncomingContext(ctx context.Context, md MD) context.Context {
+ return context.WithValue(ctx, mdIncomingKey{}, md)
+}
+
+// NewOutgoingContext creates a new context with outgoing md attached.
+func NewOutgoingContext(ctx context.Context, md MD) context.Context {
+ return context.WithValue(ctx, mdOutgoingKey{}, md)
+}
+
+// FromContext is a wrapper for FromIncomingContext(ctx). Deprecated.
func FromContext(ctx context.Context) (md MD, ok bool) {
- md, ok = ctx.Value(mdKey{}).(MD)
+ return FromIncomingContext(ctx)
+}
+
+// FromIncomingContext returns the incoming metadata in ctx if it exists. The
+// returned MD should not be modified. Writing to it may cause races.
+// Modification should be made to copies of the returned MD.
+func FromIncomingContext(ctx context.Context) (md MD, ok bool) {
+ md, ok = ctx.Value(mdIncomingKey{}).(MD)
+ return
+}
+
+// FromOutgoingContext returns the outgoing metadata in ctx if it exists. The
+// returned MD should not be modified. Writing to it may cause races.
+// Modification should be made to copies of the returned MD.
+func FromOutgoingContext(ctx context.Context) (md MD, ok bool) {
+ md, ok = ctx.Value(mdOutgoingKey{}).(MD)
return
}
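
A short sketch of the incoming/outgoing split introduced above (keys and values are made up):

package main

import (
	"fmt"

	"golang.org/x/net/context"
	"google.golang.org/grpc/metadata"
)

func main() {
	// Keys are lowercased; Pairs panics on an odd argument count.
	md := metadata.Pairs("User-Agent", "demo", "x-trace-id", "abc123")

	// Attach to a context for an outgoing RPC...
	ctx := metadata.NewOutgoingContext(context.Background(), md)

	// ...and read it back; a server handler would use FromIncomingContext
	// instead, once the transport has copied metadata to the incoming side.
	out, _ := metadata.FromOutgoingContext(ctx)
	fmt.Println(out["user-agent"]) // [demo]: the key was normalized to lowercase
}
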
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/dns_resolver.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/dns_resolver.go
new file mode 100644
index 00000000..efd37e30
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/dns_resolver.go
@@ -0,0 +1,292 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package naming
+
+import (
+ "errors"
+ "fmt"
+ "net"
+ "strconv"
+ "time"
+
+ "golang.org/x/net/context"
+ "google.golang.org/grpc/grpclog"
+)
+
+const (
+ defaultPort = "443"
+ defaultFreq = time.Minute * 30
+)
+
+var (
+ errMissingAddr = errors.New("missing address")
+ errWatcherClose = errors.New("watcher has been closed")
+)
+
+// NewDNSResolverWithFreq creates a DNS Resolver that can resolve DNS names, and
+// create watchers that poll the DNS server using the frequency set by freq.
+func NewDNSResolverWithFreq(freq time.Duration) (Resolver, error) {
+ return &dnsResolver{freq: freq}, nil
+}
+
+// NewDNSResolver creates a DNS Resolver that can resolve DNS names, and create
+// watchers that poll the DNS server using the default frequency defined by defaultFreq.
+func NewDNSResolver() (Resolver, error) {
+ return NewDNSResolverWithFreq(defaultFreq)
+}
+
+// dnsResolver handles name resolution for names following the DNS scheme
+type dnsResolver struct {
+ // frequency of polling the DNS server that the watchers created by this resolver will use.
+ freq time.Duration
+}
+
+// formatIP returns ok = false if addr is not a valid textual representation of an IP address.
+// If addr is an IPv4 address, return the addr and ok = true.
+// If addr is an IPv6 address, return the addr enclosed in square brackets and ok = true.
+func formatIP(addr string) (addrIP string, ok bool) {
+ ip := net.ParseIP(addr)
+ if ip == nil {
+ return "", false
+ }
+ if ip.To4() != nil {
+ return addr, true
+ }
+ return "[" + addr + "]", true
+}
+
+// parseTarget takes the user input target string and returns formatted host and port info.
+// If target doesn't specify a port, the port is set to defaultPort.
+// If target is in IPv6 format and the host name is enclosed in square brackets, the brackets
+// are stripped when setting the host.
+// examples:
+// target: "www.google.com" returns host: "www.google.com", port: "443"
+// target: "ipv4-host:80" returns host: "ipv4-host", port: "80"
+// target: "[ipv6-host]" returns host: "ipv6-host", port: "443"
+// target: ":80" returns host: "localhost", port: "80"
+// target: ":" returns host: "localhost", port: "443"
+func parseTarget(target string) (host, port string, err error) {
+ if target == "" {
+ return "", "", errMissingAddr
+ }
+
+ if ip := net.ParseIP(target); ip != nil {
+ // target is an IPv4 or IPv6(without brackets) address
+ return target, defaultPort, nil
+ }
+ if host, port, err := net.SplitHostPort(target); err == nil {
+ // target has port, i.e. ipv4-host:port, [ipv6-host]:port, host-name:port
+ if host == "" {
+ // Keep consistent with net.Dial(): If the host is empty, as in ":80", the local system is assumed.
+ host = "localhost"
+ }
+ if port == "" {
+ // If the port field is empty (target ends with a colon), e.g. "[::1]:", defaultPort is used.
+ port = defaultPort
+ }
+ return host, port, nil
+ }
+ if host, port, err := net.SplitHostPort(target + ":" + defaultPort); err == nil {
+ // target doesn't have port
+ return host, port, nil
+ }
+ return "", "", fmt.Errorf("invalid target address %v", target)
+}
+
+// Resolve creates a watcher that watches the name resolution of the target.
+func (r *dnsResolver) Resolve(target string) (Watcher, error) {
+ host, port, err := parseTarget(target)
+ if err != nil {
+ return nil, err
+ }
+
+ if net.ParseIP(host) != nil {
+ ipWatcher := &ipWatcher{
+ updateChan: make(chan *Update, 1),
+ }
+ host, _ = formatIP(host)
+ ipWatcher.updateChan <- &Update{Op: Add, Addr: host + ":" + port}
+ return ipWatcher, nil
+ }
+
+ ctx, cancel := context.WithCancel(context.Background())
+ return &dnsWatcher{
+ r: r,
+ host: host,
+ port: port,
+ ctx: ctx,
+ cancel: cancel,
+ t: time.NewTimer(0),
+ }, nil
+}
+
+// dnsWatcher watches for the name resolution update for a specific target
+type dnsWatcher struct {
+ r *dnsResolver
+ host string
+ port string
+ // The latest resolved address list
+ curAddrs []*Update
+ ctx context.Context
+ cancel context.CancelFunc
+ t *time.Timer
+}
+
+// ipWatcher watches for the name resolution update for an IP address.
+type ipWatcher struct {
+ updateChan chan *Update
+}
+
+// Next returns the address resolution Update for the target. For an IP address,
+// the resolution is the address itself, so polling the name server is unnecessary.
+// Therefore, Next() returns an Update the first time it is called, and blocks
+// on all subsequent calls, as no further Update exists until the watcher is closed.
+func (i *ipWatcher) Next() ([]*Update, error) {
+ u, ok := <-i.updateChan
+ if !ok {
+ return nil, errWatcherClose
+ }
+ return []*Update{u}, nil
+}
+
+// Close closes the ipWatcher.
+func (i *ipWatcher) Close() {
+ close(i.updateChan)
+}
+
+// AddressType indicates the address type returned by name resolution.
+type AddressType uint8
+
+const (
+ // Backend indicates the server is a backend server.
+ Backend AddressType = iota
+ // GRPCLB indicates the server is a grpclb load balancer.
+ GRPCLB
+)
+
+// AddrMetadataGRPCLB contains the information the name resolver for grpclb should provide. The
+// name resolver used by the grpclb balancer is required to provide this type of metadata in
+// its address updates.
+type AddrMetadataGRPCLB struct {
+ // AddrType is the type of server (grpc load balancer or backend).
+ AddrType AddressType
+ // ServerName is the name of the grpc load balancer. Used for authentication.
+ ServerName string
+}
+
+// compileUpdate compares the old resolved addresses and newly resolved addresses,
+// and generates an update list
+func (w *dnsWatcher) compileUpdate(newAddrs []*Update) []*Update {
+ update := make(map[Update]bool)
+ for _, u := range newAddrs {
+ update[*u] = true
+ }
+ for _, u := range w.curAddrs {
+ if _, ok := update[*u]; ok {
+ delete(update, *u)
+ continue
+ }
+ update[Update{Addr: u.Addr, Op: Delete, Metadata: u.Metadata}] = true
+ }
+ res := make([]*Update, 0, len(update))
+ for k := range update {
+ tmp := k
+ res = append(res, &tmp)
+ }
+ return res
+}
+
+func (w *dnsWatcher) lookupSRV() []*Update {
+ var newAddrs []*Update
+ _, srvs, err := lookupSRV(w.ctx, "grpclb", "tcp", w.host)
+ if err != nil {
+ grpclog.Infof("grpc: failed dns SRV record lookup due to %v.\n", err)
+ return nil
+ }
+ for _, s := range srvs {
+ lbAddrs, err := lookupHost(w.ctx, s.Target)
+ if err != nil {
+ grpclog.Warningf("grpc: failed load banlacer address dns lookup due to %v.\n", err)
+ continue
+ }
+ for _, a := range lbAddrs {
+ a, ok := formatIP(a)
+ if !ok {
+ grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err)
+ continue
+ }
+ newAddrs = append(newAddrs, &Update{Addr: a + ":" + strconv.Itoa(int(s.Port)),
+ Metadata: AddrMetadataGRPCLB{AddrType: GRPCLB, ServerName: s.Target}})
+ }
+ }
+ return newAddrs
+}
+
+func (w *dnsWatcher) lookupHost() []*Update {
+ var newAddrs []*Update
+ addrs, err := lookupHost(w.ctx, w.host)
+ if err != nil {
+ grpclog.Warningf("grpc: failed dns A record lookup due to %v.\n", err)
+ return nil
+ }
+ for _, a := range addrs {
+ a, ok := formatIP(a)
+ if !ok {
+ grpclog.Errorf("grpc: failed IP parsing due to %v.\n", err)
+ continue
+ }
+ newAddrs = append(newAddrs, &Update{Addr: a + ":" + w.port})
+ }
+ return newAddrs
+}
+
+func (w *dnsWatcher) lookup() []*Update {
+ newAddrs := w.lookupSRV()
+ if newAddrs == nil {
+ // If failed to get any balancer address (either no corresponding SRV for the
+ // target, or caused by failure during resolution/parsing of the balancer target),
+ // return any A record info available.
+ newAddrs = w.lookupHost()
+ }
+ result := w.compileUpdate(newAddrs)
+ w.curAddrs = newAddrs
+ return result
+}
+
+// Next returns the resolved address update (delta) for the target. If there's no
+// change, it waits for the polling interval (30 minutes by default) and tries to resolve again.
+func (w *dnsWatcher) Next() ([]*Update, error) {
+ for {
+ select {
+ case <-w.ctx.Done():
+ return nil, errWatcherClose
+ case <-w.t.C:
+ }
+ result := w.lookup()
+ // Next lookup should happen after an interval defined by w.r.freq.
+ w.t.Reset(w.r.freq)
+ if len(result) > 0 {
+ return result, nil
+ }
+ }
+}
+
+func (w *dnsWatcher) Close() {
+ w.cancel()
+}
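
An illustrative consumer of the watcher above (the target and polling frequency are arbitrary; error handling is simplified):

package main

import (
	"log"
	"time"

	"google.golang.org/grpc/naming"
)

func main() {
	// Poll every minute instead of the 30-minute default.
	r, err := naming.NewDNSResolverWithFreq(time.Minute)
	if err != nil {
		log.Fatal(err)
	}

	// Per parseTarget, a bare host resolves with the default port 443.
	w, err := r.Resolve("www.google.com")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Next blocks until the delta between lookups is non-empty.
	updates, err := w.Next()
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range updates {
		log.Printf("op=%v addr=%s", u.Op, u.Addr)
	}
}
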
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go17.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go17.go
new file mode 100644
index 00000000..a537b08c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go17.go
@@ -0,0 +1,34 @@
+// +build go1.6, !go1.8
+
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package naming
+
+import (
+ "net"
+
+ "golang.org/x/net/context"
+)
+
+var (
+ lookupHost = func(ctx context.Context, host string) ([]string, error) { return net.LookupHost(host) }
+ lookupSRV = func(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) {
+ return net.LookupSRV(service, proto, name)
+ }
+)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go18.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go18.go
new file mode 100644
index 00000000..b5a0f842
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/go18.go
@@ -0,0 +1,28 @@
+// +build go1.8
+
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package naming
+
+import "net"
+
+var (
+ lookupHost = net.DefaultResolver.LookupHost
+ lookupSRV = net.DefaultResolver.LookupSRV
+)
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/naming.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/naming.go
index c2e0871e..1af7e32f 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/naming.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/naming/naming.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/peer/peer.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/peer/peer.go
index bfa6205b..317b8b9d 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/peer/peer.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/peer/peer.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -42,7 +27,8 @@ import (
"google.golang.org/grpc/credentials"
)
-// Peer contains the information of the peer for an RPC.
+// Peer contains the information of the peer for an RPC, such as the address
+// and authentication information.
type Peer struct {
// Addr is the peer address.
Addr net.Addr
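
The expanded Peer doc comment is easiest to ground with a usage sketch. On the server side the transport attaches a Peer to the RPC context, and peer.FromContext retrieves it; the helper below is illustrative only (not from this diff):

    package example

    import (
        "fmt"

        "golang.org/x/net/context"
        "google.golang.org/grpc/peer"
    )

    // logCaller (hypothetical) reports who is on the other end of an RPC.
    func logCaller(ctx context.Context) {
        p, ok := peer.FromContext(ctx)
        if !ok {
            return // no peer attached, e.g. called outside an RPC
        }
        // Addr is the remote address; AuthInfo is non-nil on secured
        // connections (for TLS it holds a credentials.TLSInfo).
        fmt.Printf("call from %v (auth: %v)\n", p.Addr, p.AuthInfo)
    }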
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/proxy.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/proxy.go
new file mode 100644
index 00000000..2d40236e
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/proxy.go
@@ -0,0 +1,130 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package grpc
+
+import (
+ "bufio"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "net/http"
+ "net/http/httputil"
+ "net/url"
+
+ "golang.org/x/net/context"
+)
+
+var (
+ // errDisabled indicates that proxy is disabled for the address.
+ errDisabled = errors.New("proxy is disabled for the address")
+ // The following variable will be overwritten in the tests.
+ httpProxyFromEnvironment = http.ProxyFromEnvironment
+)
+
+func mapAddress(ctx context.Context, address string) (string, error) {
+ req := &http.Request{
+ URL: &url.URL{
+ Scheme: "https",
+ Host: address,
+ },
+ }
+ url, err := httpProxyFromEnvironment(req)
+ if err != nil {
+ return "", err
+ }
+ if url == nil {
+ return "", errDisabled
+ }
+ return url.Host, nil
+}
+
+// To read a response from a net.Conn, http.ReadResponse() takes a bufio.Reader.
+// It's possible that this reader reads more than what's need for the response and stores
+// those bytes in the buffer.
+// bufConn wraps the original net.Conn and the bufio.Reader to make sure we don't lose the
+// bytes in the buffer.
+type bufConn struct {
+ net.Conn
+ r io.Reader
+}
+
+func (c *bufConn) Read(b []byte) (int, error) {
+ return c.r.Read(b)
+}
+
+func doHTTPConnectHandshake(ctx context.Context, conn net.Conn, addr string) (_ net.Conn, err error) {
+ defer func() {
+ if err != nil {
+ conn.Close()
+ }
+ }()
+
+ req := (&http.Request{
+ Method: http.MethodConnect,
+ URL: &url.URL{Host: addr},
+ Header: map[string][]string{"User-Agent": {grpcUA}},
+ })
+
+ if err := sendHTTPRequest(ctx, req, conn); err != nil {
+ return nil, fmt.Errorf("failed to write the HTTP request: %v", err)
+ }
+
+ r := bufio.NewReader(conn)
+ resp, err := http.ReadResponse(r, req)
+ if err != nil {
+ return nil, fmt.Errorf("reading server HTTP response: %v", err)
+ }
+ defer resp.Body.Close()
+ if resp.StatusCode != http.StatusOK {
+ dump, err := httputil.DumpResponse(resp, true)
+ if err != nil {
+ return nil, fmt.Errorf("failed to do connect handshake, status code: %s", resp.Status)
+ }
+ return nil, fmt.Errorf("failed to do connect handshake, response: %q", dump)
+ }
+
+ return &bufConn{Conn: conn, r: r}, nil
+}
+
+// newProxyDialer returns a dialer that connects to the proxy first if necessary.
+// The returned dialer checks whether a proxy is needed, dials the proxy with the
+// provided dialer, performs the HTTP CONNECT handshake, and returns the connection.
+func newProxyDialer(dialer func(context.Context, string) (net.Conn, error)) func(context.Context, string) (net.Conn, error) {
+ return func(ctx context.Context, addr string) (conn net.Conn, err error) {
+ var skipHandshake bool
+ newAddr, err := mapAddress(ctx, addr)
+ if err != nil {
+ if err != errDisabled {
+ return nil, err
+ }
+ skipHandshake = true
+ newAddr = addr
+ }
+
+ conn, err = dialer(ctx, newAddr)
+ if err != nil {
+ return
+ }
+ if !skipHandshake {
+ conn, err = doHTTPConnectHandshake(ctx, conn, addr)
+ }
+ return
+ }
+}
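
mapAddress above delegates the proxy decision to http.ProxyFromEnvironment, so the standard HTTPS_PROXY/NO_PROXY variables decide whether the dialer first tunnels through a proxy via HTTP CONNECT. A standalone sketch of that decision (proxy and backend addresses are hypothetical; newProxyDialer itself is unexported):

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
        "os"
    )

    func main() {
        // Hypothetical environment; ProxyFromEnvironment also honors NO_PROXY.
        os.Setenv("HTTPS_PROXY", "http://proxy.internal:3128")

        // mapAddress builds exactly this kind of request: scheme https, host
        // set to the gRPC target address.
        req := &http.Request{URL: &url.URL{Scheme: "https", Host: "backend.example.com:443"}}

        u, err := http.ProxyFromEnvironment(req)
        if err != nil || u == nil {
            fmt.Println("proxy disabled; dial the backend directly")
            return
        }
        // The dialer connects to u.Host, sends CONNECT backend.example.com:443,
        // and only then runs the usual handshakes over the tunneled connection.
        fmt.Println("dial proxy at", u.Host)
    }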
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/rpc_util.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/rpc_util.go
index 2619d396..9b9d3883 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/rpc_util.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/rpc_util.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -37,47 +22,22 @@ import (
"bytes"
"compress/gzip"
"encoding/binary"
- "fmt"
"io"
"io/ioutil"
"math"
- "os"
+ "sync"
"time"
- "github.com/golang/protobuf/proto"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
+ "google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
+ "google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
-// Codec defines the interface gRPC uses to encode and decode messages.
-type Codec interface {
- // Marshal returns the wire format of v.
- Marshal(v interface{}) ([]byte, error)
- // Unmarshal parses the wire format into v.
- Unmarshal(data []byte, v interface{}) error
- // String returns the name of the Codec implementation. The returned
- // string will be used as part of content type in transmission.
- String() string
-}
-
-// protoCodec is a Codec implementation with protobuf. It is the default codec for gRPC.
-type protoCodec struct{}
-
-func (protoCodec) Marshal(v interface{}) ([]byte, error) {
- return proto.Marshal(v.(proto.Message))
-}
-
-func (protoCodec) Unmarshal(data []byte, v interface{}) error {
- return proto.Unmarshal(data, v.(proto.Message))
-}
-
-func (protoCodec) String() string {
- return "proto"
-}
-
// Compressor defines the interface gRPC uses to compress a message.
type Compressor interface {
// Do compresses p into w.
@@ -86,16 +46,24 @@ type Compressor interface {
Type() string
}
-// NewGZIPCompressor creates a Compressor based on GZIP.
-func NewGZIPCompressor() Compressor {
- return &gzipCompressor{}
+type gzipCompressor struct {
+ pool sync.Pool
}
-type gzipCompressor struct {
+// NewGZIPCompressor creates a Compressor based on GZIP.
+func NewGZIPCompressor() Compressor {
+ return &gzipCompressor{
+ pool: sync.Pool{
+ New: func() interface{} {
+ return gzip.NewWriter(ioutil.Discard)
+ },
+ },
+ }
}
func (c *gzipCompressor) Do(w io.Writer, p []byte) error {
- z := gzip.NewWriter(w)
+ z := c.pool.Get().(*gzip.Writer)
+ z.Reset(w)
if _, err := z.Write(p); err != nil {
return err
}
@@ -115,6 +83,7 @@ type Decompressor interface {
}
type gzipDecompressor struct {
+ pool sync.Pool
}
// NewGZIPDecompressor creates a Decompressor based on GZIP.
@@ -123,11 +92,26 @@ func NewGZIPDecompressor() Decompressor {
}
func (d *gzipDecompressor) Do(r io.Reader) ([]byte, error) {
- z, err := gzip.NewReader(r)
- if err != nil {
- return nil, err
+ var z *gzip.Reader
+ switch maybeZ := d.pool.Get().(type) {
+ case nil:
+ newZ, err := gzip.NewReader(r)
+ if err != nil {
+ return nil, err
+ }
+ z = newZ
+ case *gzip.Reader:
+ z = maybeZ
+ if err := z.Reset(r); err != nil {
+ d.pool.Put(z)
+ return nil, err
+ }
}
- defer z.Close()
+
+ defer func() {
+ z.Close()
+ d.pool.Put(z)
+ }()
return ioutil.ReadAll(z)
}
@@ -137,10 +121,14 @@ func (d *gzipDecompressor) Type() string {
// callInfo contains all related configuration and information about an RPC.
type callInfo struct {
- failFast bool
- headerMD metadata.MD
- trailerMD metadata.MD
- traceInfo traceInfo // in trace.go
+ failFast bool
+ headerMD metadata.MD
+ trailerMD metadata.MD
+ peer *peer.Peer
+ traceInfo traceInfo // in trace.go
+ maxReceiveMessageSize *int
+ maxSendMessageSize *int
+ creds credentials.PerRPCCredentials
}
var defaultCallInfo = callInfo{failFast: true}
@@ -157,6 +145,14 @@ type CallOption interface {
after(*callInfo)
}
+// EmptyCallOption does not alter the Call configuration.
+// It can be embedded in another structure to carry satellite data for use
+// by interceptors.
+type EmptyCallOption struct{}
+
+func (EmptyCallOption) before(*callInfo) error { return nil }
+func (EmptyCallOption) after(*callInfo) {}
+
type beforeCall func(c *callInfo) error
func (o beforeCall) before(c *callInfo) error { return o(c) }
@@ -183,12 +179,23 @@ func Trailer(md *metadata.MD) CallOption {
})
}
+// Peer returns a CallOption that retrieves peer information for a
+// unary RPC.
+func Peer(peer *peer.Peer) CallOption {
+ return afterCall(func(c *callInfo) {
+ if c.peer != nil {
+ *peer = *c.peer
+ }
+ })
+}
+
// FailFast configures the action to take when an RPC is attempted on broken
// connections or unreachable servers. If failfast is true, the RPC will fail
// immediately. Otherwise, the RPC client will block the call until a
// connection is available (or the call is canceled or times out) and will retry
// the call if it fails due to a transient error. Please refer to
-// https://github.com/grpc/grpc/blob/master/doc/fail_fast.md
+// https://github.com/grpc/grpc/blob/master/doc/wait-for-ready.md.
+// Note: failFast defaults to true.
func FailFast(failFast bool) CallOption {
return beforeCall(func(c *callInfo) error {
c.failFast = failFast
@@ -196,6 +203,31 @@ func FailFast(failFast bool) CallOption {
})
}
+// MaxCallRecvMsgSize returns a CallOption which sets the maximum message size the client can receive.
+func MaxCallRecvMsgSize(s int) CallOption {
+ return beforeCall(func(o *callInfo) error {
+ o.maxReceiveMessageSize = &s
+ return nil
+ })
+}
+
+// MaxCallSendMsgSize returns a CallOption which sets the maximum message size the client can send.
+func MaxCallSendMsgSize(s int) CallOption {
+ return beforeCall(func(o *callInfo) error {
+ o.maxSendMessageSize = &s
+ return nil
+ })
+}
+
+// PerRPCCredentials returns a CallOption that sets credentials.PerRPCCredentials
+// for a call.
+func PerRPCCredentials(creds credentials.PerRPCCredentials) CallOption {
+ return beforeCall(func(c *callInfo) error {
+ c.creds = creds
+ return nil
+ })
+}
+
// The format of the payload: compressed or not?
type payloadFormat uint8
@@ -212,7 +244,7 @@ type parser struct {
r io.Reader
// The header of a gRPC message. Find more detail
- // at http://www.grpc.io/docs/guides/wire.html.
+ // at https://grpc.io/docs/guides/wire.html.
header [5]byte
}
@@ -229,8 +261,8 @@ type parser struct {
// No other error values or types must be returned, which also means
// that the underlying io.Reader must not return an incompatible
// error.
-func (p *parser) recvMsg(maxMsgSize int) (pf payloadFormat, msg []byte, err error) {
- if _, err := io.ReadFull(p.r, p.header[:]); err != nil {
+func (p *parser) recvMsg(maxReceiveMessageSize int) (pf payloadFormat, msg []byte, err error) {
+ if _, err := p.r.Read(p.header[:]); err != nil {
return 0, nil, err
}
@@ -240,13 +272,13 @@ func (p *parser) recvMsg(maxMsgSize int) (pf payloadFormat, msg []byte, err erro
if length == 0 {
return pf, nil, nil
}
- if length > uint32(maxMsgSize) {
- return 0, nil, Errorf(codes.Internal, "grpc: received message length %d exceeding the max size %d", length, maxMsgSize)
+ if length > uint32(maxReceiveMessageSize) {
+ return 0, nil, Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", length, maxReceiveMessageSize)
}
// TODO(bradfitz,zhaoq): garbage. reuse buffer after proto decoding instead
// of making it for each message:
msg = make([]byte, int(length))
- if _, err := io.ReadFull(p.r, msg); err != nil {
+ if _, err := p.r.Read(msg); err != nil {
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
@@ -267,7 +299,7 @@ func encode(c Codec, msg interface{}, cp Compressor, cbuf *bytes.Buffer, outPayl
// TODO(zhaoq): optimize to reduce memory alloc and copying.
b, err = c.Marshal(msg)
if err != nil {
- return nil, err
+ return nil, Errorf(codes.Internal, "grpc: error while marshaling: %v", err.Error())
}
if outPayload != nil {
outPayload.Payload = msg
@@ -277,14 +309,14 @@ func encode(c Codec, msg interface{}, cp Compressor, cbuf *bytes.Buffer, outPayl
}
if cp != nil {
if err := cp.Do(cbuf, b); err != nil {
- return nil, err
+ return nil, Errorf(codes.Internal, "grpc: error while compressing: %v", err.Error())
}
b = cbuf.Bytes()
}
length = uint(len(b))
}
if length > math.MaxUint32 {
- return nil, Errorf(codes.InvalidArgument, "grpc: message too large (%d bytes)", length)
+ return nil, Errorf(codes.ResourceExhausted, "grpc: message too large (%d bytes)", length)
}
const (
@@ -325,8 +357,8 @@ func checkRecvPayload(pf payloadFormat, recvCompress string, dc Decompressor) er
return nil
}
-func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{}, maxMsgSize int, inPayload *stats.InPayload) error {
- pf, d, err := p.recvMsg(maxMsgSize)
+func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{}, maxReceiveMessageSize int, inPayload *stats.InPayload) error {
+ pf, d, err := p.recvMsg(maxReceiveMessageSize)
if err != nil {
return err
}
@@ -342,10 +374,10 @@ func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{
return Errorf(codes.Internal, "grpc: failed to decompress the received message %v", err)
}
}
- if len(d) > maxMsgSize {
+ if len(d) > maxReceiveMessageSize {
// TODO: Revisit the error code. Currently keep it consistent with java
// implementation.
- return Errorf(codes.Internal, "grpc: received a message of %d bytes exceeding %d limit", len(d), maxMsgSize)
+ return Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(d), maxReceiveMessageSize)
}
if err := c.Unmarshal(d, m); err != nil {
return Errorf(codes.Internal, "grpc: failed to unmarshal the received message %v", err)
@@ -360,116 +392,57 @@ func recv(p *parser, c Codec, s *transport.Stream, dc Decompressor, m interface{
return nil
}
-// rpcError defines the status from an RPC.
-type rpcError struct {
- code codes.Code
- desc string
+type rpcInfo struct {
+ bytesSent bool
+ bytesReceived bool
+}
+
+type rpcInfoContextKey struct{}
+
+func newContextWithRPCInfo(ctx context.Context) context.Context {
+ return context.WithValue(ctx, rpcInfoContextKey{}, &rpcInfo{})
}
-func (e *rpcError) Error() string {
- return fmt.Sprintf("rpc error: code = %d desc = %s", e.code, e.desc)
+func rpcInfoFromContext(ctx context.Context) (s *rpcInfo, ok bool) {
+ s, ok = ctx.Value(rpcInfoContextKey{}).(*rpcInfo)
+ return
+}
+
+func updateRPCInfoInContext(ctx context.Context, s rpcInfo) {
+ if ss, ok := rpcInfoFromContext(ctx); ok {
+ *ss = s
+ }
+ return
}
// Code returns the error code for err if it was produced by the rpc system.
// Otherwise, it returns codes.Unknown.
+//
+// Deprecated: use status.FromError and the Code method instead.
func Code(err error) codes.Code {
- if err == nil {
- return codes.OK
- }
- if e, ok := err.(*rpcError); ok {
- return e.code
+ if s, ok := status.FromError(err); ok {
+ return s.Code()
}
return codes.Unknown
}
// ErrorDesc returns the error description of err if it was produced by the rpc system.
// Otherwise, it returns err.Error() or empty string when err is nil.
+//
+// Deprecated: use status.FromError and the Message method instead.
func ErrorDesc(err error) string {
- if err == nil {
- return ""
- }
- if e, ok := err.(*rpcError); ok {
- return e.desc
+ if s, ok := status.FromError(err); ok {
+ return s.Message()
}
return err.Error()
}
// Errorf returns an error containing an error code and a description;
// Errorf returns nil if c is OK.
+//
+// Deprecated: use status.Errorf instead.
func Errorf(c codes.Code, format string, a ...interface{}) error {
- if c == codes.OK {
- return nil
- }
- return &rpcError{
- code: c,
- desc: fmt.Sprintf(format, a...),
- }
-}
-
-// toRPCErr converts an error into a rpcError.
-func toRPCErr(err error) error {
- switch e := err.(type) {
- case *rpcError:
- return err
- case transport.StreamError:
- return &rpcError{
- code: e.Code,
- desc: e.Desc,
- }
- case transport.ConnectionError:
- return &rpcError{
- code: codes.Internal,
- desc: e.Desc,
- }
- default:
- switch err {
- case context.DeadlineExceeded:
- return &rpcError{
- code: codes.DeadlineExceeded,
- desc: err.Error(),
- }
- case context.Canceled:
- return &rpcError{
- code: codes.Canceled,
- desc: err.Error(),
- }
- case ErrClientConnClosing:
- return &rpcError{
- code: codes.FailedPrecondition,
- desc: err.Error(),
- }
- }
-
- }
- return Errorf(codes.Unknown, "%v", err)
-}
-
-// convertCode converts a standard Go error into its canonical code. Note that
-// this is only used to translate the error returned by the server applications.
-func convertCode(err error) codes.Code {
- switch err {
- case nil:
- return codes.OK
- case io.EOF:
- return codes.OutOfRange
- case io.ErrClosedPipe, io.ErrNoProgress, io.ErrShortBuffer, io.ErrShortWrite, io.ErrUnexpectedEOF:
- return codes.FailedPrecondition
- case os.ErrInvalid:
- return codes.InvalidArgument
- case context.Canceled:
- return codes.Canceled
- case context.DeadlineExceeded:
- return codes.DeadlineExceeded
- }
- switch {
- case os.IsExist(err):
- return codes.AlreadyExists
- case os.IsNotExist(err):
- return codes.NotFound
- case os.IsPermission(err):
- return codes.PermissionDenied
- }
- return codes.Unknown
+ return status.Errorf(c, format, a...)
}
// MethodConfig defines the configuration recommended by the service providers for a
@@ -479,24 +452,22 @@ type MethodConfig struct {
// WaitForReady indicates whether RPCs sent to this method should wait until
// the connection is ready by default (!failfast). The value specified via the
// gRPC client API will override the value set here.
- WaitForReady bool
+ WaitForReady *bool
// Timeout is the default timeout for RPCs sent to this method. The actual
// deadline used will be the minimum of the value specified here and the value
// set by the application via the gRPC client API. If either one is not set,
// then the other will be used. If neither is set, then the RPC has no deadline.
- Timeout time.Duration
+ Timeout *time.Duration
// MaxReqSize is the maximum allowed payload size for an individual request in a
- // stream (client->server) in bytes. The size which is measured is the serialized,
- // uncompressed payload in bytes. The actual value used is the minumum of the value
- // specified here and the value set by the application via the gRPC client API. If
- // either one is not set, then the other will be used. If neither is set, then the
- // built-in default is used.
- // TODO: support this.
- MaxReqSize uint64
+ // stream (client->server) in bytes. The size which is measured is the serialized
+ // payload after per-message compression (but before stream compression) in bytes.
+ // The actual value used is the minimum of the value specified here and the value set
+ // by the application via the gRPC client API. If either one is not set, then the other
+ // will be used. If neither is set, then the built-in default is used.
+ MaxReqSize *int
// MaxRespSize is the maximum allowed payload size for an individual response in a
// stream (server->client) in bytes.
- // TODO: support this.
- MaxRespSize uint64
+ MaxRespSize *int
}
// ServiceConfig is provided by the service provider and contains parameters for how
@@ -507,9 +478,38 @@ type ServiceConfig struct {
// via grpc.WithBalancer will override this.
LB Balancer
// Methods contains a map for the methods in this service.
+ // If there is an exact match for a method (i.e. /service/method) in the map, use the corresponding MethodConfig.
+ // If there's no exact match, look for the default config for the service (/service/) and use the corresponding MethodConfig if it exists.
+ // Otherwise, the method has no MethodConfig to use.
Methods map[string]MethodConfig
}
+func min(a, b *int) *int {
+ if *a < *b {
+ return a
+ }
+ return b
+}
+
+func getMaxSize(mcMax, doptMax *int, defaultVal int) *int {
+ if mcMax == nil && doptMax == nil {
+ return &defaultVal
+ }
+ if mcMax != nil && doptMax != nil {
+ return min(mcMax, doptMax)
+ }
+ if mcMax != nil {
+ return mcMax
+ }
+ return doptMax
+}
+
+// SupportPackageIsVersion3 is referenced from generated protocol buffer files.
+// The latest support package version is 4.
+// SupportPackageIsVersion3 is kept for compatibility. It will be removed in the
+// next support package version update.
+const SupportPackageIsVersion3 = true
+
// SupportPackageIsVersion4 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the grpc package.
//
@@ -517,3 +517,8 @@ type ServiceConfig struct {
// requires a synchronised update of grpc-go and protoc-gen-go. This constant
// should not be referenced from any other code.
const SupportPackageIsVersion4 = true
+
+// Version is the current grpc version.
+const Version = "1.6.0-dev"
+
+const grpcUA = "grpc-go/" + Version
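
The new per-call options compose with the existing Header/Trailer ones. A hedged client-side sketch (the method name and message values are hypothetical; grpc.Invoke is the low-level unary entry point that generated code uses):

    package example

    import (
        "golang.org/x/net/context"
        "google.golang.org/grpc"
        "google.golang.org/grpc/peer"
    )

    // callWithLimits (hypothetical) exercises the CallOptions added above.
    func callWithLimits(cc *grpc.ClientConn, in, out interface{}) (peer.Peer, error) {
        var p peer.Peer
        err := grpc.Invoke(context.Background(), "/example.Service/Method", in, out, cc,
            grpc.MaxCallRecvMsgSize(16<<20), // raise this call's receive cap to 16MB
            grpc.MaxCallSendMsgSize(1<<20),  // cap outgoing messages at 1MB
            grpc.FailFast(false),            // wait for a ready connection instead of failing fast
            grpc.Peer(&p),                   // populated once the RPC completes
        )
        return p, err
    }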
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/server.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/server.go
index 985226d6..42733e22 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/server.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -53,12 +38,19 @@ import (
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/internal"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
"google.golang.org/grpc/transport"
)
+const (
+ defaultServerMaxReceiveMessageSize = 1024 * 1024 * 4
+ defaultServerMaxSendMessageSize = 1024 * 1024 * 4
+)
+
type methodHandler func(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor UnaryServerInterceptor) (interface{}, error)
// MethodDesc represents an RPC service's method specification.
@@ -94,6 +86,7 @@ type Server struct {
mu sync.Mutex // guards following
lis map[net.Listener]bool
conns map[io.Closer]bool
+ serve bool
drain bool
ctx context.Context
cancel context.CancelFunc
@@ -105,24 +98,63 @@ type Server struct {
}
type options struct {
- creds credentials.TransportCredentials
- codec Codec
- cp Compressor
- dc Decompressor
- maxMsgSize int
- unaryInt UnaryServerInterceptor
- streamInt StreamServerInterceptor
- inTapHandle tap.ServerInHandle
- statsHandler stats.Handler
- maxConcurrentStreams uint32
- useHandlerImpl bool // use http.Handler-based server
+ creds credentials.TransportCredentials
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ unaryInt UnaryServerInterceptor
+ streamInt StreamServerInterceptor
+ inTapHandle tap.ServerInHandle
+ statsHandler stats.Handler
+ maxConcurrentStreams uint32
+ maxReceiveMessageSize int
+ maxSendMessageSize int
+ useHandlerImpl bool // use http.Handler-based server
+ unknownStreamDesc *StreamDesc
+ keepaliveParams keepalive.ServerParameters
+ keepalivePolicy keepalive.EnforcementPolicy
+ initialWindowSize int32
+ initialConnWindowSize int32
}
-var defaultMaxMsgSize = 1024 * 1024 * 4 // use 4MB as the default message size limit
+var defaultServerOptions = options{
+ maxReceiveMessageSize: defaultServerMaxReceiveMessageSize,
+ maxSendMessageSize: defaultServerMaxSendMessageSize,
+}
-// A ServerOption sets options.
+// A ServerOption sets options such as credentials, codec and keepalive parameters, etc.
type ServerOption func(*options)
+// InitialWindowSize returns a ServerOption that sets window size for stream.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func InitialWindowSize(s int32) ServerOption {
+ return func(o *options) {
+ o.initialWindowSize = s
+ }
+}
+
+// InitialConnWindowSize returns a ServerOption that sets window size for a connection.
+// The lower bound for window size is 64K and any value smaller than that will be ignored.
+func InitialConnWindowSize(s int32) ServerOption {
+ return func(o *options) {
+ o.initialConnWindowSize = s
+ }
+}
+
+// KeepaliveParams returns a ServerOption that sets keepalive and max-age parameters for the server.
+func KeepaliveParams(kp keepalive.ServerParameters) ServerOption {
+ return func(o *options) {
+ o.keepaliveParams = kp
+ }
+}
+
+// KeepaliveEnforcementPolicy returns a ServerOption that sets keepalive enforcement policy for the server.
+func KeepaliveEnforcementPolicy(kep keepalive.EnforcementPolicy) ServerOption {
+ return func(o *options) {
+ o.keepalivePolicy = kep
+ }
+}
+
// CustomCodec returns a ServerOption that sets a codec for message marshaling and unmarshaling.
func CustomCodec(codec Codec) ServerOption {
return func(o *options) {
@@ -144,11 +176,25 @@ func RPCDecompressor(dc Decompressor) ServerOption {
}
}
-// MaxMsgSize returns a ServerOption to set the max message size in bytes for inbound mesages.
-// If this is not set, gRPC uses the default 4MB.
+// MaxMsgSize returns a ServerOption to set the max message size in bytes the server can receive.
+// If this is not set, gRPC uses the default limit. Deprecated: use MaxRecvMsgSize instead.
func MaxMsgSize(m int) ServerOption {
+ return MaxRecvMsgSize(m)
+}
+
+// MaxRecvMsgSize returns a ServerOption to set the max message size in bytes the server can receive.
+// If this is not set, gRPC uses the default 4MB.
+func MaxRecvMsgSize(m int) ServerOption {
return func(o *options) {
- o.maxMsgSize = m
+ o.maxReceiveMessageSize = m
+ }
+}
+
+// MaxSendMsgSize returns a ServerOption to set the max message size in bytes the server can send.
+// If this is not set, gRPC uses the default 4MB.
+func MaxSendMsgSize(m int) ServerOption {
+ return func(o *options) {
+ o.maxSendMessageSize = m
}
}
@@ -173,7 +219,7 @@ func Creds(c credentials.TransportCredentials) ServerOption {
func UnaryInterceptor(i UnaryServerInterceptor) ServerOption {
return func(o *options) {
if o.unaryInt != nil {
- panic("The unary server interceptor has been set.")
+ panic("The unary server interceptor was already set and may not be reset.")
}
o.unaryInt = i
}
@@ -184,7 +230,7 @@ func UnaryInterceptor(i UnaryServerInterceptor) ServerOption {
func StreamInterceptor(i StreamServerInterceptor) ServerOption {
return func(o *options) {
if o.streamInt != nil {
- panic("The stream server interceptor has been set.")
+ panic("The stream server interceptor was already set and may not be reset.")
}
o.streamInt = i
}
@@ -195,7 +241,7 @@ func StreamInterceptor(i StreamServerInterceptor) ServerOption {
func InTapHandle(h tap.ServerInHandle) ServerOption {
return func(o *options) {
if o.inTapHandle != nil {
- panic("The tap handle has been set.")
+ panic("The tap handle was already set and may not be reset.")
}
o.inTapHandle = h
}
@@ -208,11 +254,28 @@ func StatsHandler(h stats.Handler) ServerOption {
}
}
+// UnknownServiceHandler returns a ServerOption that allows for adding a custom
+// unknown service handler. The provided method is a bidi-streaming RPC service
+// handler that will be invoked instead of returning the "unimplemented" gRPC
+// error whenever a request is received for an unregistered service or method.
+// The handling function has full access to the Context of the request and the
+// stream, and the invocation passes through interceptors.
+func UnknownServiceHandler(streamHandler StreamHandler) ServerOption {
+ return func(o *options) {
+ o.unknownStreamDesc = &StreamDesc{
+ StreamName: "unknown_service_handler",
+ Handler: streamHandler,
+ // We need to assume that the users of the streamHandler will want to use both.
+ ClientStreams: true,
+ ServerStreams: true,
+ }
+ }
+}
+
// NewServer creates a gRPC server which has no service registered and has not
// started to accept requests yet.
func NewServer(opt ...ServerOption) *Server {
- var opts options
- opts.maxMsgSize = defaultMaxMsgSize
+ opts := defaultServerOptions
for _, o := range opt {
o(&opts)
}
@@ -251,8 +314,8 @@ func (s *Server) errorf(format string, a ...interface{}) {
}
}
-// RegisterService register a service and its implementation to the gRPC
-// server. Called from the IDL generated code. This must be called before
+// RegisterService registers a service and its implementation to the gRPC
+// server. It is called from the IDL generated code. This must be called before
// invoking Serve.
func (s *Server) RegisterService(sd *ServiceDesc, ss interface{}) {
ht := reflect.TypeOf(sd.HandlerType).Elem()
@@ -267,6 +330,9 @@ func (s *Server) register(sd *ServiceDesc, ss interface{}) {
s.mu.Lock()
defer s.mu.Unlock()
s.printf("RegisterService(%q)", sd.ServiceName)
+ if s.serve {
+ grpclog.Fatalf("grpc: Server.RegisterService after Server.Serve for %q", sd.ServiceName)
+ }
if _, ok := s.m[sd.ServiceName]; ok {
grpclog.Fatalf("grpc: Server.RegisterService found duplicate service registration for %q", sd.ServiceName)
}
@@ -297,7 +363,7 @@ type MethodInfo struct {
IsServerStream bool
}
-// ServiceInfo contains unary RPC method info, streaming RPC methid info and metadata for a service.
+// ServiceInfo contains unary RPC method info, streaming RPC method info and metadata for a service.
type ServiceInfo struct {
Methods []MethodInfo
// Metadata is the metadata specified in ServiceDesc when registering service.
@@ -355,6 +421,7 @@ func (s *Server) useTransportAuthenticator(rawConn net.Conn) (net.Conn, credenti
func (s *Server) Serve(lis net.Listener) error {
s.mu.Lock()
s.printf("serving")
+ s.serve = true
if s.lis == nil {
s.mu.Unlock()
lis.Close()
@@ -390,10 +457,12 @@ func (s *Server) Serve(lis net.Listener) error {
s.mu.Lock()
s.printf("Accept error: %v; retrying in %v", err, tempDelay)
s.mu.Unlock()
+ timer := time.NewTimer(tempDelay)
select {
- case <-time.After(tempDelay):
+ case <-timer.C:
case <-s.ctx.Done():
}
+ timer.Stop()
continue
}
s.mu.Lock()
@@ -416,7 +485,7 @@ func (s *Server) handleRawConn(rawConn net.Conn) {
s.mu.Lock()
s.errorf("ServerHandshake(%q) failed: %v", rawConn.RemoteAddr(), err)
s.mu.Unlock()
- grpclog.Printf("grpc: Server.Serve failed to complete security handshake from %q: %v", rawConn.RemoteAddr(), err)
+ grpclog.Warningf("grpc: Server.Serve failed to complete security handshake from %q: %v", rawConn.RemoteAddr(), err)
// If serverHandShake returns ErrConnDispatched, keep rawConn open.
if err != credentials.ErrConnDispatched {
rawConn.Close()
@@ -446,10 +515,14 @@ func (s *Server) handleRawConn(rawConn net.Conn) {
// transport.NewServerTransport).
func (s *Server) serveHTTP2Transport(c net.Conn, authInfo credentials.AuthInfo) {
config := &transport.ServerConfig{
- MaxStreams: s.opts.maxConcurrentStreams,
- AuthInfo: authInfo,
- InTapHandle: s.opts.inTapHandle,
- StatsHandler: s.opts.statsHandler,
+ MaxStreams: s.opts.maxConcurrentStreams,
+ AuthInfo: authInfo,
+ InTapHandle: s.opts.inTapHandle,
+ StatsHandler: s.opts.statsHandler,
+ KeepaliveParams: s.opts.keepaliveParams,
+ KeepalivePolicy: s.opts.keepalivePolicy,
+ InitialWindowSize: s.opts.initialWindowSize,
+ InitialConnWindowSize: s.opts.initialConnWindowSize,
}
st, err := transport.NewServerTransport("http2", c, config)
if err != nil {
@@ -457,7 +530,7 @@ func (s *Server) serveHTTP2Transport(c net.Conn, authInfo credentials.AuthInfo)
s.errorf("NewServerTransport(%q) failed: %v", c.RemoteAddr(), err)
s.mu.Unlock()
c.Close()
- grpclog.Println("grpc: Server.Serve failed to create ServerTransport: ", err)
+ grpclog.Warningln("grpc: Server.Serve failed to create ServerTransport: ", err)
return
}
if !s.addConn(st) {
@@ -515,6 +588,30 @@ func (s *Server) serveUsingHandler(conn net.Conn) {
})
}
+// ServeHTTP implements the Go standard library's http.Handler
+// interface by responding to the gRPC request r, by looking up
+// the requested gRPC method in the gRPC server s.
+//
+// The provided HTTP request must have arrived on an HTTP/2
+// connection. When using the Go standard library's server,
+// practically this means that the Request must also have arrived
+// over TLS.
+//
+// To share one port (such as 443 for https) between gRPC and an
+// existing http.Handler, use a root http.Handler such as:
+//
+// if r.ProtoMajor == 2 && strings.HasPrefix(
+// r.Header.Get("Content-Type"), "application/grpc") {
+// grpcServer.ServeHTTP(w, r)
+// } else {
+// yourMux.ServeHTTP(w, r)
+// }
+//
+// Note that ServeHTTP uses Go's HTTP/2 server implementation which is totally
+// separate from grpc-go's HTTP/2 server. Performance and features may vary
+// between the two paths. ServeHTTP does not support some gRPC features
+// available through grpc-go's HTTP/2 server, and it is currently EXPERIMENTAL
+// and subject to change.
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
st, err := transport.NewServerHandlerTransport(w, r)
if err != nil {
@@ -581,14 +678,11 @@ func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Str
}
p, err := encode(s.opts.codec, msg, cp, cbuf, outPayload)
if err != nil {
- // This typically indicates a fatal issue (e.g., memory
- // corruption or hardware faults) the application program
- // cannot handle.
- //
- // TODO(zhaoq): There exist other options also such as only closing the
- // faulty stream locally and remotely (Other streams can keep going). Find
- // the optimal option.
- grpclog.Fatalf("grpc: Server failed to encode response %v", err)
+ grpclog.Errorln("grpc: server failed to encode response: ", err)
+ return err
+ }
+ if len(p) > s.opts.maxSendMessageSize {
+ return status.Errorf(codes.ResourceExhausted, "grpc: trying to send message larger than max (%d vs. %d)", len(p), s.opts.maxSendMessageSize)
}
err = t.Write(stream, p, opts)
if err == nil && outPayload != nil {
@@ -605,9 +699,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
BeginTime: time.Now(),
}
sh.HandleRPC(stream.Context(), begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
EndTime: time.Now(),
}
@@ -615,8 +707,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
end.Error = toRPCErr(err)
}
sh.HandleRPC(stream.Context(), end)
- }
- }()
+ }()
+ }
if trInfo != nil {
defer trInfo.tr.Finish()
trInfo.firstLine.client = false
@@ -633,136 +725,137 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.
stream.SetSendCompress(s.opts.cp.Type())
}
p := &parser{r: stream}
- for {
- pf, req, err := p.recvMsg(s.opts.maxMsgSize)
- if err == io.EOF {
- // The entire stream is done (for unary RPC only).
- return err
- }
- if err == io.ErrUnexpectedEOF {
- err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error())
- }
- if err != nil {
- switch err := err.(type) {
- case *rpcError:
- if e := t.WriteStatus(stream, err.code, err.desc); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
+ pf, req, err := p.recvMsg(s.opts.maxReceiveMessageSize)
+ if err == io.EOF {
+ // The entire stream is done (for unary RPC only).
+ return err
+ }
+ if err == io.ErrUnexpectedEOF {
+ err = Errorf(codes.Internal, io.ErrUnexpectedEOF.Error())
+ }
+ if err != nil {
+ if st, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, st); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e)
+ }
+ } else {
+ switch st := err.(type) {
case transport.ConnectionError:
// Nothing to do here.
case transport.StreamError:
- if e := t.WriteStatus(stream, err.Code, err.Desc); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
+ if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
default:
- panic(fmt.Sprintf("grpc: Unexpected error (%T) from recvMsg: %v", err, err))
+ panic(fmt.Sprintf("grpc: Unexpected error (%T) from recvMsg: %v", st, st))
}
- return err
}
+ return err
+ }
- if err := checkRecvPayload(pf, stream.RecvCompress(), s.opts.dc); err != nil {
- switch err := err.(type) {
- case *rpcError:
- if e := t.WriteStatus(stream, err.code, err.desc); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
- return err
- default:
- if e := t.WriteStatus(stream, codes.Internal, err.Error()); e != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", e)
- }
- // TODO checkRecvPayload always return RPC error. Add a return here if necessary.
+ if err := checkRecvPayload(pf, stream.RecvCompress(), s.opts.dc); err != nil {
+ if st, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, st); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
+ return err
}
- var inPayload *stats.InPayload
- if sh != nil {
- inPayload = &stats.InPayload{
- RecvTime: time.Now(),
- }
+ if e := t.WriteStatus(stream, status.New(codes.Internal, err.Error())); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e)
}
- statusCode := codes.OK
- statusDesc := ""
- df := func(v interface{}) error {
- if inPayload != nil {
- inPayload.WireLength = len(req)
- }
- if pf == compressionMade {
- var err error
- req, err = s.opts.dc.Do(bytes.NewReader(req))
- if err != nil {
- if err := t.WriteStatus(stream, codes.Internal, err.Error()); err != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status %v", err)
- }
- return Errorf(codes.Internal, err.Error())
- }
- }
- if len(req) > s.opts.maxMsgSize {
- // TODO: Revisit the error code. Currently keep it consistent with
- // java implementation.
- statusCode = codes.Internal
- statusDesc = fmt.Sprintf("grpc: server received a message of %d bytes exceeding %d limit", len(req), s.opts.maxMsgSize)
- }
- if err := s.opts.codec.Unmarshal(req, v); err != nil {
- return err
- }
- if inPayload != nil {
- inPayload.Payload = v
- inPayload.Data = req
- inPayload.Length = len(req)
- sh.HandleRPC(stream.Context(), inPayload)
- }
- if trInfo != nil {
- trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true)
- }
- return nil
+
+ // TODO checkRecvPayload always return RPC error. Add a return here if necessary.
+ }
+ var inPayload *stats.InPayload
+ if sh != nil {
+ inPayload = &stats.InPayload{
+ RecvTime: time.Now(),
}
- reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt)
- if appErr != nil {
- if err, ok := appErr.(*rpcError); ok {
- statusCode = err.code
- statusDesc = err.desc
- } else {
- statusCode = convertCode(appErr)
- statusDesc = appErr.Error()
- }
- if trInfo != nil && statusCode != codes.OK {
- trInfo.tr.LazyLog(stringer(statusDesc), true)
- trInfo.tr.SetError()
- }
- if err := t.WriteStatus(stream, statusCode, statusDesc); err != nil {
- grpclog.Printf("grpc: Server.processUnaryRPC failed to write status: %v", err)
+ }
+ df := func(v interface{}) error {
+ if inPayload != nil {
+ inPayload.WireLength = len(req)
+ }
+ if pf == compressionMade {
+ var err error
+ req, err = s.opts.dc.Do(bytes.NewReader(req))
+ if err != nil {
+ return Errorf(codes.Internal, err.Error())
}
- return Errorf(statusCode, statusDesc)
+ }
+ if len(req) > s.opts.maxReceiveMessageSize {
+ // TODO: Revisit the error code. Currently keep it consistent with
+ // java implementation.
+ return status.Errorf(codes.ResourceExhausted, "grpc: received message larger than max (%d vs. %d)", len(req), s.opts.maxReceiveMessageSize)
+ }
+ if err := s.opts.codec.Unmarshal(req, v); err != nil {
+ return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err)
+ }
+ if inPayload != nil {
+ inPayload.Payload = v
+ inPayload.Data = req
+ inPayload.Length = len(req)
+ sh.HandleRPC(stream.Context(), inPayload)
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(&payload{sent: false, msg: v}, true)
+ }
+ return nil
+ }
+ reply, appErr := md.Handler(srv.server, stream.Context(), df, s.opts.unaryInt)
+ if appErr != nil {
+ appStatus, ok := status.FromError(appErr)
+ if !ok {
+ // Convert appErr if it is not a grpc status error.
+ appErr = status.Error(convertCode(appErr), appErr.Error())
+ appStatus, _ = status.FromError(appErr)
}
if trInfo != nil {
- trInfo.tr.LazyLog(stringer("OK"), false)
+ trInfo.tr.LazyLog(stringer(appStatus.Message()), true)
+ trInfo.tr.SetError()
}
- opts := &transport.Options{
- Last: true,
- Delay: false,
+ if e := t.WriteStatus(stream, appStatus); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e)
+ }
+ return appErr
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(stringer("OK"), false)
+ }
+ opts := &transport.Options{
+ Last: true,
+ Delay: false,
+ }
+ if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil {
+ if err == io.EOF {
+ // The entire stream is done (for unary RPC only).
+ return err
}
- if err := s.sendResponse(t, stream, reply, s.opts.cp, opts); err != nil {
- switch err := err.(type) {
+ if s, ok := status.FromError(err); ok {
+ if e := t.WriteStatus(stream, s); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status: %v", e)
+ }
+ } else {
+ switch st := err.(type) {
case transport.ConnectionError:
// Nothing to do here.
case transport.StreamError:
- statusCode = err.Code
- statusDesc = err.Desc
+ if e := t.WriteStatus(stream, status.New(st.Code, st.Desc)); e != nil {
+ grpclog.Warningf("grpc: Server.processUnaryRPC failed to write status %v", e)
+ }
default:
- statusCode = codes.Unknown
- statusDesc = err.Error()
+ panic(fmt.Sprintf("grpc: Unexpected error (%T) from sendResponse: %v", st, st))
}
- return err
}
- if trInfo != nil {
- trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true)
- }
- errWrite := t.WriteStatus(stream, statusCode, statusDesc)
- if statusCode != codes.OK {
- return Errorf(statusCode, statusDesc)
- }
- return errWrite
+ return err
+ }
+ if trInfo != nil {
+ trInfo.tr.LazyLog(&payload{sent: true, msg: reply}, true)
}
+ // TODO: Should we be logging if writing status failed here, like above?
+ // Should the logging be in WriteStatus? Should we ignore the WriteStatus
+ // error or allow the stats handler to see it?
+ return t.WriteStatus(stream, status.New(codes.OK, ""))
}
func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transport.Stream, srv *service, sd *StreamDesc, trInfo *traceInfo) (err error) {
@@ -772,9 +865,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
BeginTime: time.Now(),
}
sh.HandleRPC(stream.Context(), begin)
- }
- defer func() {
- if sh != nil {
+ defer func() {
end := &stats.End{
EndTime: time.Now(),
}
@@ -782,21 +873,22 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
end.Error = toRPCErr(err)
}
sh.HandleRPC(stream.Context(), end)
- }
- }()
+ }()
+ }
if s.opts.cp != nil {
stream.SetSendCompress(s.opts.cp.Type())
}
ss := &serverStream{
- t: t,
- s: stream,
- p: &parser{r: stream},
- codec: s.opts.codec,
- cp: s.opts.cp,
- dc: s.opts.dc,
- maxMsgSize: s.opts.maxMsgSize,
- trInfo: trInfo,
- statsHandler: sh,
+ t: t,
+ s: stream,
+ p: &parser{r: stream},
+ codec: s.opts.codec,
+ cp: s.opts.cp,
+ dc: s.opts.dc,
+ maxReceiveMessageSize: s.opts.maxReceiveMessageSize,
+ maxSendMessageSize: s.opts.maxSendMessageSize,
+ trInfo: trInfo,
+ statsHandler: sh,
}
if ss.cp != nil {
ss.cbuf = new(bytes.Buffer)
@@ -815,43 +907,47 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp
}()
}
var appErr error
+ var server interface{}
+ if srv != nil {
+ server = srv.server
+ }
if s.opts.streamInt == nil {
- appErr = sd.Handler(srv.server, ss)
+ appErr = sd.Handler(server, ss)
} else {
info := &StreamServerInfo{
FullMethod: stream.Method(),
IsClientStream: sd.ClientStreams,
IsServerStream: sd.ServerStreams,
}
- appErr = s.opts.streamInt(srv.server, ss, info, sd.Handler)
+ appErr = s.opts.streamInt(server, ss, info, sd.Handler)
}
if appErr != nil {
- if err, ok := appErr.(*rpcError); ok {
- ss.statusCode = err.code
- ss.statusDesc = err.desc
- } else if err, ok := appErr.(transport.StreamError); ok {
- ss.statusCode = err.Code
- ss.statusDesc = err.Desc
- } else {
- ss.statusCode = convertCode(appErr)
- ss.statusDesc = appErr.Error()
+ appStatus, ok := status.FromError(appErr)
+ if !ok {
+ switch err := appErr.(type) {
+ case transport.StreamError:
+ appStatus = status.New(err.Code, err.Desc)
+ default:
+ appStatus = status.New(convertCode(appErr), appErr.Error())
+ }
+ appErr = appStatus.Err()
+ }
+ if trInfo != nil {
+ ss.mu.Lock()
+ ss.trInfo.tr.LazyLog(stringer(appStatus.Message()), true)
+ ss.trInfo.tr.SetError()
+ ss.mu.Unlock()
}
+ t.WriteStatus(ss.s, appStatus)
+ // TODO: Should we log an error from WriteStatus here and below?
+ return appErr
}
if trInfo != nil {
ss.mu.Lock()
- if ss.statusCode != codes.OK {
- ss.trInfo.tr.LazyLog(stringer(ss.statusDesc), true)
- ss.trInfo.tr.SetError()
- } else {
- ss.trInfo.tr.LazyLog(stringer("OK"), false)
- }
+ ss.trInfo.tr.LazyLog(stringer("OK"), false)
ss.mu.Unlock()
}
- errWrite := t.WriteStatus(ss.s, ss.statusCode, ss.statusDesc)
- if ss.statusCode != codes.OK {
- return Errorf(ss.statusCode, ss.statusDesc)
- }
- return errWrite
+ return t.WriteStatus(ss.s, status.New(codes.OK, ""))
}
@@ -867,12 +963,12 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
trInfo.tr.SetError()
}
errDesc := fmt.Sprintf("malformed method name: %q", stream.Method())
- if err := t.WriteStatus(stream, codes.InvalidArgument, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.ResourceExhausted, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
}
- grpclog.Printf("grpc: Server.handleStream failed to write status: %v", err)
+ grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err)
}
if trInfo != nil {
trInfo.tr.Finish()
@@ -883,17 +979,21 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
method := sm[pos+1:]
srv, ok := s.m[service]
if !ok {
+ if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil {
+ s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo)
+ return
+ }
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"Unknown service %v", []interface{}{service}}, true)
trInfo.tr.SetError()
}
errDesc := fmt.Sprintf("unknown service %v", service)
- if err := t.WriteStatus(stream, codes.Unimplemented, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
}
- grpclog.Printf("grpc: Server.handleStream failed to write status: %v", err)
+ grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err)
}
if trInfo != nil {
trInfo.tr.Finish()
@@ -913,13 +1013,17 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str
trInfo.tr.LazyLog(&fmtStringer{"Unknown method %v", []interface{}{method}}, true)
trInfo.tr.SetError()
}
+ if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil {
+ s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo)
+ return
+ }
errDesc := fmt.Sprintf("unknown method %v", method)
- if err := t.WriteStatus(stream, codes.Unimplemented, errDesc); err != nil {
+ if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil {
if trInfo != nil {
trInfo.tr.LazyLog(&fmtStringer{"%v", []interface{}{err}}, true)
trInfo.tr.SetError()
}
- grpclog.Printf("grpc: Server.handleStream failed to write status: %v", err)
+ grpclog.Warningf("grpc: Server.handleStream failed to write status: %v", err)
}
if trInfo != nil {
trInfo.tr.Finish()
@@ -957,8 +1061,9 @@ func (s *Server) Stop() {
s.mu.Unlock()
}
-// GracefulStop stops the gRPC server gracefully. It stops the server to accept new
-// connections and RPCs and blocks until all the pending RPCs are finished.
+// GracefulStop stops the gRPC server gracefully. It stops the server from
+// accepting new connections and RPCs and blocks until all the pending RPCs are
+// finished.
func (s *Server) GracefulStop() {
s.mu.Lock()
defer s.mu.Unlock()
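
The server options introduced above combine in the usual functional-options style. A hedged construction sketch (address, size limits, and keepalive values are illustrative, not recommendations):

    package main

    import (
        "log"
        "net"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/keepalive"
        "google.golang.org/grpc/status"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        s := grpc.NewServer(
            grpc.MaxRecvMsgSize(8<<20), // successor to the deprecated MaxMsgSize
            grpc.MaxSendMsgSize(8<<20),
            grpc.KeepaliveParams(keepalive.ServerParameters{
                MaxConnectionIdle: 5 * time.Minute,
            }),
            // Fallback for unregistered services/methods; a real proxy would
            // forward the raw stream rather than reject it.
            grpc.UnknownServiceHandler(func(srv interface{}, stream grpc.ServerStream) error {
                return status.Error(codes.Unimplemented, "no backend for this method")
            }),
        )
        log.Fatal(s.Serve(lis))
    }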
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/handlers.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/handlers.go
index 26e1a8e2..05b384c6 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/handlers.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/handlers.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -45,19 +30,22 @@ type ConnTagInfo struct {
RemoteAddr net.Addr
// LocalAddr is the local address of the corresponding connection.
LocalAddr net.Addr
- // TODO add QOS related fields.
}
// RPCTagInfo defines the relevant information needed by RPC context tagger.
type RPCTagInfo struct {
// FullMethodName is the RPC method in the format of /package.service/method.
FullMethodName string
+ // FailFast indicates if this RPC is failfast.
+ // This field is only valid on the client side; it is always false on the server side.
+ FailFast bool
}
// Handler defines the interface for the related stats handling (e.g., RPCs, connections).
type Handler interface {
// TagRPC can attach some information to the given context.
- // The returned context is used in the rest lifetime of the RPC.
+ // The context used for the rest of the RPC's lifetime will be derived from
+ // the returned context.
TagRPC(context.Context, *RPCTagInfo) context.Context
// HandleRPC processes the RPC stats.
HandleRPC(context.Context, RPCStats)
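
To make the interface concrete, a bare-bones logging sketch follows; TagConn and HandleConn are assumed from the rest of the package (they are not shown in this hunk), and the x/net/context import matches the vintage of this vendored copy:

package statsdemo

import (
	"log"

	"golang.org/x/net/context"
	"google.golang.org/grpc/stats"
)

// logHandler prints a line per tag/stats callback.
type logHandler struct{}

func (logHandler) TagRPC(ctx context.Context, info *stats.RPCTagInfo) context.Context {
	// FailFast is only meaningful when the handler runs on the client side.
	log.Printf("rpc %s failfast=%v", info.FullMethodName, info.FailFast)
	return ctx
}

func (logHandler) HandleRPC(ctx context.Context, s stats.RPCStats) {
	log.Printf("rpc stats: client=%v type=%T", s.IsClient(), s)
}

func (logHandler) TagConn(ctx context.Context, info *stats.ConnTagInfo) context.Context {
	log.Printf("conn %v -> %v", info.LocalAddr, info.RemoteAddr)
	return ctx
}

func (logHandler) HandleConn(ctx context.Context, s stats.ConnStats) {
	log.Printf("conn stats: client=%v type=%T", s.IsClient(), s)
}

// Compile-time check against the interface above.
var _ stats.Handler = logHandler{}

Such a handler would typically be installed with the grpc.WithStatsHandler dial option or the grpc.StatsHandler server option.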
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/stats.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/stats.go
index a82448a6..338a3a75 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/stats.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stats/stats.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -49,7 +34,7 @@ type RPCStats interface {
}
// Begin contains stats when an RPC begins.
-// FailFast are only valid if Client is true.
+// FailFast is only valid if this Begin is from client side.
type Begin struct {
// Client is true if this Begin is from client side.
Client bool
@@ -59,7 +44,7 @@ type Begin struct {
FailFast bool
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *Begin) IsClient() bool { return s.Client }
func (s *Begin) isRPCStats() {}
@@ -80,19 +65,19 @@ type InPayload struct {
RecvTime time.Time
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InPayload) IsClient() bool { return s.Client }
func (s *InPayload) isRPCStats() {}
// InHeader contains stats when a header is received.
-// FullMethod, addresses and Compression are only valid if Client is false.
type InHeader struct {
// Client is true if this InHeader is from client side.
Client bool
// WireLength is the wire length of header.
WireLength int
+ // The following fields are valid only if Client is false.
// FullMethod is the full RPC method string, i.e., /package.service/method.
FullMethod string
// RemoteAddr is the remote address of the corresponding connection.
@@ -103,7 +88,7 @@ type InHeader struct {
Compression string
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InHeader) IsClient() bool { return s.Client }
func (s *InHeader) isRPCStats() {}
@@ -116,7 +101,7 @@ type InTrailer struct {
WireLength int
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if the stats information is from client side.
func (s *InTrailer) IsClient() bool { return s.Client }
func (s *InTrailer) isRPCStats() {}
@@ -137,19 +122,19 @@ type OutPayload struct {
SentTime time.Time
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutPayload) IsClient() bool { return s.Client }
func (s *OutPayload) isRPCStats() {}
// OutHeader contains stats when a header is sent.
-// FullMethod, addresses and Compression are only valid if Client is true.
type OutHeader struct {
// Client is true if this OutHeader is from client side.
Client bool
// WireLength is the wire length of header.
WireLength int
+ // The following fields are valid only if Client is true.
// FullMethod is the full RPC method string, i.e., /package.service/method.
FullMethod string
// RemoteAddr is the remote address of the corresponding connection.
@@ -160,7 +145,7 @@ type OutHeader struct {
Compression string
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutHeader) IsClient() bool { return s.Client }
func (s *OutHeader) isRPCStats() {}
@@ -173,7 +158,7 @@ type OutTrailer struct {
WireLength int
}
-// IsClient indicates if this is from client side.
+// IsClient indicates if this stats information is from client side.
func (s *OutTrailer) IsClient() bool { return s.Client }
func (s *OutTrailer) isRPCStats() {}
@@ -184,7 +169,9 @@ type End struct {
Client bool
// EndTime is the time when the RPC ends.
EndTime time.Time
- // Error is the error just happened. Its type is gRPC error.
+ // Error is the error the RPC ended with. It is an error generated from
+ // status.Status and can be converted back to status.Status using
+ // status.FromError if non-nil.
Error error
}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/status/status.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/status/status.go
new file mode 100644
index 00000000..871dc4b3
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/status/status.go
@@ -0,0 +1,168 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// Package status implements errors returned by gRPC. These errors are
+// serialized and transmitted on the wire between server and client, and allow
+// for additional data to be transmitted via the Details field in the status
+// proto. gRPC service handlers should return an error created by this
+// package, and gRPC clients should expect a corresponding error to be
+// returned from the RPC call.
+//
+// This package upholds the invariants that a non-nil error may not
+// contain an OK code, and an OK code must result in a nil error.
+package status
+
+import (
+ "errors"
+ "fmt"
+
+ "github.com/golang/protobuf/proto"
+ "github.com/golang/protobuf/ptypes"
+ spb "google.golang.org/genproto/googleapis/rpc/status"
+ "google.golang.org/grpc/codes"
+)
+
+// statusError is an alias of a status proto. It implements error and Status,
+// and a nil statusError should never be returned by this package.
+type statusError spb.Status
+
+func (se *statusError) Error() string {
+ p := (*spb.Status)(se)
+ return fmt.Sprintf("rpc error: code = %s desc = %s", codes.Code(p.GetCode()), p.GetMessage())
+}
+
+func (se *statusError) status() *Status {
+ return &Status{s: (*spb.Status)(se)}
+}
+
+// Status represents an RPC status code, message, and details. It is immutable
+// and should be created with New, Newf, or FromProto.
+type Status struct {
+ s *spb.Status
+}
+
+// Code returns the status code contained in s.
+func (s *Status) Code() codes.Code {
+ if s == nil || s.s == nil {
+ return codes.OK
+ }
+ return codes.Code(s.s.Code)
+}
+
+// Message returns the message contained in s.
+func (s *Status) Message() string {
+ if s == nil || s.s == nil {
+ return ""
+ }
+ return s.s.Message
+}
+
+// Proto returns s's status as an spb.Status proto message.
+func (s *Status) Proto() *spb.Status {
+ if s == nil {
+ return nil
+ }
+ return proto.Clone(s.s).(*spb.Status)
+}
+
+// Err returns an immutable error representing s; returns nil if s.Code() is
+// OK.
+func (s *Status) Err() error {
+ if s.Code() == codes.OK {
+ return nil
+ }
+ return (*statusError)(s.s)
+}
+
+// New returns a Status representing c and msg.
+func New(c codes.Code, msg string) *Status {
+ return &Status{s: &spb.Status{Code: int32(c), Message: msg}}
+}
+
+// Newf returns New(c, fmt.Sprintf(format, a...)).
+func Newf(c codes.Code, format string, a ...interface{}) *Status {
+ return New(c, fmt.Sprintf(format, a...))
+}
+
+// Error returns an error representing c and msg. If c is OK, returns nil.
+func Error(c codes.Code, msg string) error {
+ return New(c, msg).Err()
+}
+
+// Errorf returns Error(c, fmt.Sprintf(format, a...)).
+func Errorf(c codes.Code, format string, a ...interface{}) error {
+ return Error(c, fmt.Sprintf(format, a...))
+}
+
+// ErrorProto returns an error representing s. If s.Code is OK, returns nil.
+func ErrorProto(s *spb.Status) error {
+ return FromProto(s).Err()
+}
+
+// FromProto returns a Status representing s.
+func FromProto(s *spb.Status) *Status {
+ return &Status{s: proto.Clone(s).(*spb.Status)}
+}
+
+// FromError returns a Status representing err if it was produced from this
+// package; otherwise it returns nil, false.
+func FromError(err error) (s *Status, ok bool) {
+ if err == nil {
+ return &Status{s: &spb.Status{Code: int32(codes.OK)}}, true
+ }
+ if s, ok := err.(*statusError); ok {
+ return s.status(), true
+ }
+ return nil, false
+}
+
+// WithDetails returns a new status with the provided details messages appended to the status.
+// If any errors are encountered, it returns nil and the first error encountered.
+func (s *Status) WithDetails(details ...proto.Message) (*Status, error) {
+ if s.Code() == codes.OK {
+ return nil, errors.New("no error details for status with code OK")
+ }
+ // s.Code() != OK implies that s.Proto() != nil.
+ p := s.Proto()
+ for _, detail := range details {
+ any, err := ptypes.MarshalAny(detail)
+ if err != nil {
+ return nil, err
+ }
+ p.Details = append(p.Details, any)
+ }
+ return &Status{s: p}, nil
+}
+
+// Details returns a slice of details messages attached to the status.
+// If a detail cannot be decoded, the error is returned in place of the detail.
+func (s *Status) Details() []interface{} {
+ if s == nil || s.s == nil {
+ return nil
+ }
+ details := make([]interface{}, 0, len(s.s.Details))
+ for _, any := range s.s.Details {
+ detail := &ptypes.DynamicAny{}
+ if err := ptypes.UnmarshalAny(any, detail); err != nil {
+ details = append(details, err)
+ continue
+ }
+ details = append(details, detail.Message)
+ }
+ return details
+}
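
Taken together, a small usage sketch of the new package as its doc comment describes (the lookup function and its message are invented for illustration):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func lookup(id string) error {
	if id == "" {
		// Server side: build the error from a code and message.
		return status.Errorf(codes.NotFound, "no record for %q", id)
	}
	return nil
}

func main() {
	err := lookup("")
	// Client side: recover the *status.Status from the error.
	if st, ok := status.FromError(err); ok {
		fmt.Println(st.Code(), st.Message())
	}
}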
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stream.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stream.go
index bb468dc3..1c621ba8 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stream.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/stream.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -37,7 +22,6 @@ import (
"bytes"
"errors"
"io"
- "math"
"sync"
"time"
@@ -45,7 +29,9 @@ import (
"golang.org/x/net/trace"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
+ "google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/transport"
)
@@ -73,11 +59,17 @@ type Stream interface {
// side. On server side, it simply returns the error to the caller.
// SendMsg is called by generated code. Also Users can call SendMsg
// directly when it is really needed in their use cases.
+ // It's safe to have a goroutine calling SendMsg and another goroutine calling
+ // RecvMsg on the same stream at the same time.
+ // But it is not safe to call SendMsg on the same stream from different goroutines.
SendMsg(m interface{}) error
// RecvMsg blocks until it receives a message or the stream is
// done. On client side, it returns io.EOF when the stream is done. On
// any other error, it aborts the stream and returns an RPC status. On
// server side, it simply returns the error to the caller.
+ // It's safe to have a goroutine calling SendMsg and another goroutine calling
+ // RecvMsg on the same stream at the same time.
+ // But it is not safe to call RecvMsg on the same stream from different goroutines.
RecvMsg(m interface{}) error
}
@@ -93,6 +85,11 @@ type ClientStream interface {
// CloseSend closes the send direction of the stream. It closes the stream
// when non-nil error is met.
CloseSend() error
+ // Stream.SendMsg() may return a non-nil error when something goes wrong while
+ // sending the request. The returned error indicates the status of that send, not
+ // the final status of the RPC.
+ // Always call Stream.RecvMsg() to get the final status if you care about the
+ // status of the RPC.
Stream
}
@@ -113,25 +110,39 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
cancel context.CancelFunc
)
c := defaultCallInfo
- if mc, ok := cc.getMethodConfig(method); ok {
- c.failFast = !mc.WaitForReady
- if mc.Timeout > 0 {
- ctx, cancel = context.WithTimeout(ctx, mc.Timeout)
- }
+ mc := cc.GetMethodConfig(method)
+ if mc.WaitForReady != nil {
+ c.failFast = !*mc.WaitForReady
+ }
+
+ if mc.Timeout != nil {
+ ctx, cancel = context.WithTimeout(ctx, *mc.Timeout)
}
+
+ opts = append(cc.dopts.callOptions, opts...)
for _, o := range opts {
if err := o.before(&c); err != nil {
return nil, toRPCErr(err)
}
}
+ c.maxSendMessageSize = getMaxSize(mc.MaxReqSize, c.maxSendMessageSize, defaultClientMaxSendMessageSize)
+ c.maxReceiveMessageSize = getMaxSize(mc.MaxRespSize, c.maxReceiveMessageSize, defaultClientMaxReceiveMessageSize)
+
callHdr := &transport.CallHdr{
Host: cc.authority,
Method: method,
- Flush: desc.ServerStreams && desc.ClientStreams,
+ // If it's not client streaming, we should already have the request to be sent,
+ // so we don't flush the header.
+ // If it's client streaming, the user may never send a request, or may not send
+ // one any time soon, so we ask the transport to flush the header.
+ Flush: desc.ClientStreams,
}
if cc.dopts.cp != nil {
callHdr.SendCompress = cc.dopts.cp.Type()
}
+ if c.creds != nil {
+ callHdr.Creds = c.creds
+ }
var trInfo traceInfo
if EnableTracing {
trInfo.tr = trace.New("grpc.Sent."+methodFamily(method), method)
@@ -151,26 +162,27 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
}
}()
}
+ ctx = newContextWithRPCInfo(ctx)
sh := cc.dopts.copts.StatsHandler
if sh != nil {
- ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method})
+ ctx = sh.TagRPC(ctx, &stats.RPCTagInfo{FullMethodName: method, FailFast: c.failFast})
begin := &stats.Begin{
Client: true,
BeginTime: time.Now(),
FailFast: c.failFast,
}
sh.HandleRPC(ctx, begin)
- }
- defer func() {
- if err != nil && sh != nil {
- // Only handle end stats if err != nil.
- end := &stats.End{
- Client: true,
- Error: err,
+ defer func() {
+ if err != nil {
+ // Only handle end stats if err != nil.
+ end := &stats.End{
+ Client: true,
+ Error: err,
+ }
+ sh.HandleRPC(ctx, end)
}
- sh.HandleRPC(ctx, end)
- }
- }()
+ }()
+ }
gopts := BalancerGetOptions{
BlockingWait: !c.failFast,
}
@@ -178,7 +190,7 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
t, put, err = cc.getTransport(ctx, gopts)
if err != nil {
// TODO(zhaoq): Probably revisit the error handling.
- if _, ok := err.(*rpcError); ok {
+ if _, ok := status.FromError(err); ok {
return nil, err
}
if err == errConnClosing || err == errConnUnavailable {
@@ -193,20 +205,27 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
s, err = t.NewStream(ctx, callHdr)
if err != nil {
+ if _, ok := err.(transport.ConnectionError); ok && put != nil {
+ // If the error is a connection error, the transport was sending data on the
+ // wire and we cannot be sure whether anything reached it.
+ // If the error is not a connection error, we are sure nothing has been sent.
+ updateRPCInfoInContext(ctx, rpcInfo{bytesSent: true, bytesReceived: false})
+ }
if put != nil {
put()
put = nil
}
- if _, ok := err.(transport.ConnectionError); ok || err == transport.ErrStreamDrain {
- if c.failFast {
- return nil, toRPCErr(err)
- }
+ if _, ok := err.(transport.ConnectionError); (ok || err == transport.ErrStreamDrain) && !c.failFast {
continue
}
return nil, toRPCErr(err)
}
break
}
+ // Set callInfo.peer object from stream's context.
+ if peer, ok := peer.FromContext(s.Context()); ok {
+ c.peer = peer
+ }
cs := &clientStream{
opts: opts,
c: c,
@@ -236,14 +255,13 @@ func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, meth
select {
case <-t.Error():
// Incur transport error, simply exit.
+ case <-cc.ctx.Done():
+ cs.finish(ErrClientConnClosing)
+ cs.closeTransportStream(ErrClientConnClosing)
case <-s.Done():
// TODO: The trace of the RPC is terminated here when there is no pending
// I/O, which is probably not the optimal solution.
- if s.StatusCode() == codes.OK {
- cs.finish(nil)
- } else {
- cs.finish(Errorf(s.StatusCode(), "%s", s.StatusDesc()))
- }
+ cs.finish(s.Status().Err())
cs.closeTransportStream(nil)
case <-s.GoAway():
cs.finish(errConnDrain)
@@ -273,9 +291,10 @@ type clientStream struct {
tracing bool // set to EnableTracing when the clientStream is created.
- mu sync.Mutex
- put func()
- closed bool
+ mu sync.Mutex
+ put func()
+ closed bool
+ finished bool
// trInfo.tr is set when the clientStream is created (if EnableTracing is true),
// and is set to nil when the clientStream's finish method is called.
trInfo traceInfo
@@ -350,7 +369,13 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
}
}()
if err != nil {
- return Errorf(codes.Internal, "grpc: %v", err)
+ return err
+ }
+ if cs.c.maxSendMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxSendMessageSize field uninitialized(nil)")
+ }
+ if len(out) > *cs.c.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(out), *cs.c.maxSendMessageSize)
}
err = cs.t.Write(cs.s, out, &transport.Options{Last: false})
if err == nil && outPayload != nil {
@@ -361,28 +386,16 @@ func (cs *clientStream) SendMsg(m interface{}) (err error) {
}
func (cs *clientStream) RecvMsg(m interface{}) (err error) {
- defer func() {
- if err != nil && cs.statsHandler != nil {
- // Only generate End if err != nil.
- // If err == nil, it's not the last RecvMsg.
- // The last RecvMsg gets either an RPC error or io.EOF.
- end := &stats.End{
- Client: true,
- EndTime: time.Now(),
- }
- if err != io.EOF {
- end.Error = toRPCErr(err)
- }
- cs.statsHandler.HandleRPC(cs.statsCtx, end)
- }
- }()
var inPayload *stats.InPayload
if cs.statsHandler != nil {
inPayload = &stats.InPayload{
Client: true,
}
}
- err = recv(cs.p, cs.codec, cs.s, cs.dc, m, math.MaxInt32, inPayload)
+ if cs.c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, inPayload)
defer func() {
// err != nil indicates the termination of the stream.
if err != nil {
@@ -405,17 +418,20 @@ func (cs *clientStream) RecvMsg(m interface{}) (err error) {
}
// Special handling for client streaming rpc.
// This recv expects EOF or errors, so we don't collect inPayload.
- err = recv(cs.p, cs.codec, cs.s, cs.dc, m, math.MaxInt32, nil)
+ if cs.c.maxReceiveMessageSize == nil {
+ return Errorf(codes.Internal, "callInfo maxReceiveMessageSize field uninitialized(nil)")
+ }
+ err = recv(cs.p, cs.codec, cs.s, cs.dc, m, *cs.c.maxReceiveMessageSize, nil)
cs.closeTransportStream(err)
if err == nil {
return toRPCErr(errors.New("grpc: client streaming protocol violation: get <nil>, want <EOF>"))
}
if err == io.EOF {
- if cs.s.StatusCode() == codes.OK {
- cs.finish(err)
- return nil
+ if se := cs.s.Status().Err(); se != nil {
+ return se
}
- return Errorf(cs.s.StatusCode(), "%s", cs.s.StatusDesc())
+ cs.finish(err)
+ return nil
}
return toRPCErr(err)
}
@@ -423,11 +439,11 @@ func (cs *clientStream) RecvMsg(m interface{}) (err error) {
cs.closeTransportStream(err)
}
if err == io.EOF {
- if cs.s.StatusCode() == codes.OK {
- // Returns io.EOF to indicate the end of the stream.
- return
+ if statusErr := cs.s.Status().Err(); statusErr != nil {
+ return statusErr
}
- return Errorf(cs.s.StatusCode(), "%s", cs.s.StatusDesc())
+ // Returns io.EOF to indicate the end of the stream.
+ return
}
return toRPCErr(err)
}
@@ -461,20 +477,39 @@ func (cs *clientStream) closeTransportStream(err error) {
}
func (cs *clientStream) finish(err error) {
+ cs.mu.Lock()
+ defer cs.mu.Unlock()
+ if cs.finished {
+ return
+ }
+ cs.finished = true
defer func() {
if cs.cancel != nil {
cs.cancel()
}
}()
- cs.mu.Lock()
- defer cs.mu.Unlock()
for _, o := range cs.opts {
o.after(&cs.c)
}
if cs.put != nil {
+ updateRPCInfoInContext(cs.s.Context(), rpcInfo{
+ bytesSent: cs.s.BytesSent(),
+ bytesReceived: cs.s.BytesReceived(),
+ })
cs.put()
cs.put = nil
}
+ if cs.statsHandler != nil {
+ end := &stats.End{
+ Client: true,
+ EndTime: time.Now(),
+ }
+ if err != io.EOF {
+ // end.Error is nil if the RPC finished successfully.
+ end.Error = toRPCErr(err)
+ }
+ cs.statsHandler.HandleRPC(cs.statsCtx, end)
+ }
if !cs.tracing {
return
}
@@ -511,17 +546,16 @@ type ServerStream interface {
// serverStream implements a server side Stream.
type serverStream struct {
- t transport.ServerTransport
- s *transport.Stream
- p *parser
- codec Codec
- cp Compressor
- dc Decompressor
- cbuf *bytes.Buffer
- maxMsgSize int
- statusCode codes.Code
- statusDesc string
- trInfo *traceInfo
+ t transport.ServerTransport
+ s *transport.Stream
+ p *parser
+ codec Codec
+ cp Compressor
+ dc Decompressor
+ cbuf *bytes.Buffer
+ maxReceiveMessageSize int
+ maxSendMessageSize int
+ trInfo *traceInfo
statsHandler stats.Handler
@@ -577,9 +611,11 @@ func (ss *serverStream) SendMsg(m interface{}) (err error) {
}
}()
if err != nil {
- err = Errorf(codes.Internal, "grpc: %v", err)
return err
}
+ if len(out) > ss.maxSendMessageSize {
+ return Errorf(codes.ResourceExhausted, "trying to send message larger than max (%d vs. %d)", len(out), ss.maxSendMessageSize)
+ }
if err := ss.t.Write(ss.s, out, &transport.Options{Last: false}); err != nil {
return toRPCErr(err)
}
@@ -609,7 +645,7 @@ func (ss *serverStream) RecvMsg(m interface{}) (err error) {
if ss.statsHandler != nil {
inPayload = &stats.InPayload{}
}
- if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxMsgSize, inPayload); err != nil {
+ if err := recv(ss.p, ss.codec, ss.s, ss.dc, m, ss.maxReceiveMessageSize, inPayload); err != nil {
if err == io.EOF {
return err
}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/tap/tap.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/tap/tap.go
index 0f366476..decb6786 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/tap/tap.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/tap/tap.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2016, Google Inc.
- * All rights reserved.
+ * Copyright 2016 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/trace.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/trace.go
index f6747e1d..b419c9e3 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/trace.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/trace.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2015, Google Inc.
- * All rights reserved.
+ * Copyright 2015 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/bdp_estimator.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/bdp_estimator.go
new file mode 100644
index 00000000..667edb80
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/bdp_estimator.go
@@ -0,0 +1,143 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package transport
+
+import (
+ "sync"
+ "time"
+)
+
+const (
+ // bdpLimit is the maximum value the flow control windows
+ // will be increased to.
+ bdpLimit = (1 << 20) * 4
+ // alpha is a constant factor used to keep a moving average
+ // of RTTs.
+ alpha = 0.9
+ // If the current bdp sample is greater than or equal to
+ // beta times our estimated bdp, and the current bandwidth
+ // sample is the maximum bandwidth observed so far, we
+ // increase our bdp estimate by a factor of gamma.
+ beta = 0.66
+ // To keep our bdp smaller than or equal to twice the real BDP,
+ // we should multiply the current sample by 4/3; however, to round
+ // things out we use 2 as the multiplication factor.
+ gamma = 2
+)
+
+var (
+ // Adding arbitrary data to ping so that its ack can be
+ // identified.
+ // Easter-egg: what does the ping message say?
+ bdpPing = &ping{data: [8]byte{2, 4, 16, 16, 9, 14, 7, 7}}
+)
+
+type bdpEstimator struct {
+ // sentAt is the time when the ping was sent.
+ sentAt time.Time
+
+ mu sync.Mutex
+ // bdp is the current bdp estimate.
+ bdp uint32
+ // sample is the number of bytes received in one measurement cycle.
+ sample uint32
+ // bwMax is the maximum bandwidth noted so far (bytes/sec).
+ bwMax float64
+ // bool to keep track of the beginning of a new measurement cycle.
+ isSent bool
+ // Callback to update the window sizes.
+ updateFlowControl func(n uint32)
+ // sampleCount is the number of samples taken so far.
+ sampleCount uint64
+ // round trip time (seconds)
+ rtt float64
+}
+
+// timesnap registers the time the bdp ping was sent out so that
+// the network rtt can be calculated when its ack is received.
+// It is called (by the controller) when the bdpPing is
+// being written on the wire.
+func (b *bdpEstimator) timesnap(d [8]byte) {
+ if bdpPing.data != d {
+ return
+ }
+ b.sentAt = time.Now()
+}
+
+// add adds bytes to the current sample for calculating bdp.
+// It returns true only if a ping must be sent. This can be used
+// by the caller (handleData) to decide whether to batch
+// a window update with it.
+func (b *bdpEstimator) add(n uint32) bool {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if b.bdp == bdpLimit {
+ return false
+ }
+ if !b.isSent {
+ b.isSent = true
+ b.sample = n
+ b.sentAt = time.Time{}
+ b.sampleCount++
+ return true
+ }
+ b.sample += n
+ return false
+}
+
+// calculate is called when an ack for a bdp ping is received.
+// Here we calculate the current bdp and bandwidth sample and
+// decide if the flow control windows should go up.
+func (b *bdpEstimator) calculate(d [8]byte) {
+ // Check if the ping acked for was the bdp ping.
+ if bdpPing.data != d {
+ return
+ }
+ b.mu.Lock()
+ rttSample := time.Since(b.sentAt).Seconds()
+ if b.sampleCount < 10 {
+ // Bootstrap rtt with an average of the first 10 rtt samples.
+ b.rtt += (rttSample - b.rtt) / float64(b.sampleCount)
+ } else {
+ // Give more weight to the recent past.
+ b.rtt += (rttSample - b.rtt) * float64(alpha)
+ }
+ b.isSent = false
+ // The number of bytes accumulated so far in the sample is smaller
+ // than or equal to 1.5 times the real BDP on a saturated connection.
+ bwCurrent := float64(b.sample) / (b.rtt * float64(1.5))
+ if bwCurrent > b.bwMax {
+ b.bwMax = bwCurrent
+ }
+ // If the current sample (which is smaller than or equal to 1.5 times the real BDP) is
+ // greater than or equal to 2/3rds of our perceived bdp, AND this is the maximum bandwidth
+ // seen so far, we should update our perception of the network BDP.
+ if float64(b.sample) >= beta*float64(b.bdp) && bwCurrent == b.bwMax && b.bdp != bdpLimit {
+ sampleFloat := float64(b.sample)
+ b.bdp = uint32(gamma * sampleFloat)
+ if b.bdp > bdpLimit {
+ b.bdp = bdpLimit
+ }
+ bdp := b.bdp
+ b.mu.Unlock()
+ b.updateFlowControl(bdp)
+ return
+ }
+ b.mu.Unlock()
+}
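
The rtt update in calculate is a bootstrap average followed by an exponentially weighted moving average; a standalone sketch of the same arithmetic (sample values invented):

package main

import "fmt"

const alpha = 0.9 // weight given to the most recent rtt sample

func main() {
	var (
		rtt         float64
		sampleCount uint64
	)
	for _, sample := range []float64{0.100, 0.120, 0.080} {
		sampleCount++
		if sampleCount < 10 {
			// Bootstrap with a plain running average of the first samples.
			rtt += (sample - rtt) / float64(sampleCount)
		} else {
			// Afterwards, weight the recent past more heavily.
			rtt += (sample - rtt) * alpha
		}
		fmt.Printf("sample=%.3f rtt=%.4f\n", sample, rtt)
	}
}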
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/control.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/control.go
index 2586cba4..501eb03c 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/control.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/control.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -35,7 +20,9 @@ package transport
import (
"fmt"
+ "math"
"sync"
+ "time"
"golang.org/x/net/http2"
)
@@ -44,8 +31,19 @@ const (
// The default value of flow control window size in HTTP2 spec.
defaultWindowSize = 65535
// The initial window size for flow control.
- initialWindowSize = defaultWindowSize // for an RPC
- initialConnWindowSize = defaultWindowSize * 16 // for a connection
+ initialWindowSize = defaultWindowSize // for an RPC
+ infinity = time.Duration(math.MaxInt64)
+ defaultClientKeepaliveTime = infinity
+ defaultClientKeepaliveTimeout = time.Duration(20 * time.Second)
+ defaultMaxStreamsClient = 100
+ defaultMaxConnectionIdle = infinity
+ defaultMaxConnectionAge = infinity
+ defaultMaxConnectionAgeGrace = infinity
+ defaultServerKeepaliveTime = time.Duration(2 * time.Hour)
+ defaultServerKeepaliveTimeout = time.Duration(20 * time.Second)
+ defaultKeepalivePolicyMinTime = time.Duration(5 * time.Minute)
+ // max window limit set by the HTTP/2 spec.
+ maxWindowSize = math.MaxInt32
)
// The following defines various control items which could flow through
@@ -54,6 +52,7 @@ const (
type windowUpdate struct {
streamID uint32
increment uint32
+ flush bool
}
func (*windowUpdate) item() {}
@@ -73,6 +72,10 @@ type resetStream struct {
func (*resetStream) item() {}
type goAway struct {
+ code http2.ErrCode
+ debugData []byte
+ headsUp bool
+ closeConn bool
}
func (*goAway) item() {}
@@ -143,16 +146,59 @@ func (qb *quotaPool) acquire() <-chan int {
// inFlow deals with inbound flow control
type inFlow struct {
+ mu sync.Mutex
// The inbound flow control limit for pending data.
limit uint32
-
- mu sync.Mutex
// pendingData is the overall data which have been received but not been
// consumed by applications.
pendingData uint32
// The amount of data the application has consumed but grpc has not sent
// window update for them. Used to reduce window update frequency.
pendingUpdate uint32
+ // delta is the extra window update given by the receiver when an application
+ // is reading data larger than the inFlow limit.
+ delta uint32
+}
+
+// newLimit updates the inflow window to a new value n.
+// It assumes that n is always greater than the old limit.
+func (f *inFlow) newLimit(n uint32) uint32 {
+ f.mu.Lock()
+ defer f.mu.Unlock()
+ d := n - f.limit
+ f.limit = n
+ return d
+}
+
+func (f *inFlow) maybeAdjust(n uint32) uint32 {
+ if n > uint32(math.MaxInt32) {
+ n = uint32(math.MaxInt32)
+ }
+ f.mu.Lock()
+ defer f.mu.Unlock()
+ // estSenderQuota is the receiver's view of the maximum number of bytes the sender
+ // can send without a window update.
+ estSenderQuota := int32(f.limit - (f.pendingData + f.pendingUpdate))
+ // estUntransmittedData is the maximum number of bytes the sender might not have put
+ // on the wire yet. A value of 0 or less means that we have already received all the
+ // bytes the application is requesting to read, or more.
+ estUntransmittedData := int32(n - f.pendingData) // Casting into int32 since it could be negative.
+ // This implies that unless we send a window update, the sender won't be able to send all the bytes
+ // for this message. Therefore we must send an update over the limit since there's an active read
+ // request from the application.
+ if estUntransmittedData > estSenderQuota {
+ // The sender's window shouldn't exceed 2^31 - 1, as specified in the HTTP/2 spec.
+ if f.limit+n > maxWindowSize {
+ f.delta = maxWindowSize - f.limit
+ } else {
+ // Send a window update for the whole message and not just the difference between
+ // estUntransmittedData and estSenderQuota. This will be helpful in case the message
+ // is padded; we will fall back on the currently available window (at least 1/4th of the limit).
+ f.delta = n
+ }
+ return f.delta
+ }
+ return 0
}
// onData is invoked when some data frame is received. It updates pendingData.
@@ -160,7 +206,7 @@ func (f *inFlow) onData(n uint32) error {
f.mu.Lock()
defer f.mu.Unlock()
f.pendingData += n
- if f.pendingData+f.pendingUpdate > f.limit {
+ if f.pendingData+f.pendingUpdate > f.limit+f.delta {
return fmt.Errorf("received %d-bytes data exceeding the limit %d bytes", f.pendingData+f.pendingUpdate, f.limit)
}
return nil
@@ -175,6 +221,13 @@ func (f *inFlow) onRead(n uint32) uint32 {
return 0
}
f.pendingData -= n
+ if n > f.delta {
+ n -= f.delta
+ f.delta = 0
+ } else {
+ f.delta -= n
+ n = 0
+ }
f.pendingUpdate += n
if f.pendingUpdate >= f.limit/4 {
wu := f.pendingUpdate
@@ -184,10 +237,10 @@ func (f *inFlow) onRead(n uint32) uint32 {
return 0
}
-func (f *inFlow) resetPendingData() uint32 {
+func (f *inFlow) resetPendingUpdate() uint32 {
f.mu.Lock()
defer f.mu.Unlock()
- n := f.pendingData
- f.pendingData = 0
+ n := f.pendingUpdate
+ f.pendingUpdate = 0
return n
}
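
A worked numeric sketch of the quota math used by maybeAdjust above, with invented values for a 64KB window:

package main

import "fmt"

const maxWindowSize = 1<<31 - 1

func main() {
	var (
		limit         uint32 = 65535 // inbound flow control limit
		pendingData   uint32 = 16000 // received but not yet consumed
		pendingUpdate uint32 = 8000  // consumed but no window update sent
		n             uint32 = 90000 // size of the message being read
	)
	// Receiver's view of what the sender may still send without an update.
	estSenderQuota := int32(limit - (pendingData + pendingUpdate))
	// Bytes of the in-flight message that may not be on the wire yet.
	estUntransmittedData := int32(n - pendingData)
	fmt.Println(estSenderQuota, estUntransmittedData) // 41535 74000
	if estUntransmittedData > estSenderQuota {
		// The sender would stall mid-message, so grant an extra delta.
		delta := n
		if limit+n > maxWindowSize {
			delta = maxWindowSize - limit
		}
		fmt.Println("send window update with delta", delta) // 90000
	}
}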
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go16.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go16.go
index ee1c46ba..7cffee11 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go16.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go16.go
@@ -1,34 +1,20 @@
// +build go1.6,!go1.7
/*
- * Copyright 2016, Google Inc.
- * All rights reserved.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Copyright 2016 gRPC authors.
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -37,6 +23,8 @@ package transport
import (
"net"
+ "google.golang.org/grpc/codes"
+
"golang.org/x/net/context"
)
@@ -44,3 +32,14 @@ import (
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
return (&net.Dialer{Cancel: ctx.Done()}).Dial(network, address)
}
+
+// ContextErr converts an error from the context package into a StreamError.
+func ContextErr(err error) StreamError {
+ switch err {
+ case context.DeadlineExceeded:
+ return streamErrorf(codes.DeadlineExceeded, "%v", err)
+ case context.Canceled:
+ return streamErrorf(codes.Canceled, "%v", err)
+ }
+ return streamErrorf(codes.Internal, "Unexpected error from context package: %v", err)
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go17.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go17.go
index 356f13ff..2464e69f 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go17.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/go17.go
@@ -1,46 +1,46 @@
// +build go1.7
/*
- * Copyright 2016, Google Inc.
- * All rights reserved.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Copyright 2016 gRPC authors.
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
package transport
import (
+ "context"
"net"
- "golang.org/x/net/context"
+ "google.golang.org/grpc/codes"
+
+ netctx "golang.org/x/net/context"
)
// dialContext connects to the address on the named network.
func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
return (&net.Dialer{}).DialContext(ctx, network, address)
}
+
+// ContextErr converts an error from the context package into a StreamError.
+func ContextErr(err error) StreamError {
+ switch err {
+ case context.DeadlineExceeded, netctx.DeadlineExceeded:
+ return streamErrorf(codes.DeadlineExceeded, "%v", err)
+ case context.Canceled, netctx.Canceled:
+ return streamErrorf(codes.Canceled, "%v", err)
+ }
+ return streamErrorf(codes.Internal, "Unexpected error from context package: %v", err)
+}
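
Both variants implement the same mapping; a small sketch of the observable behavior, assuming the transport package is importable at this vintage (as the vendor path suggests):

package main

import (
	"fmt"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc/transport"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	<-ctx.Done() // let the deadline expire

	// context.DeadlineExceeded maps to codes.DeadlineExceeded,
	// context.Canceled to codes.Canceled.
	fmt.Println(transport.ContextErr(ctx.Err()))
}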
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/handler_server.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/handler_server.go
index 10b6dc0b..27372b50 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/handler_server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/handler_server.go
@@ -1,32 +1,18 @@
/*
- * Copyright 2016, Google Inc.
- * All rights reserved.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Copyright 2016 gRPC authors.
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -53,6 +39,7 @@ import (
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
+ "google.golang.org/grpc/status"
)
// NewServerHandlerTransport returns a ServerTransport handling gRPC
@@ -101,14 +88,9 @@ func NewServerHandlerTransport(w http.ResponseWriter, r *http.Request) (ServerTr
continue
}
for _, v := range vv {
- if k == "user-agent" {
- // user-agent is special. Copying logic of http_util.go.
- if i := strings.LastIndex(v, " "); i == -1 {
- // There is no application user agent string being set
- continue
- } else {
- v = v[:i]
- }
+ v, err := decodeMetadataHeader(k, v)
+ if err != nil {
+ return nil, streamErrorf(codes.InvalidArgument, "malformed binary metadata: %v", err)
}
metakv = append(metakv, k, v)
}
@@ -174,15 +156,22 @@ func (a strAddr) String() string { return string(a) }
// do runs fn in the ServeHTTP goroutine.
func (ht *serverHandlerTransport) do(fn func()) error {
+ // Avoid a panic from writing to a closed channel. Imperfect, but maybe good enough.
select {
- case ht.writes <- fn:
- return nil
case <-ht.closedCh:
return ErrConnClosing
+ default:
+ select {
+ case ht.writes <- fn:
+ return nil
+ case <-ht.closedCh:
+ return ErrConnClosing
+ }
+
}
}
-func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error {
+func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) error {
err := ht.do(func() {
ht.writeCommonHeaders(s)
@@ -192,10 +181,13 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code,
ht.rw.(http.Flusher).Flush()
h := ht.rw.Header()
- h.Set("Grpc-Status", fmt.Sprintf("%d", statusCode))
- if statusDesc != "" {
- h.Set("Grpc-Message", encodeGrpcMessage(statusDesc))
+ h.Set("Grpc-Status", fmt.Sprintf("%d", st.Code()))
+ if m := st.Message(); m != "" {
+ h.Set("Grpc-Message", encodeGrpcMessage(m))
}
+
+ // TODO: Support Grpc-Status-Details-Bin
+
if md := s.Trailer(); len(md) > 0 {
for k, vv := range md {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
@@ -203,10 +195,9 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, statusCode codes.Code,
continue
}
for _, v := range vv {
- // http2 ResponseWriter mechanism to
- // send undeclared Trailers after the
- // headers have possibly been written.
- h.Add(http2.TrailerPrefix+k, v)
+ // http2 ResponseWriter mechanism to send undeclared Trailers after
+ // the headers have possibly been written.
+ h.Add(http2.TrailerPrefix+k, encodeMetadataHeader(k, v))
}
}
}
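[Editor's note: WriteStatus above leans on two trailer mechanisms: Grpc-Status and Grpc-Message are declared via the Trailer header before the body is flushed, while per-RPC metadata uses the http2.TrailerPrefix escape hatch for trailers that were never declared. A hedged sketch of both, assuming an HTTP/2-capable ResponseWriter; the handler body and header values are illustrative:]

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
)

func handler(w http.ResponseWriter, r *http.Request) {
	h := w.Header()
	h.Add("Trailer", "Grpc-Status") // declared trailers
	h.Add("Trailer", "Grpc-Message")
	h.Set("Content-Type", "application/grpc")

	w.WriteHeader(http.StatusOK)
	w.(http.Flusher).Flush() // headers are on the wire now

	// Declared trailers are set on the same Header map after the flush.
	h.Set("Grpc-Status", "0")
	h.Set("Grpc-Message", "")
	// An undeclared trailer: the TrailerPrefix tells the http2 layer to
	// emit it in the trailer block even though it was never declared.
	h.Add(http2.TrailerPrefix+"x-trace-id", "abc123")
}

func main() {
	fmt.Println(`register with: http.HandleFunc("/", handler)`)
}
```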
@@ -234,6 +225,7 @@ func (ht *serverHandlerTransport) writeCommonHeaders(s *Stream) {
// and https://golang.org/pkg/net/http/#example_ResponseWriter_trailers
h.Add("Trailer", "Grpc-Status")
h.Add("Trailer", "Grpc-Message")
+ // TODO: Support Grpc-Status-Details-Bin
if s.sendCompress != "" {
h.Set("Grpc-Encoding", s.sendCompress)
@@ -260,6 +252,7 @@ func (ht *serverHandlerTransport) WriteHeader(s *Stream, md metadata.MD) error {
continue
}
for _, v := range vv {
+ v = encodeMetadataHeader(k, v)
h.Add(k, v)
}
}
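[Editor's note: encodeMetadataHeader/decodeMetadataHeader (defined elsewhere in this package, not shown in the diff) implement the gRPC convention that values of metadata keys ending in "-bin" are base64-coded on the wire. A standalone sketch of that convention, assuming unpadded standard base64; the real functions also tolerate padded input:]

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// Keys with a "-bin" suffix carry binary values, which are base64 encoded
// on the wire; everything else passes through untouched.
func encodeMetadataHeader(k, v string) string {
	if strings.HasSuffix(k, "-bin") {
		return base64.RawStdEncoding.EncodeToString([]byte(v))
	}
	return v
}

func decodeMetadataHeader(k, v string) (string, error) {
	if strings.HasSuffix(k, "-bin") {
		b, err := base64.RawStdEncoding.DecodeString(v)
		return string(b), err
	}
	return v, nil
}

func main() {
	enc := encodeMetadataHeader("trace-bin", "\x00\x01\x02")
	dec, _ := decodeMetadataHeader("trace-bin", enc)
	fmt.Printf("wire=%q decoded=%q\n", enc, dec)
}
```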
@@ -300,13 +293,13 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
req := ht.req
s := &Stream{
- id: 0, // irrelevant
- windowHandler: func(int) {}, // nothing
- cancel: cancel,
- buf: newRecvBuffer(),
- st: ht,
- method: req.URL.Path,
- recvCompress: req.Header.Get("grpc-encoding"),
+ id: 0, // irrelevant
+ requestRead: func(int) {},
+ cancel: cancel,
+ buf: newRecvBuffer(),
+ st: ht,
+ method: req.URL.Path,
+ recvCompress: req.Header.Get("grpc-encoding"),
}
pr := &peer.Peer{
Addr: ht.RemoteAddr(),
@@ -314,10 +307,13 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
if req.TLS != nil {
pr.AuthInfo = credentials.TLSInfo{State: *req.TLS}
}
- ctx = metadata.NewContext(ctx, ht.headerMD)
+ ctx = metadata.NewIncomingContext(ctx, ht.headerMD)
ctx = peer.NewContext(ctx, pr)
s.ctx = newContextWithStream(ctx, s)
- s.dec = &recvBufferReader{ctx: s.ctx, recv: s.buf}
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{ctx: s.ctx, recv: s.buf},
+ windowHandler: func(int) {},
+ }
// readerDone is closed when the Body.Read-ing goroutine exits.
readerDone := make(chan struct{})
@@ -329,11 +325,11 @@ func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), trace
for buf := make([]byte, readSize); ; {
n, err := req.Body.Read(buf)
if n > 0 {
- s.buf.put(&recvMsg{data: buf[:n:n]})
+ s.buf.put(recvMsg{data: buf[:n:n]})
buf = buf[n:]
}
if err != nil {
- s.buf.put(&recvMsg{err: mapRecvMsgError(err)})
+ s.buf.put(recvMsg{err: mapRecvMsgError(err)})
return
}
if len(buf) == 0 {
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_client.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_client.go
index 892f8ba6..516ea06a 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_client.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_client.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -35,12 +20,12 @@ package transport
import (
"bytes"
- "fmt"
"io"
"math"
"net"
"strings"
"sync"
+ "sync/atomic"
"time"
"golang.org/x/net/context"
@@ -48,10 +33,11 @@ import (
"golang.org/x/net/http2/hpack"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
- "google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
)
// http2Client implements the ClientTransport interface with HTTP2.
@@ -80,6 +66,8 @@ type http2Client struct {
// goAway is closed to notify the upper layer (i.e., addrConn.transportMonitor)
// that the server sent GoAway on this transport.
goAway chan struct{}
+ // awakenKeepalive is used to wake up keepalive after it has gone dormant.
+ awakenKeepalive chan struct{}
framer *framer
hBuf *bytes.Buffer // the buffer for HPACK encoding
@@ -87,7 +75,7 @@ type http2Client struct {
// controlBuf delivers all the control related tasks (e.g., window
// updates, reset streams, and various settings) to the controller.
- controlBuf *recvBuffer
+ controlBuf *controlBuffer
fc *inFlow
// sendQuotaPool provides flow control to outbound message.
sendQuotaPool *quotaPool
@@ -97,10 +85,22 @@ type http2Client struct {
// The scheme used: https if TLS is on, http otherwise.
scheme string
+ isSecure bool
+
creds []credentials.PerRPCCredentials
+ // Boolean to keep track of reading activity on transport.
+ // 1 is true and 0 is false.
+ activity uint32 // Accessed atomically.
+ kp keepalive.ClientParameters
+
statsHandler stats.Handler
+ initialWindowSize int32
+
+ bdpEst *bdpEstimator
+ outQuotaVersion uint32
+
mu sync.Mutex // guard the following variables
state transportState // the state of underlying connection
activeStreams map[uint32]*Stream
@@ -108,10 +108,11 @@ type http2Client struct {
maxStreams int
// the per-stream outbound flow control window size set by the peer.
streamSendQuota uint32
- // goAwayID records the Last-Stream-ID in the GoAway frame from the server.
- goAwayID uint32
// prevGoAwayID records the Last-Stream-ID in the previous GoAway frame.
prevGoAwayID uint32
+ // goAwayReason records the http2.ErrCode and debug data received with the
+ // GoAway frame.
+ goAwayReason GoAwayReason
}
func dial(ctx context.Context, fn func(context.Context, string) (net.Conn, error), addr string) (net.Conn, error) {
@@ -157,9 +158,9 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
conn, err := dial(ctx, opts.Dialer, addr.Addr)
if err != nil {
if opts.FailOnNonTempDialError {
- return nil, connectionErrorf(isTemporary(err), err, "transport: %v", err)
+ return nil, connectionErrorf(isTemporary(err), err, "transport: error while dialing: %v", err)
}
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: Error while dialing %v", err)
}
// Any further errors will close the underlying connection
defer func(conn net.Conn) {
@@ -167,7 +168,10 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
conn.Close()
}
}(conn)
- var authInfo credentials.AuthInfo
+ var (
+ isSecure bool
+ authInfo credentials.AuthInfo
+ )
if creds := opts.TransportCredentials; creds != nil {
scheme = "https"
conn, authInfo, err = creds.ClientHandshake(ctx, addr.Addr, conn)
@@ -175,43 +179,72 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
// Credentials handshake errors are typically considered permanent
// to avoid retrying on e.g. bad certificates.
temp := isTemporary(err)
- return nil, connectionErrorf(temp, err, "transport: %v", err)
+ return nil, connectionErrorf(temp, err, "transport: authentication handshake failed: %v", err)
}
+ isSecure = true
}
- ua := primaryUA
- if opts.UserAgent != "" {
- ua = opts.UserAgent + " " + ua
+ kp := opts.KeepaliveParams
+ // Validate keepalive parameters.
+ if kp.Time == 0 {
+ kp.Time = defaultClientKeepaliveTime
+ }
+ if kp.Timeout == 0 {
+ kp.Timeout = defaultClientKeepaliveTimeout
+ }
+ dynamicWindow := true
+ icwz := int32(initialWindowSize)
+ if opts.InitialConnWindowSize >= defaultWindowSize {
+ icwz = opts.InitialConnWindowSize
+ dynamicWindow = false
}
var buf bytes.Buffer
t := &http2Client{
ctx: ctx,
target: addr.Addr,
- userAgent: ua,
+ userAgent: opts.UserAgent,
md: addr.Metadata,
conn: conn,
remoteAddr: conn.RemoteAddr(),
localAddr: conn.LocalAddr(),
authInfo: authInfo,
// The client initiated stream id is odd starting from 1.
- nextID: 1,
- writableChan: make(chan int, 1),
- shutdownChan: make(chan struct{}),
- errorChan: make(chan struct{}),
- goAway: make(chan struct{}),
- framer: newFramer(conn),
- hBuf: &buf,
- hEnc: hpack.NewEncoder(&buf),
- controlBuf: newRecvBuffer(),
- fc: &inFlow{limit: initialConnWindowSize},
- sendQuotaPool: newQuotaPool(defaultWindowSize),
- scheme: scheme,
- state: reachable,
- activeStreams: make(map[uint32]*Stream),
- creds: opts.PerRPCCredentials,
- maxStreams: math.MaxInt32,
- streamSendQuota: defaultWindowSize,
- statsHandler: opts.StatsHandler,
+ nextID: 1,
+ writableChan: make(chan int, 1),
+ shutdownChan: make(chan struct{}),
+ errorChan: make(chan struct{}),
+ goAway: make(chan struct{}),
+ awakenKeepalive: make(chan struct{}, 1),
+ framer: newFramer(conn),
+ hBuf: &buf,
+ hEnc: hpack.NewEncoder(&buf),
+ controlBuf: newControlBuffer(),
+ fc: &inFlow{limit: uint32(icwz)},
+ sendQuotaPool: newQuotaPool(defaultWindowSize),
+ scheme: scheme,
+ state: reachable,
+ activeStreams: make(map[uint32]*Stream),
+ isSecure: isSecure,
+ creds: opts.PerRPCCredentials,
+ maxStreams: defaultMaxStreamsClient,
+ streamsQuota: newQuotaPool(defaultMaxStreamsClient),
+ streamSendQuota: defaultWindowSize,
+ kp: kp,
+ statsHandler: opts.StatsHandler,
+ initialWindowSize: initialWindowSize,
+ }
+ if opts.InitialWindowSize >= defaultWindowSize {
+ t.initialWindowSize = opts.InitialWindowSize
+ dynamicWindow = false
+ }
+ if dynamicWindow {
+ t.bdpEst = &bdpEstimator{
+ bdp: initialWindowSize,
+ updateFlowControl: t.updateFlowControl,
+ }
}
+ // Make sure awakenKeepalive can't be written to initially.
+ // The keepalive routine will make it writable, if need be.
+ t.awakenKeepalive <- struct{}{}
if t.statsHandler != nil {
t.ctx = t.statsHandler.TagConn(t.ctx, &stats.ConnTagInfo{
RemoteAddr: t.remoteAddr,
@@ -230,32 +263,35 @@ func newHTTP2Client(ctx context.Context, addr TargetInfo, opts ConnectOptions) (
n, err := t.conn.Write(clientPreface)
if err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write client preface: %v", err)
}
if n != len(clientPreface) {
t.Close()
return nil, connectionErrorf(true, err, "transport: preface mismatch, wrote %d bytes; want %d", n, len(clientPreface))
}
- if initialWindowSize != defaultWindowSize {
+ if t.initialWindowSize != defaultWindowSize {
err = t.framer.writeSettings(true, http2.Setting{
ID: http2.SettingInitialWindowSize,
- Val: uint32(initialWindowSize),
+ Val: uint32(t.initialWindowSize),
})
} else {
err = t.framer.writeSettings(true)
}
if err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write initial settings frame: %v", err)
}
// Adjust the connection flow control window if needed.
- if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
+ if delta := uint32(icwz - defaultWindowSize); delta > 0 {
if err := t.framer.writeWindowUpdate(true, 0, delta); err != nil {
t.Close()
- return nil, connectionErrorf(true, err, "transport: %v", err)
+ return nil, connectionErrorf(true, err, "transport: failed to write window update: %v", err)
}
}
go t.controller()
+ if t.kp.Time != infinity {
+ go t.keepalive()
+ }
t.writableChan <- 0
return t, nil
}
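[Editor's note: the keepalive parameters that newHTTP2Client validates above arrive via ConnectOptions from the dial site. A minimal sketch of supplying them from application code using the public grpc API; the target address is hypothetical and WithInsecure is an assumption made for brevity:]

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	kp := keepalive.ClientParameters{
		Time:                30 * time.Second, // ping after 30s of inactivity
		Timeout:             10 * time.Second, // wait 10s for the ping ack
		PermitWithoutStream: false,            // let keepalive go dormant when idle
	}
	conn, err := grpc.Dial("example.com:443", // hypothetical target
		grpc.WithKeepaliveParams(kp),
		grpc.WithInsecure(), // assumption: plaintext for brevity
	)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
}
```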
@@ -269,27 +305,33 @@ func (t *http2Client) newStream(ctx context.Context, callHdr *CallHdr) *Stream {
method: callHdr.Method,
sendCompress: callHdr.SendCompress,
buf: newRecvBuffer(),
- fc: &inFlow{limit: initialWindowSize},
+ fc: &inFlow{limit: uint32(t.initialWindowSize)},
sendQuotaPool: newQuotaPool(int(t.streamSendQuota)),
headerChan: make(chan struct{}),
}
t.nextID += 2
- s.windowHandler = func(n int) {
- t.updateWindow(s, uint32(n))
+ s.requestRead = func(n int) {
+ t.adjustWindow(s, uint32(n))
}
// The client side stream context should have exactly the same life cycle with the user provided context.
// That means, s.ctx should be read-only. And s.ctx is done iff ctx is done.
// So we use the original context here instead of creating a copy.
s.ctx = ctx
- s.dec = &recvBufferReader{
- ctx: s.ctx,
- goAway: s.goAway,
- recv: s.buf,
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{
+ ctx: s.ctx,
+ goAway: s.goAway,
+ recv: s.buf,
+ },
+ windowHandler: func(n int) {
+ t.updateWindow(s, uint32(n))
+ },
}
+
return s
}
-// NewStream creates a stream and register it into the transport as "active"
+// NewStream creates a stream and registers it into the transport as "active"
// streams.
func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Stream, err error) {
pr := &peer.Peer{
@@ -299,10 +341,13 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
if t.authInfo != nil {
pr.AuthInfo = t.authInfo
}
- userCtx := ctx
ctx = peer.NewContext(ctx, pr)
- authData := make(map[string]string)
- for _, c := range t.creds {
+ var (
+ authData = make(map[string]string)
+ audience string
+ )
+ // Create an audience string only if needed.
+ if len(t.creds) > 0 || callHdr.Creds != nil {
// Construct URI required to get auth request metadata.
var port string
if pos := strings.LastIndex(t.target, ":"); pos != -1 {
@@ -313,17 +358,39 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
}
pos := strings.LastIndex(callHdr.Method, "/")
if pos == -1 {
- return nil, streamErrorf(codes.InvalidArgument, "transport: malformed method name: %q", callHdr.Method)
+ pos = len(callHdr.Method)
}
- audience := "https://" + callHdr.Host + port + callHdr.Method[:pos]
+ audience = "https://" + callHdr.Host + port + callHdr.Method[:pos]
+ }
+ for _, c := range t.creds {
data, err := c.GetRequestMetadata(ctx, audience)
if err != nil {
- return nil, streamErrorf(codes.InvalidArgument, "transport: %v", err)
+ return nil, streamErrorf(codes.Internal, "transport: %v", err)
}
for k, v := range data {
+ // Capital header names are illegal in HTTP/2.
+ k = strings.ToLower(k)
authData[k] = v
}
}
+ callAuthData := make(map[string]string)
+ // Check if credentials.PerRPCCredentials were provided via call options.
+ // Note: if these credentials are provided both via dial options and call
+ // options, then both sets of credentials will be applied.
+ if callCreds := callHdr.Creds; callCreds != nil {
+ if !t.isSecure && callCreds.RequireTransportSecurity() {
+ return nil, streamErrorf(codes.Unauthenticated, "transport: cannot send secure credentials on an insecure connection")
+ }
+ data, err := callCreds.GetRequestMetadata(ctx, audience)
+ if err != nil {
+ return nil, streamErrorf(codes.Internal, "transport: %v", err)
+ }
+ for k, v := range data {
+ // Capital header names are illegal in HTTP/2
+ k = strings.ToLower(k)
+ callAuthData[k] = v
+ }
+ }
t.mu.Lock()
if t.activeStreams == nil {
t.mu.Unlock()
@@ -337,21 +404,18 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
t.mu.Unlock()
return nil, ErrConnClosing
}
- checkStreamsQuota := t.streamsQuota != nil
t.mu.Unlock()
- if checkStreamsQuota {
- sq, err := wait(ctx, nil, nil, t.shutdownChan, t.streamsQuota.acquire())
- if err != nil {
- return nil, err
- }
- // Returns the quota balance back.
- if sq > 1 {
- t.streamsQuota.add(sq - 1)
- }
+ sq, err := wait(ctx, nil, nil, t.shutdownChan, t.streamsQuota.acquire())
+ if err != nil {
+ return nil, err
+ }
+ // Returns the quota balance back.
+ if sq > 1 {
+ t.streamsQuota.add(sq - 1)
}
if _, err := wait(ctx, nil, nil, t.shutdownChan, t.writableChan); err != nil {
// Return the quota back now because there is no stream returned to the caller.
- if _, ok := err.(StreamError); ok && checkStreamsQuota {
+ if _, ok := err.(StreamError); ok {
t.streamsQuota.add(1)
}
return nil, err
@@ -359,9 +423,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
t.mu.Lock()
if t.state == draining {
t.mu.Unlock()
- if checkStreamsQuota {
- t.streamsQuota.add(1)
- }
+ t.streamsQuota.add(1)
// Need to make t writable again so that the rpc in flight can still proceed.
t.writableChan <- 0
return nil, ErrStreamDrain
@@ -371,19 +433,18 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
return nil, ErrConnClosing
}
s := t.newStream(ctx, callHdr)
- s.clientStatsCtx = userCtx
t.activeStreams[s.id] = s
-
- // This stream is not counted when applySetings(...) initialize t.streamsQuota.
- // Reset t.streamsQuota to the right value.
- var reset bool
- if !checkStreamsQuota && t.streamsQuota != nil {
- reset = true
+ // If the number of active streams change from 0 to 1, then check if keepalive
+ // has gone dormant. If so, wake it up.
+ if len(t.activeStreams) == 1 {
+ select {
+ case t.awakenKeepalive <- struct{}{}:
+ t.framer.writePing(false, false, [8]byte{})
+ default:
+ }
}
+
t.mu.Unlock()
- if reset {
- t.streamsQuota.add(-1)
- }
// HPACK encodes various headers. Note that once WriteField(...) is
// called, the corresponding headers/continuation frame has to be sent
@@ -407,33 +468,32 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
}
for k, v := range authData {
- // Capital header names are illegal in HTTP/2.
- k = strings.ToLower(k)
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: v})
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
+ }
+ for k, v := range callAuthData {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
var (
- hasMD bool
endHeaders bool
)
- if md, ok := metadata.FromContext(ctx); ok {
- hasMD = true
- for k, v := range md {
+ if md, ok := metadata.FromOutgoingContext(ctx); ok {
+ for k, vv := range md {
// HTTP doesn't allow you to set pseudoheaders after non pseudoheaders were set.
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
}
if md, ok := t.md.(*metadata.MD); ok {
- for k, v := range *md {
+ for k, vv := range *md {
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
}
@@ -448,7 +508,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
endHeaders = true
}
var flush bool
- if endHeaders && (hasMD || callHdr.Flush) {
+ if callHdr.Flush && endHeaders {
flush = true
}
if first {
@@ -473,6 +533,10 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
return nil, connectionErrorf(true, err, "transport: %v", err)
}
}
+ s.mu.Lock()
+ s.bytesSent = true
+ s.mu.Unlock()
+
if t.statsHandler != nil {
outHeader := &stats.OutHeader{
Client: true,
@@ -482,7 +546,7 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
LocalAddr: t.localAddr,
Compression: callHdr.SendCompress,
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, outHeader)
+ t.statsHandler.HandleRPC(s.ctx, outHeader)
}
t.writableChan <- 0
return s, nil
@@ -491,14 +555,14 @@ func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Strea
// CloseStream clears the footprint of a stream when the stream is not needed any more.
// This must not be executed in reader's goroutine.
func (t *http2Client) CloseStream(s *Stream, err error) {
- var updateStreams bool
t.mu.Lock()
if t.activeStreams == nil {
t.mu.Unlock()
return
}
- if t.streamsQuota != nil {
- updateStreams = true
+ if err != nil {
+ // notify in-flight streams, before the deletion
+ s.write(recvMsg{err: err})
}
delete(t.activeStreams, s.id)
if t.state == draining && len(t.activeStreams) == 0 {
@@ -508,15 +572,27 @@ func (t *http2Client) CloseStream(s *Stream, err error) {
return
}
t.mu.Unlock()
- if updateStreams {
- t.streamsQuota.add(1)
- }
- s.mu.Lock()
- if q := s.fc.resetPendingData(); q > 0 {
- if n := t.fc.onRead(q); n > 0 {
- t.controlBuf.put(&windowUpdate{0, n})
+ // rstStream is true in case the stream is being closed at the client-side
+ // and the server needs to be intimated about it by sending a RST_STREAM
+ // frame.
+ // To make sure this frame is written to the wire before the headers of the
+ // next stream waiting for streamsQuota, we add to streamsQuota pool only
+ // after having acquired the writableChan to send RST_STREAM out (look at
+ // the controller() routine).
+ var rstStream bool
+ var rstError http2.ErrCode
+ defer func() {
+ // In case the client doesn't have to send RST_STREAM to the server,
+ // we can safely add back to the streamsQuota pool now.
+ if !rstStream {
+ t.streamsQuota.add(1)
+ return
}
- }
+ t.controlBuf.put(&resetStream{s.id, rstError})
+ }()
+ s.mu.Lock()
+ rstStream = s.rstStream
+ rstError = s.rstError
if s.state == streamDone {
s.mu.Unlock()
return
@@ -527,8 +603,9 @@ func (t *http2Client) CloseStream(s *Stream, err error) {
}
s.state = streamDone
s.mu.Unlock()
- if se, ok := err.(StreamError); ok && se.Code != codes.DeadlineExceeded {
- t.controlBuf.put(&resetStream{s.id, http2.ErrCodeCancel})
+ if _, ok := err.(StreamError); ok {
+ rstStream = true
+ rstError = http2.ErrCodeCancel
}
}
@@ -584,24 +661,6 @@ func (t *http2Client) GracefulClose() error {
t.mu.Unlock()
return nil
}
- // Notify the streams which were initiated after the server sent GOAWAY.
- select {
- case <-t.goAway:
- n := t.prevGoAwayID
- if n == 0 && t.nextID > 1 {
- n = t.nextID - 2
- }
- m := t.goAwayID + 2
- if m == 2 {
- m = 1
- }
- for i := m; i <= n; i += 2 {
- if s, ok := t.activeStreams[i]; ok {
- close(s.goAway)
- }
- }
- default:
- }
if t.state == draining {
t.mu.Unlock()
return nil
@@ -621,9 +680,13 @@ func (t *http2Client) GracefulClose() error {
// if it improves the performance.
func (t *http2Client) Write(s *Stream, data []byte, opts *Options) error {
r := bytes.NewBuffer(data)
+ var (
+ p []byte
+ oqv uint32
+ )
for {
- var p []byte
- if r.Len() > 0 {
+ oqv = atomic.LoadUint32(&t.outQuotaVersion)
+ if r.Len() > 0 || p != nil {
size := http2MaxFrameLen
// Wait until the stream has some quota to send the data.
sq, err := wait(s.ctx, s.done, s.goAway, t.shutdownChan, s.sendQuotaPool.acquire())
@@ -641,7 +704,9 @@ func (t *http2Client) Write(s *Stream, data []byte, opts *Options) error {
if tq < size {
size = tq
}
- p = r.Next(size)
+ if p == nil {
+ p = r.Next(size)
+ }
ps := len(p)
if ps < sq {
// Overbooked stream quota. Return it back.
@@ -686,6 +751,18 @@ func (t *http2Client) Write(s *Stream, data []byte, opts *Options) error {
return ContextErr(s.ctx.Err())
default:
}
+ if oqv != atomic.LoadUint32(&t.outQuotaVersion) {
+ // InitialWindowSize settings frame must have been received after we
+ // acquired send quota but before we got the writable channel.
+ // We must forsake this write.
+ t.sendQuotaPool.add(len(p))
+ s.sendQuotaPool.add(len(p))
+ if t.framer.adjustNumWriters(-1) == 0 {
+ t.controlBuf.put(&flushIO{})
+ }
+ t.writableChan <- 0
+ continue
+ }
if r.Len() == 0 && t.framer.adjustNumWriters(0) == 1 {
// Do a force flush iff this is last frame for the entire gRPC message
// and the caller is the only writer at this moment.
@@ -698,6 +775,7 @@ func (t *http2Client) Write(s *Stream, data []byte, opts *Options) error {
t.notifyError(err)
return connectionErrorf(true, err, "transport: %v", err)
}
+ p = nil
if t.framer.adjustNumWriters(-1) == 0 {
t.framer.flushWrite()
}
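[Editor's note: the outQuotaVersion check above snapshots a version counter before acquiring quota and, if a SETTINGS change bumped it in the meantime, returns the quota and retries rather than spend stale quota. A generic sketch of the pattern under illustrative names; the quota-acquisition step is elided:]

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type writer struct {
	quotaVersion uint32 // bumped whenever the peer re-sizes the window
	quota        int32
}

func (w *writer) write(n int32) {
	for {
		oqv := atomic.LoadUint32(&w.quotaVersion)
		// ... acquire n bytes of send quota here ...
		if oqv != atomic.LoadUint32(&w.quotaVersion) {
			// Quota was re-sized underneath us; forsake this write,
			// give the quota back, and start over.
			atomic.AddInt32(&w.quota, n)
			continue
		}
		// Safe to write the frame with the quota we hold.
		return
	}
}

func main() {
	w := &writer{}
	w.write(1024)
	fmt.Println("wrote under version", atomic.LoadUint32(&w.quotaVersion))
}
```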
@@ -724,6 +802,24 @@ func (t *http2Client) getStream(f http2.Frame) (*Stream, bool) {
return s, ok
}
+// adjustWindow sends out extra window update over the initial window size
+// of stream if the application is requesting data larger in size than
+// the window.
+func (t *http2Client) adjustWindow(s *Stream, n uint32) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if s.state == streamDone {
+ return
+ }
+ if w := s.fc.maybeAdjust(n); w > 0 {
+ // Piggyback the connection's window update along.
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+}
+
// updateWindow adjusts the inbound quota for the stream and the transport.
// Window updates will deliver to the controller for sending when
// the cumulative quota exceeds the corresponding threshold.
@@ -733,55 +829,98 @@ func (t *http2Client) updateWindow(s *Stream, n uint32) {
if s.state == streamDone {
return
}
- if w := t.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
if w := s.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{s.id, w})
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
}
}
+// updateFlowControl updates the incoming flow control windows
+// for the transport and the stream based on the current bdp
+// estimation.
+func (t *http2Client) updateFlowControl(n uint32) {
+ t.mu.Lock()
+ for _, s := range t.activeStreams {
+ s.fc.newLimit(n)
+ }
+ t.initialWindowSize = int32(n)
+ t.mu.Unlock()
+ t.controlBuf.put(&windowUpdate{0, t.fc.newLimit(n), false})
+ t.controlBuf.put(&settings{
+ ack: false,
+ ss: []http2.Setting{
+ {
+ ID: http2.SettingInitialWindowSize,
+ Val: uint32(n),
+ },
+ },
+ })
+}
+
func (t *http2Client) handleData(f *http2.DataFrame) {
- size := len(f.Data())
- if err := t.fc.onData(uint32(size)); err != nil {
- t.notifyError(connectionErrorf(true, err, "%v", err))
- return
+ size := f.Header().Length
+ var sendBDPPing bool
+ if t.bdpEst != nil {
+ sendBDPPing = t.bdpEst.add(uint32(size))
+ }
+ // Decouple connection's flow control from application's read.
+ // An update on connection's flow control should not depend on
+ // whether user application has read the data or not. Such a
+ // restriction is already imposed on the stream's flow control,
+ // and therefore the sender will be blocked anyways.
+ // Decoupling the connection flow control will prevent other
+ // active (fast) streams from starving in the presence of slow or
+ // inactive streams.
+ //
+ // Furthermore, if a bdpPing is being sent out we can piggyback
+ // connection's window update for the bytes we just received.
+ if sendBDPPing {
+ t.controlBuf.put(&windowUpdate{0, uint32(size), false})
+ t.controlBuf.put(bdpPing)
+ } else {
+ if err := t.fc.onData(uint32(size)); err != nil {
+ t.notifyError(connectionErrorf(true, err, "%v", err))
+ return
+ }
+ if w := t.fc.onRead(uint32(size)); w > 0 {
+ t.controlBuf.put(&windowUpdate{0, w, true})
+ }
}
// Select the right stream to dispatch.
s, ok := t.getStream(f)
if !ok {
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if size > 0 {
s.mu.Lock()
if s.state == streamDone {
s.mu.Unlock()
- // The stream has been closed. Release the corresponding quota.
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if err := s.fc.onData(uint32(size)); err != nil {
- s.state = streamDone
- s.statusCode = codes.Internal
- s.statusDesc = err.Error()
- close(s.done)
+ s.rstStream = true
+ s.rstError = http2.ErrCodeFlowControl
+ s.finish(status.New(codes.Internal, err.Error()))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
- t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
return
}
+ if f.Header().Flags.Has(http2.FlagDataPadded) {
+ if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 {
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+ }
s.mu.Unlock()
// TODO(bradfitz, zhaoq): A copy is required here because there is no
// guarantee f.Data() is consumed before the arrival of next frame.
// Can this copy be eliminated?
- data := make([]byte, size)
- copy(data, f.Data())
- s.write(recvMsg{data: data})
+ if len(f.Data()) > 0 {
+ data := make([]byte, len(f.Data()))
+ copy(data, f.Data())
+ s.write(recvMsg{data: data})
+ }
}
// The server has closed the stream without sending trailers. Record that
// the read direction is closed, and set the status appropriately.
@@ -791,10 +930,7 @@ func (t *http2Client) handleData(f *http2.DataFrame) {
s.mu.Unlock()
return
}
- s.state = streamDone
- s.statusCode = codes.Internal
- s.statusDesc = "server closed the stream without sending trailers"
- close(s.done)
+ s.finish(status.New(codes.Internal, "server closed the stream without sending trailers"))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
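[Editor's note: the branch at the top of handleData either credits the whole payload back immediately (piggybacked on a BDP ping) or routes it through the normal inFlow accounting, which batches window updates into fewer frames. A simplified sketch with illustrative stand-in types; the real inFlow has more state and a different threshold policy:]

```go
package main

import "fmt"

type inFlow struct {
	limit         uint32 // window advertised to the peer
	pendingData   uint32 // bytes received, not yet consumed
	pendingUpdate uint32 // bytes consumed, not yet re-credited
}

func (f *inFlow) onData(n uint32) error {
	if f.pendingData+f.pendingUpdate+n > f.limit {
		return fmt.Errorf("flow control violation: %d > limit %d",
			f.pendingData+f.pendingUpdate+n, f.limit)
	}
	f.pendingData += n
	return nil
}

// onRead batches updates: a window-update frame is only worth sending
// once a quarter of the window has been consumed.
func (f *inFlow) onRead(n uint32) uint32 {
	f.pendingData -= n
	f.pendingUpdate += n
	if f.pendingUpdate >= f.limit/4 {
		w := f.pendingUpdate
		f.pendingUpdate = 0
		return w
	}
	return 0
}

// handlePayload mirrors the branch above: a BDP sample short-circuits the
// accounting and re-credits the whole payload at once.
func handlePayload(size uint32, bdpSampling bool, conn *inFlow, updates chan<- uint32) error {
	if bdpSampling {
		updates <- size
		return nil
	}
	if err := conn.onData(size); err != nil {
		return err
	}
	if w := conn.onRead(size); w > 0 {
		updates <- w
	}
	return nil
}

func main() {
	updates := make(chan uint32, 4)
	conn := &inFlow{limit: 64 * 1024}
	_ = handlePayload(32*1024, false, conn, updates)
	fmt.Println(<-updates) // 32768: past the limit/4 threshold
}
```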
@@ -810,18 +946,16 @@ func (t *http2Client) handleRSTStream(f *http2.RSTStreamFrame) {
s.mu.Unlock()
return
}
- s.state = streamDone
if !s.headerDone {
close(s.headerChan)
s.headerDone = true
}
- s.statusCode, ok = http2ErrConvTab[http2.ErrCode(f.ErrCode)]
+ statusCode, ok := http2ErrConvTab[http2.ErrCode(f.ErrCode)]
if !ok {
- grpclog.Println("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error ", f.ErrCode)
- s.statusCode = codes.Unknown
+ warningf("transport: http2Client.handleRSTStream found no mapped gRPC status for the received http2 error %v", f.ErrCode)
+ statusCode = codes.Unknown
}
- s.statusDesc = fmt.Sprintf("stream terminated by RST_STREAM with error code: %d", f.ErrCode)
- close(s.done)
+ s.finish(status.Newf(statusCode, "stream terminated by RST_STREAM with error code: %d", f.ErrCode))
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
@@ -840,7 +974,11 @@ func (t *http2Client) handleSettings(f *http2.SettingsFrame) {
}
func (t *http2Client) handlePing(f *http2.PingFrame) {
- if f.IsAck() { // Do nothing.
+ if f.IsAck() {
+ // Maybe it's a BDP ping.
+ if t.bdpEst != nil {
+ t.bdpEst.calculate(f.Data)
+ }
return
}
pingAck := &ping{ack: true}
@@ -850,31 +988,75 @@ func (t *http2Client) handlePing(f *http2.PingFrame) {
func (t *http2Client) handleGoAway(f *http2.GoAwayFrame) {
t.mu.Lock()
- if t.state == reachable || t.state == draining {
- if f.LastStreamID > 0 && f.LastStreamID%2 != 1 {
- t.mu.Unlock()
- t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: stream ID %d is even", f.LastStreamID))
- return
- }
- select {
- case <-t.goAway:
- id := t.goAwayID
- // t.goAway has been closed (i.e.,multiple GoAways).
- if id < f.LastStreamID {
- t.mu.Unlock()
- t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: previously recv GOAWAY frame with LastStramID %d, currently recv %d", id, f.LastStreamID))
- return
- }
- t.prevGoAwayID = id
- t.goAwayID = f.LastStreamID
+ if t.state != reachable && t.state != draining {
+ t.mu.Unlock()
+ return
+ }
+ if f.ErrCode == http2.ErrCodeEnhanceYourCalm {
+ infof("Client received GoAway with http2.ErrCodeEnhanceYourCalm.")
+ }
+ id := f.LastStreamID
+ if id > 0 && id%2 != 1 {
+ t.mu.Unlock()
+ t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: stream ID %d is even", f.LastStreamID))
+ return
+ }
+ // A client can receive multiple GoAways from the server (see https://github.com/grpc/grpc-go/issues/1387).
+ // The idea is that the first GoAway will be sent with an ID of MaxInt32 and the second GoAway will be sent after an RTT delay
+ // with the ID of the last stream the server will process.
+ // Therefore, when we get the first GoAway we don't really close any streams. In case of the second GoAway we
+ // close all streams created after the second GoAway's ID. This way streams that were in-flight while the GoAway from the server
+ // was being sent don't get killed.
+ select {
+ case <-t.goAway: // t.goAway has been closed (i.e., multiple GoAways).
+ // If there are multiple GoAways the first one should always have an ID greater than the following ones.
+ if id > t.prevGoAwayID {
t.mu.Unlock()
+ t.notifyError(connectionErrorf(true, nil, "received illegal http2 GOAWAY frame: previously recv GOAWAY frame with LastStreamID %d, currently recv %d", id, f.LastStreamID))
return
- default:
}
- t.goAwayID = f.LastStreamID
+ default:
+ t.setGoAwayReason(f)
close(t.goAway)
+ t.state = draining
+ }
+ // All streams with IDs greater than the GoAwayId
+ // and smaller than the previous GoAway ID should be killed.
+ upperLimit := t.prevGoAwayID
+ if upperLimit == 0 { // This is the first GoAway Frame.
+ upperLimit = math.MaxUint32 // Kill all streams after the GoAway ID.
}
+ for streamID, stream := range t.activeStreams {
+ if streamID > id && streamID <= upperLimit {
+ close(stream.goAway)
+ }
+ }
+ t.prevGoAwayID = id
+ active := len(t.activeStreams)
t.mu.Unlock()
+ if active == 0 {
+ t.Close()
+ }
+}
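[Editor's note: the culling rule described in the comment above reduces to a simple ID-range check. A self-contained sketch under illustrative names, showing that the first GoAway (LastStreamID = MaxInt32) kills nothing and the second kills only streams the server will not process:]

```go
package main

import (
	"fmt"
	"math"
)

// streamsToKill returns the IDs of active streams with IDs above the new
// LastStreamID but at or below the previous GoAway's ID.
func streamsToKill(active []uint32, goAwayID, prevGoAwayID uint32) []uint32 {
	upper := prevGoAwayID
	if upper == 0 { // first GoAway frame seen on this connection
		upper = math.MaxUint32
	}
	var kill []uint32
	for _, id := range active {
		if id > goAwayID && id <= upper {
			kill = append(kill, id)
		}
	}
	return kill
}

func main() {
	active := []uint32{1, 3, 5, 7}
	// First GoAway: LastStreamID = MaxInt32, nothing qualifies.
	fmt.Println(streamsToKill(active, math.MaxInt32, 0)) // []
	// Second GoAway: server will stop after stream 3.
	fmt.Println(streamsToKill(active, 3, math.MaxInt32)) // [5 7]
}
```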
+
+// setGoAwayReason sets the value of t.goAwayReason based
+// on the GoAway frame received.
+ // It expects a lock on the transport's mutex to be held by
+// the caller.
+func (t *http2Client) setGoAwayReason(f *http2.GoAwayFrame) {
+ t.goAwayReason = NoReason
+ switch f.ErrCode {
+ case http2.ErrCodeEnhanceYourCalm:
+ if string(f.DebugData()) == "too_many_pings" {
+ t.goAwayReason = TooManyPings
+ }
+ }
+}
+
+func (t *http2Client) GetGoAwayReason() GoAwayReason {
+ t.mu.Lock()
+ defer t.mu.Unlock()
+ return t.goAwayReason
}
func (t *http2Client) handleWindowUpdate(f *http2.WindowUpdateFrame) {
@@ -895,18 +1077,18 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
if !ok {
return
}
+ s.mu.Lock()
+ s.bytesReceived = true
+ s.mu.Unlock()
var state decodeState
- for _, hf := range frame.Fields {
- state.processHeaderField(hf)
- }
- if state.err != nil {
+ if err := state.decodeResponseHeader(frame); err != nil {
s.mu.Lock()
if !s.headerDone {
close(s.headerChan)
s.headerDone = true
}
s.mu.Unlock()
- s.write(recvMsg{err: state.err})
+ s.write(recvMsg{err: err})
// Something wrong. Stops reading even when there is remaining.
return
}
@@ -920,13 +1102,13 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
Client: true,
WireLength: int(frame.Header().Length),
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, inHeader)
+ t.statsHandler.HandleRPC(s.ctx, inHeader)
} else {
inTrailer := &stats.InTrailer{
Client: true,
WireLength: int(frame.Header().Length),
}
- t.statsHandler.HandleRPC(s.clientStatsCtx, inTrailer)
+ t.statsHandler.HandleRPC(s.ctx, inTrailer)
}
}
}()
@@ -951,10 +1133,7 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) {
if len(state.mdata) > 0 {
s.trailer = state.mdata
}
- s.statusCode = state.statusCode
- s.statusDesc = state.statusDesc
- close(s.done)
- s.state = streamDone
+ s.finish(state.status())
s.mu.Unlock()
s.write(recvMsg{err: io.EOF})
}
@@ -982,6 +1161,7 @@ func (t *http2Client) reader() {
t.notifyError(err)
return
}
+ atomic.CompareAndSwapUint32(&t.activity, 0, 1)
sf, ok := frame.(*http2.SettingsFrame)
if !ok {
t.notifyError(err)
@@ -992,6 +1172,7 @@ func (t *http2Client) reader() {
// loop to keep reading incoming messages on this transport.
for {
frame, err := t.framer.readFrame()
+ atomic.CompareAndSwapUint32(&t.activity, 0, 1)
if err != nil {
// Abort an active stream if the http2.Framer returns a
// http2.StreamError. This can happen only if the server's response
@@ -1027,7 +1208,7 @@ func (t *http2Client) reader() {
case *http2.WindowUpdateFrame:
t.handleWindowUpdate(frame)
default:
- grpclog.Printf("transport: http2Client.reader got unhandled frame type %v.", frame)
+ errorf("transport: http2Client.reader got unhandled frame type %v.", frame)
}
}
}
@@ -1043,24 +1224,19 @@ func (t *http2Client) applySettings(ss []http2.Setting) {
s.Val = math.MaxInt32
}
t.mu.Lock()
- reset := t.streamsQuota != nil
- if !reset {
- t.streamsQuota = newQuotaPool(int(s.Val) - len(t.activeStreams))
- }
ms := t.maxStreams
t.maxStreams = int(s.Val)
t.mu.Unlock()
- if reset {
- t.streamsQuota.add(int(s.Val) - ms)
- }
+ t.streamsQuota.add(int(s.Val) - ms)
case http2.SettingInitialWindowSize:
t.mu.Lock()
for _, stream := range t.activeStreams {
// Adjust the sending quota for each stream.
- stream.sendQuotaPool.add(int(s.Val - t.streamSendQuota))
+ stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota))
}
t.streamSendQuota = s.Val
t.mu.Unlock()
+ atomic.AddUint32(&t.outQuotaVersion, 1)
}
}
}
@@ -1076,7 +1252,7 @@ func (t *http2Client) controller() {
case <-t.writableChan:
switch i := i.(type) {
case *windowUpdate:
- t.framer.writeWindowUpdate(true, i.streamID, i.increment)
+ t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment)
case *settings:
if i.ack {
t.framer.writeSettingsAck(true)
@@ -1085,13 +1261,22 @@ func (t *http2Client) controller() {
t.framer.writeSettings(true, i.ss...)
}
case *resetStream:
+ // If the server needs to be intimated about stream closing,
+ // then we need to make sure the RST_STREAM frame is written to
+ // the wire before the headers of the next stream waiting on
+ // streamQuota. We ensure this by adding to the streamsQuota pool
+ // only after having acquired the writableChan to send RST_STREAM.
+ t.streamsQuota.add(1)
t.framer.writeRSTStream(true, i.streamID, i.code)
case *flushIO:
t.framer.flushWrite()
case *ping:
+ if !i.ack {
+ t.bdpEst.timesnap(i.data)
+ }
t.framer.writePing(true, i.ack, i.data)
default:
- grpclog.Printf("transport: http2Client.controller got unexpected item type %v\n", i)
+ errorf("transport: http2Client.controller got unexpected item type %v\n", i)
}
t.writableChan <- 0
continue
@@ -1104,6 +1289,61 @@ func (t *http2Client) controller() {
}
}
+// keepalive running in a separate goroutine makes sure the connection is alive by sending pings.
+func (t *http2Client) keepalive() {
+ p := &ping{data: [8]byte{}}
+ timer := time.NewTimer(t.kp.Time)
+ for {
+ select {
+ case <-timer.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ timer.Reset(t.kp.Time)
+ continue
+ }
+ // Check if keepalive should go dormant.
+ t.mu.Lock()
+ if len(t.activeStreams) < 1 && !t.kp.PermitWithoutStream {
+ // Make awakenKeepalive writable.
+ <-t.awakenKeepalive
+ t.mu.Unlock()
+ select {
+ case <-t.awakenKeepalive:
+ // If control gets here, a ping has been sent and we
+ // need to reset the timer with keepalive.Timeout.
+ case <-t.shutdownChan:
+ return
+ }
+ } else {
+ t.mu.Unlock()
+ // Send ping.
+ t.controlBuf.put(p)
+ }
+
+ // By the time control gets here, a ping has been sent one way or the other.
+ timer.Reset(t.kp.Timeout)
+ select {
+ case <-timer.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ timer.Reset(t.kp.Time)
+ continue
+ }
+ t.Close()
+ return
+ case <-t.shutdownChan:
+ if !timer.Stop() {
+ <-timer.C
+ }
+ return
+ }
+ case <-t.shutdownChan:
+ if !timer.Stop() {
+ <-timer.C
+ }
+ return
+ }
+ }
+}
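[Editor's note: both shutdown arms of the keepalive loop above use the same time.Timer idiom: a timer that may already have fired must be drained after a failed Stop, otherwise a later Reset could race with the stale tick sitting in the channel. A minimal sketch:]

```go
package main

import (
	"fmt"
	"time"
)

// stopTimer stops t, draining the stale value if it already fired and
// nothing has read it yet.
func stopTimer(t *time.Timer) {
	if !t.Stop() {
		<-t.C // timer already fired; drain the stale tick
	}
}

func main() {
	t := time.NewTimer(10 * time.Millisecond)
	time.Sleep(20 * time.Millisecond) // let it fire
	stopTimer(t)
	t.Reset(10 * time.Millisecond) // now safe to reuse
	fmt.Println(<-t.C)
}
```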
+
func (t *http2Client) Error() <-chan struct{} {
return t.errorChan
}
@@ -1123,7 +1363,7 @@ func (t *http2Client) notifyError(err error) {
if t.state == reachable {
t.state = unreachable
close(t.errorChan)
- grpclog.Printf("transport: http2Client.notifyError got notified that the client transport was broken %v.", err)
+ infof("transport: http2Client.notifyError got notified that the client transport was broken %v.", err)
}
t.mu.Unlock()
}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_server.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_server.go
index a095dd0e..b6f93e3c 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_server.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http2_server.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -38,19 +23,24 @@ import (
"errors"
"io"
"math"
+ "math/rand"
"net"
"strconv"
"sync"
+ "sync/atomic"
+ "time"
+ "github.com/golang/protobuf/proto"
"golang.org/x/net/context"
"golang.org/x/net/http2"
"golang.org/x/net/http2/hpack"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
- "google.golang.org/grpc/grpclog"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
)
@@ -78,23 +68,54 @@ type http2Server struct {
framer *framer
hBuf *bytes.Buffer // the buffer for HPACK encoding
hEnc *hpack.Encoder // HPACK encoder
-
// The max number of concurrent streams.
maxStreams uint32
// controlBuf delivers all the control related tasks (e.g., window
// updates, reset streams, and various settings) to the controller.
- controlBuf *recvBuffer
+ controlBuf *controlBuffer
fc *inFlow
// sendQuotaPool provides flow control to outbound message.
sendQuotaPool *quotaPool
+ stats stats.Handler
+ // Flag to keep track of reading activity on transport.
+ // 1 is true and 0 is false.
+ activity uint32 // Accessed atomically.
+ // Keepalive and max-age parameters for the server.
+ kp keepalive.ServerParameters
+
+ // Keepalive enforcement policy.
+ kep keepalive.EnforcementPolicy
+ // The time instance last ping was received.
+ lastPingAt time.Time
+ // Number of times the client has violated keepalive ping policy so far.
+ pingStrikes uint8
+ // Flag to signify that the number of ping strikes should be reset to 0.
+ // This is set whenever data or header frames are sent.
+ // 1 means yes.
+ resetPingStrikes uint32 // Accessed atomically.
+ initialWindowSize int32
+ bdpEst *bdpEstimator
- stats stats.Handler
+ outQuotaVersion uint32
- mu sync.Mutex // guard the following
+ mu sync.Mutex // guard the following
+
+ // drainChan is initialized when drain(...) is called the first time,
+ // after which the server writes out the first GoAway (with ID 2^31-1) frame.
+ // Then an independent goroutine will be launched to later send the second GoAway.
+ // During this time we don't want to write another first GoAway (with ID 2^31-1) frame.
+ // Thus a call to drain(...) will be a no-op if drainChan is already initialized, since draining
+ // is already underway.
+ drainChan chan struct{}
state transportState
activeStreams map[uint32]*Stream
// the per-stream outbound flow control window size set by the peer.
streamSendQuota uint32
+ // idle is the time instant when the connection went idle.
+ // This is either the beginning of the connection or when the number of
+ // RPCs goes down to 0.
+ // When the connection is busy, this value is set to 0.
+ idle time.Time
}
// newHTTP2Server constructs a ServerTransport based on HTTP2. ConnectionError is
@@ -102,53 +123,96 @@ type http2Server struct {
func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err error) {
framer := newFramer(conn)
// Send initial settings as connection preface to client.
- var settings []http2.Setting
+ var isettings []http2.Setting
// TODO(zhaoq): Have a better way to signal "no limit" because 0 is
// permitted in the HTTP2 spec.
maxStreams := config.MaxStreams
if maxStreams == 0 {
maxStreams = math.MaxUint32
} else {
- settings = append(settings, http2.Setting{
+ isettings = append(isettings, http2.Setting{
ID: http2.SettingMaxConcurrentStreams,
Val: maxStreams,
})
}
- if initialWindowSize != defaultWindowSize {
- settings = append(settings, http2.Setting{
+ dynamicWindow := true
+ iwz := int32(initialWindowSize)
+ if config.InitialWindowSize >= defaultWindowSize {
+ iwz = config.InitialWindowSize
+ dynamicWindow = false
+ }
+ icwz := int32(initialWindowSize)
+ if config.InitialConnWindowSize >= defaultWindowSize {
+ icwz = config.InitialConnWindowSize
+ dynamicWindow = false
+ }
+ if iwz != defaultWindowSize {
+ isettings = append(isettings, http2.Setting{
ID: http2.SettingInitialWindowSize,
- Val: uint32(initialWindowSize)})
+ Val: uint32(iwz)})
}
- if err := framer.writeSettings(true, settings...); err != nil {
+ if err := framer.writeSettings(true, isettings...); err != nil {
return nil, connectionErrorf(true, err, "transport: %v", err)
}
// Adjust the connection flow control window if needed.
- if delta := uint32(initialConnWindowSize - defaultWindowSize); delta > 0 {
+ if delta := uint32(icwz - defaultWindowSize); delta > 0 {
if err := framer.writeWindowUpdate(true, 0, delta); err != nil {
return nil, connectionErrorf(true, err, "transport: %v", err)
}
}
+ kp := config.KeepaliveParams
+ if kp.MaxConnectionIdle == 0 {
+ kp.MaxConnectionIdle = defaultMaxConnectionIdle
+ }
+ if kp.MaxConnectionAge == 0 {
+ kp.MaxConnectionAge = defaultMaxConnectionAge
+ }
+ // Add a jitter to MaxConnectionAge.
+ kp.MaxConnectionAge += getJitter(kp.MaxConnectionAge)
+ if kp.MaxConnectionAgeGrace == 0 {
+ kp.MaxConnectionAgeGrace = defaultMaxConnectionAgeGrace
+ }
+ if kp.Time == 0 {
+ kp.Time = defaultServerKeepaliveTime
+ }
+ if kp.Timeout == 0 {
+ kp.Timeout = defaultServerKeepaliveTimeout
+ }
+ kep := config.KeepalivePolicy
+ if kep.MinTime == 0 {
+ kep.MinTime = defaultKeepalivePolicyMinTime
+ }
var buf bytes.Buffer
t := &http2Server{
- ctx: context.Background(),
- conn: conn,
- remoteAddr: conn.RemoteAddr(),
- localAddr: conn.LocalAddr(),
- authInfo: config.AuthInfo,
- framer: framer,
- hBuf: &buf,
- hEnc: hpack.NewEncoder(&buf),
- maxStreams: maxStreams,
- inTapHandle: config.InTapHandle,
- controlBuf: newRecvBuffer(),
- fc: &inFlow{limit: initialConnWindowSize},
- sendQuotaPool: newQuotaPool(defaultWindowSize),
- state: reachable,
- writableChan: make(chan int, 1),
- shutdownChan: make(chan struct{}),
- activeStreams: make(map[uint32]*Stream),
- streamSendQuota: defaultWindowSize,
- stats: config.StatsHandler,
+ ctx: context.Background(),
+ conn: conn,
+ remoteAddr: conn.RemoteAddr(),
+ localAddr: conn.LocalAddr(),
+ authInfo: config.AuthInfo,
+ framer: framer,
+ hBuf: &buf,
+ hEnc: hpack.NewEncoder(&buf),
+ maxStreams: maxStreams,
+ inTapHandle: config.InTapHandle,
+ controlBuf: newControlBuffer(),
+ fc: &inFlow{limit: uint32(icwz)},
+ sendQuotaPool: newQuotaPool(defaultWindowSize),
+ state: reachable,
+ writableChan: make(chan int, 1),
+ shutdownChan: make(chan struct{}),
+ activeStreams: make(map[uint32]*Stream),
+ streamSendQuota: defaultWindowSize,
+ stats: config.StatsHandler,
+ kp: kp,
+ idle: time.Now(),
+ kep: kep,
+ initialWindowSize: iwz,
+ }
+ if dynamicWindow {
+ t.bdpEst = &bdpEstimator{
+ bdp: initialWindowSize,
+ updateFlowControl: t.updateFlowControl,
+ }
}
if t.stats != nil {
t.ctx = t.stats.TagConn(t.ctx, &stats.ConnTagInfo{
@@ -159,6 +223,7 @@ func newHTTP2Server(conn net.Conn, config *ServerConfig) (_ ServerTransport, err
t.stats.HandleConn(t.ctx, connBegin)
}
go t.controller()
+ go t.keepalive()
t.writableChan <- 0
return t, nil
}
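[Editor's note: the server-side keepalive and enforcement defaults that newHTTP2Server fills in above come from ServerConfig, which is populated from server options. A sketch of supplying them via the public grpc API; all durations are illustrative:]

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	srv := grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionIdle:     5 * time.Minute,
			MaxConnectionAge:      30 * time.Minute, // jitter is added internally
			MaxConnectionAgeGrace: 1 * time.Minute,
			Time:                  2 * time.Hour,
			Timeout:               20 * time.Second,
		}),
		grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
			MinTime:             5 * time.Minute, // pings more frequent than this count as strikes
			PermitWithoutStream: false,
		}),
	)
	_ = srv // register services and call srv.Serve(lis) as usual
}
```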
@@ -170,18 +235,17 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
id: frame.Header().StreamID,
st: t,
buf: buf,
- fc: &inFlow{limit: initialWindowSize},
+ fc: &inFlow{limit: uint32(t.initialWindowSize)},
}
var state decodeState
for _, hf := range frame.Fields {
- state.processHeaderField(hf)
- }
- if err := state.err; err != nil {
- if se, ok := err.(StreamError); ok {
- t.controlBuf.put(&resetStream{s.id, statusCodeConvTab[se.Code]})
+ if err := state.processHeaderField(hf); err != nil {
+ if se, ok := err.(StreamError); ok {
+ t.controlBuf.put(&resetStream{s.id, statusCodeConvTab[se.Code]})
+ }
+ return
}
- return
}
if frame.StreamEnded() {
@@ -208,12 +272,16 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
s.ctx = newContextWithStream(s.ctx, s)
// Attach the received metadata to the context.
if len(state.mdata) > 0 {
- s.ctx = metadata.NewContext(s.ctx, state.mdata)
+ s.ctx = metadata.NewIncomingContext(s.ctx, state.mdata)
}
-
- s.dec = &recvBufferReader{
- ctx: s.ctx,
- recv: s.buf,
+ s.trReader = &transportReader{
+ reader: &recvBufferReader{
+ ctx: s.ctx,
+ recv: s.buf,
+ },
+ windowHandler: func(n int) {
+ t.updateWindow(s, uint32(n))
+ },
}
s.recvCompress = state.encoding
s.method = state.method
@@ -224,7 +292,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
}
s.ctx, err = t.inTapHandle(s.ctx, info)
if err != nil {
- // TODO: Log the real error.
+ warningf("transport: http2Server.operateHeaders got an error from InTapHandle: %v", err)
t.controlBuf.put(&resetStream{s.id, http2.ErrCodeRefusedStream})
return
}
@@ -242,15 +310,18 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(
if s.id%2 != 1 || s.id <= t.maxStreamID {
t.mu.Unlock()
// illegal gRPC stream id.
- grpclog.Println("transport: http2Server.HandleStreams received an illegal stream id: ", s.id)
+ errorf("transport: http2Server.HandleStreams received an illegal stream id: %v", s.id)
return true
}
t.maxStreamID = s.id
s.sendQuotaPool = newQuotaPool(int(t.streamSendQuota))
t.activeStreams[s.id] = s
+ if len(t.activeStreams) == 1 {
+ t.idle = time.Time{}
+ }
t.mu.Unlock()
- s.windowHandler = func(n int) {
- t.updateWindow(s, uint32(n))
+ s.requestRead = func(n int) {
+ t.adjustWindow(s, uint32(n))
}
s.ctx = traceCtx(s.ctx, s.method)
if t.stats != nil {
@@ -275,12 +346,15 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
// Check the validity of client preface.
preface := make([]byte, len(clientPreface))
if _, err := io.ReadFull(t.conn, preface); err != nil {
- grpclog.Printf("transport: http2Server.HandleStreams failed to receive the preface from client: %v", err)
+ // Only log if it isn't a simple TCP accept check (i.e., a TCP balancer doing open/close socket).
+ if err != io.EOF {
+ errorf("transport: http2Server.HandleStreams failed to receive the preface from client: %v", err)
+ }
t.Close()
return
}
if !bytes.Equal(preface, clientPreface) {
- grpclog.Printf("transport: http2Server.HandleStreams received bogus greeting from client: %q", preface)
+ errorf("transport: http2Server.HandleStreams received bogus greeting from client: %q", preface)
t.Close()
return
}
@@ -291,13 +365,14 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
return
}
if err != nil {
- grpclog.Printf("transport: http2Server.HandleStreams failed to read frame: %v", err)
+ errorf("transport: http2Server.HandleStreams failed to read initial settings frame: %v", err)
t.Close()
return
}
+ atomic.StoreUint32(&t.activity, 1)
sf, ok := frame.(*http2.SettingsFrame)
if !ok {
- grpclog.Printf("transport: http2Server.HandleStreams saw invalid preface type %T from client", frame)
+ errorf("transport: http2Server.HandleStreams saw invalid preface type %T from client", frame)
t.Close()
return
}
@@ -305,6 +380,7 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
for {
frame, err := t.framer.readFrame()
+ atomic.StoreUint32(&t.activity, 1)
if err != nil {
if se, ok := err.(http2.StreamError); ok {
t.mu.Lock()
@@ -320,7 +396,7 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
t.Close()
return
}
- grpclog.Printf("transport: http2Server.HandleStreams failed to read frame: %v", err)
+ warningf("transport: http2Server.HandleStreams failed to read frame: %v", err)
t.Close()
return
}
@@ -343,7 +419,7 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.
case *http2.GoAwayFrame:
// TODO: Handle GoAway from the client appropriately.
default:
- grpclog.Printf("transport: http2Server.HandleStreams found unhandled frame type %v.", frame)
+ errorf("transport: http2Server.HandleStreams found unhandled frame type %v.", frame)
}
}
}
@@ -363,6 +439,23 @@ func (t *http2Server) getStream(f http2.Frame) (*Stream, bool) {
return s, true
}
+// adjustWindow sends out extra window update over the initial window size
+// of stream if the application is requesting data larger in size than
+// the window.
+func (t *http2Server) adjustWindow(s *Stream, n uint32) {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ if s.state == streamDone {
+ return
+ }
+ if w := s.fc.maybeAdjust(n); w > 0 {
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+}
+
// updateWindow adjusts the inbound quota for the stream and the transport.
// Window updates will deliver to the controller for sending when
// the cumulative quota exceeds the corresponding threshold.
@@ -372,37 +465,76 @@ func (t *http2Server) updateWindow(s *Stream, n uint32) {
if s.state == streamDone {
return
}
- if w := t.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
if w := s.fc.onRead(n); w > 0 {
- t.controlBuf.put(&windowUpdate{s.id, w})
+ if cw := t.fc.resetPendingUpdate(); cw > 0 {
+ t.controlBuf.put(&windowUpdate{0, cw, false})
+ }
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
}
}
+// updateFlowControl updates the incoming flow control windows
+// for the transport and the stream based on the current bdp
+// estimation.
+func (t *http2Server) updateFlowControl(n uint32) {
+ t.mu.Lock()
+ for _, s := range t.activeStreams {
+ s.fc.newLimit(n)
+ }
+ t.initialWindowSize = int32(n)
+ t.mu.Unlock()
+ t.controlBuf.put(&windowUpdate{0, t.fc.newLimit(n), false})
+ t.controlBuf.put(&settings{
+ ack: false,
+ ss: []http2.Setting{
+ {
+ ID: http2.SettingInitialWindowSize,
+ Val: uint32(n),
+ },
+ },
+ })
+
+}
+
func (t *http2Server) handleData(f *http2.DataFrame) {
- size := len(f.Data())
- if err := t.fc.onData(uint32(size)); err != nil {
- grpclog.Printf("transport: http2Server %v", err)
- t.Close()
- return
+ size := f.Header().Length
+ var sendBDPPing bool
+ if t.bdpEst != nil {
+ sendBDPPing = t.bdpEst.add(uint32(size))
+ }
+ // Decouple the connection's flow control from the application's read.
+ // An update on the connection's flow control should not depend on
+ // whether the user application has read the data or not. Such a
+ // restriction is already imposed on the stream's flow control,
+ // and therefore the sender will be blocked anyway.
+ // Decoupling the connection flow control will prevent other
+ // active (fast) streams from starving in the presence of slow or
+ // inactive streams.
+ //
+ // Furthermore, if a bdpPing is being sent out, we can piggyback the
+ // connection's window update for the bytes we just received.
+ if sendBDPPing {
+ t.controlBuf.put(&windowUpdate{0, uint32(size), false})
+ t.controlBuf.put(bdpPing)
+ } else {
+ if err := t.fc.onData(uint32(size)); err != nil {
+ errorf("transport: http2Server %v", err)
+ t.Close()
+ return
+ }
+ if w := t.fc.onRead(uint32(size)); w > 0 {
+ t.controlBuf.put(&windowUpdate{0, w, true})
+ }
}
// Select the right stream to dispatch.
s, ok := t.getStream(f)
if !ok {
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if size > 0 {
s.mu.Lock()
if s.state == streamDone {
s.mu.Unlock()
- // The stream has been closed. Release the corresponding quota.
- if w := t.fc.onRead(uint32(size)); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
return
}
if err := s.fc.onData(uint32(size)); err != nil {
@@ -411,13 +543,20 @@ func (t *http2Server) handleData(f *http2.DataFrame) {
t.controlBuf.put(&resetStream{s.id, http2.ErrCodeFlowControl})
return
}
+ if f.Header().Flags.Has(http2.FlagDataPadded) {
+ if w := s.fc.onRead(uint32(size) - uint32(len(f.Data()))); w > 0 {
+ t.controlBuf.put(&windowUpdate{s.id, w, true})
+ }
+ }
s.mu.Unlock()
// TODO(bradfitz, zhaoq): A copy is required here because there is no
// guarantee f.Data() is consumed before the arrival of next frame.
// Can this copy be eliminated?
- data := make([]byte, size)
- copy(data, f.Data())
- s.write(recvMsg{data: data})
+ if len(f.Data()) > 0 {
+ data := make([]byte, len(f.Data()))
+ copy(data, f.Data())
+ s.write(recvMsg{data: data})
+ }
}
if f.Header().Flags.Has(http2.FlagDataEndStream) {
// Received the end of stream from the client.
@@ -451,13 +590,58 @@ func (t *http2Server) handleSettings(f *http2.SettingsFrame) {
t.controlBuf.put(&settings{ack: true, ss: ss})
}
+const (
+ maxPingStrikes = 2
+ defaultPingTimeout = 2 * time.Hour
+)
+
func (t *http2Server) handlePing(f *http2.PingFrame) {
- if f.IsAck() { // Do nothing.
+ if f.IsAck() {
+ if f.Data == goAwayPing.data && t.drainChan != nil {
+ close(t.drainChan)
+ return
+ }
+ // Maybe it's a BDP ping.
+ if t.bdpEst != nil {
+ t.bdpEst.calculate(f.Data)
+ }
return
}
pingAck := &ping{ack: true}
copy(pingAck.data[:], f.Data[:])
t.controlBuf.put(pingAck)
+
+ now := time.Now()
+ defer func() {
+ t.lastPingAt = now
+ }()
+ // A reset of ping strikes means that we don't need to check for a
+ // policy violation for this ping, and the pingStrikes counter should
+ // be set to 0.
+ if atomic.CompareAndSwapUint32(&t.resetPingStrikes, 1, 0) {
+ t.pingStrikes = 0
+ return
+ }
+ t.mu.Lock()
+ ns := len(t.activeStreams)
+ t.mu.Unlock()
+ if ns < 1 && !t.kep.PermitWithoutStream {
+ // Keepalive shouldn't be active; thus, this new ping should
+ // have come after at least defaultPingTimeout.
+ if t.lastPingAt.Add(defaultPingTimeout).After(now) {
+ t.pingStrikes++
+ }
+ } else {
+ // Check if keepalive policy is respected.
+ if t.lastPingAt.Add(t.kep.MinTime).After(now) {
+ t.pingStrikes++
+ }
+ }
+
+ if t.pingStrikes > maxPingStrikes {
+ // Send goaway and close the connection.
+ t.controlBuf.put(&goAway{code: http2.ErrCodeEnhanceYourCalm, debugData: []byte("too_many_pings"), closeConn: true})
+ }
}
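
The ping-strike bookkeeping in handlePing distills to a few lines. This sketch uses invented names (pingPolice, onPing), hard-codes maxPingStrikes = 2, and omits the stream-count and resetPingStrikes branches; it is not the transport's internal state:

    package main

    import (
        "fmt"
        "time"
    )

    type pingPolice struct {
        minTime     time.Duration // stand-in for keepalive.EnforcementPolicy.MinTime
        lastPingAt  time.Time
        pingStrikes int
    }

    // onPing reports whether the connection should be closed with
    // ErrCodeEnhanceYourCalm and the "too_many_pings" debug data.
    func (p *pingPolice) onPing(now time.Time) bool {
        defer func() { p.lastPingAt = now }()
        if p.lastPingAt.Add(p.minTime).After(now) {
            p.pingStrikes++
        }
        return p.pingStrikes > 2 // maxPingStrikes
    }

    func main() {
        p := &pingPolice{minTime: 5 * time.Minute}
        start := time.Now()
        for i := 0; i < 4; i++ {
            fmt.Println(p.onPing(start.Add(time.Duration(i) * time.Second)))
        }
        // Prints: false false false true
    }
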
func (t *http2Server) handleWindowUpdate(f *http2.WindowUpdateFrame) {
@@ -476,6 +660,13 @@ func (t *http2Server) writeHeaders(s *Stream, b *bytes.Buffer, endStream bool) e
first := true
endHeaders := false
var err error
+ defer func() {
+ if err == nil {
+ // Reset ping strikes when sending headers since that might cause the
+ // peer to send a ping.
+ atomic.StoreUint32(&t.resetPingStrikes, 1)
+ }
+ }()
// Sends the headers in a single batch.
for !endHeaders {
size := t.hBuf.Len()
@@ -530,13 +721,13 @@ func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error {
if s.sendCompress != "" {
t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-encoding", Value: s.sendCompress})
}
- for k, v := range md {
+ for k, vv := range md {
if isReservedHeader(k) {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
bufLen := t.hBuf.Len()
@@ -557,7 +748,7 @@ func (t *http2Server) WriteHeader(s *Stream, md metadata.MD) error {
// There is no further I/O operations being able to perform on this stream.
// TODO(zhaoq): Now it indicates the end of entire stream. Revisit if early
// OK is adopted.
-func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error {
+func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error {
var headersSent, hasHeader bool
s.mu.Lock()
if s.state == streamDone {
@@ -588,17 +779,28 @@ func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc s
t.hEnc.WriteField(
hpack.HeaderField{
Name: "grpc-status",
- Value: strconv.Itoa(int(statusCode)),
+ Value: strconv.Itoa(int(st.Code())),
})
- t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(statusDesc)})
+ t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())})
+
+ if p := st.Proto(); p != nil && len(p.Details) > 0 {
+ stBytes, err := proto.Marshal(p)
+ if err != nil {
+ // TODO: return error instead, when callers are able to handle it.
+ panic(err)
+ }
+
+ t.hEnc.WriteField(hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)})
+ }
+
// Attach the trailer metadata.
- for k, v := range s.trailer {
+ for k, vv := range s.trailer {
// Clients don't tolerate reading restricted headers after some non restricted ones were sent.
if isReservedHeader(k) {
continue
}
- for _, entry := range v {
- t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: entry})
+ for _, v := range vv {
+ t.hEnc.WriteField(hpack.HeaderField{Name: k, Value: encodeMetadataHeader(k, v)})
}
}
bufLen := t.hBuf.Len()
@@ -619,7 +821,7 @@ func (t *http2Server) WriteStatus(s *Stream, statusCode codes.Code, statusDesc s
// Write converts the data into an HTTP2 data frame and sends it out. A non-nil
// error is returned if it fails (e.g., framing error, transport error).
-func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
+func (t *http2Server) Write(s *Stream, data []byte, opts *Options) (err error) {
// TODO(zhaoq): Support multi-writers for a single stream.
var writeHeaderFrame bool
s.mu.Lock()
@@ -635,10 +837,15 @@ func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
t.WriteHeader(s, nil)
}
r := bytes.NewBuffer(data)
+ var (
+ p []byte
+ oqv uint32
+ )
for {
- if r.Len() == 0 {
+ if r.Len() == 0 && p == nil {
return nil
}
+ oqv = atomic.LoadUint32(&t.outQuotaVersion)
size := http2MaxFrameLen
// Wait until the stream has some quota to send the data.
sq, err := wait(s.ctx, nil, nil, t.shutdownChan, s.sendQuotaPool.acquire())
@@ -656,7 +863,9 @@ func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
if tq < size {
size = tq
}
- p := r.Next(size)
+ if p == nil {
+ p = r.Next(size)
+ }
ps := len(p)
if ps < sq {
// Overbooked stream quota. Return it back.
@@ -693,14 +902,30 @@ func (t *http2Server) Write(s *Stream, data []byte, opts *Options) error {
return ContextErr(s.ctx.Err())
default:
}
+ if oqv != atomic.LoadUint32(&t.outQuotaVersion) {
+ // InitialWindowSize settings frame must have been received after we
+ // acquired send quota but before we got the writable channel.
+ // We must forsake this write.
+ t.sendQuotaPool.add(ps)
+ s.sendQuotaPool.add(ps)
+ if t.framer.adjustNumWriters(-1) == 0 {
+ t.controlBuf.put(&flushIO{})
+ }
+ t.writableChan <- 0
+ continue
+ }
var forceFlush bool
if r.Len() == 0 && t.framer.adjustNumWriters(0) == 1 && !opts.Last {
forceFlush = true
}
+ // Reset ping strikes when sending data since this might cause
+ // the peer to send a ping.
+ atomic.StoreUint32(&t.resetPingStrikes, 1)
if err := t.framer.writeData(forceFlush, s.id, false, p); err != nil {
t.Close()
return connectionErrorf(true, err, "transport: %v", err)
}
+ p = nil
if t.framer.adjustNumWriters(-1) == 0 {
t.framer.flushWrite()
}
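
The oqv recheck above is a small optimistic-concurrency loop: snapshot a version counter, perform the slow quota acquisition, then retry if the counter moved in the meantime. A stripped-down sketch with the quota bookkeeping elided:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var outQuotaVersion uint32 // bumped when SETTINGS changes the window

    func bumpVersion() { atomic.AddUint32(&outQuotaVersion, 1) }

    // writeChunk retries until the chunk is sent under a stable version.
    func writeChunk(send func() error) error {
        for {
            v := atomic.LoadUint32(&outQuotaVersion)
            // ... acquire stream and transport quota here ...
            if v != atomic.LoadUint32(&outQuotaVersion) {
                // A SETTINGS frame changed the quota accounting while we
                // were acquiring; return the quota (elided) and retry.
                continue
            }
            return send()
        }
    }

    func main() {
        err := writeChunk(func() error { fmt.Println("sent"); return nil })
        fmt.Println(err)
    }
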
@@ -715,14 +940,97 @@ func (t *http2Server) applySettings(ss []http2.Setting) {
t.mu.Lock()
defer t.mu.Unlock()
for _, stream := range t.activeStreams {
- stream.sendQuotaPool.add(int(s.Val - t.streamSendQuota))
+ stream.sendQuotaPool.add(int(s.Val) - int(t.streamSendQuota))
}
t.streamSendQuota = s.Val
+ atomic.AddUint32(&t.outQuotaVersion, 1)
}
}
}
+// keepalive running in a separate goroutine does the following:
+// 1. Gracefully closes an idle connection after a duration of keepalive.MaxConnectionIdle.
+// 2. Gracefully closes any connection after a duration of keepalive.MaxConnectionAge.
+// 3. Forcibly closes a connection after an additive period of keepalive.MaxConnectionAgeGrace over keepalive.MaxConnectionAge.
+// 4. Makes sure a connection is alive by sending pings with a frequency of keepalive.Time and closes a non-responsive connection
+// after an additional duration of keepalive.Timeout.
+func (t *http2Server) keepalive() {
+ p := &ping{}
+ var pingSent bool
+ maxIdle := time.NewTimer(t.kp.MaxConnectionIdle)
+ maxAge := time.NewTimer(t.kp.MaxConnectionAge)
+ keepalive := time.NewTimer(t.kp.Time)
+ // NOTE: All exit paths of this function should reset their
+ // respective timers. A failure to do so will cause the
+ // following clean-up to deadlock and eventually leak.
+ defer func() {
+ if !maxIdle.Stop() {
+ <-maxIdle.C
+ }
+ if !maxAge.Stop() {
+ <-maxAge.C
+ }
+ if !keepalive.Stop() {
+ <-keepalive.C
+ }
+ }()
+ for {
+ select {
+ case <-maxIdle.C:
+ t.mu.Lock()
+ idle := t.idle
+ if idle.IsZero() { // The connection is non-idle.
+ t.mu.Unlock()
+ maxIdle.Reset(t.kp.MaxConnectionIdle)
+ continue
+ }
+ val := t.kp.MaxConnectionIdle - time.Since(idle)
+ t.mu.Unlock()
+ if val <= 0 {
+ // The connection has been idle for a duration of keepalive.MaxConnectionIdle or more.
+ // Gracefully close the connection.
+ t.drain(http2.ErrCodeNo, []byte{})
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ maxIdle.Reset(infinity)
+ return
+ }
+ maxIdle.Reset(val)
+ case <-maxAge.C:
+ t.drain(http2.ErrCodeNo, []byte{})
+ maxAge.Reset(t.kp.MaxConnectionAgeGrace)
+ select {
+ case <-maxAge.C:
+ // Close the connection after grace period.
+ t.Close()
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ maxAge.Reset(infinity)
+ case <-t.shutdownChan:
+ }
+ return
+ case <-keepalive.C:
+ if atomic.CompareAndSwapUint32(&t.activity, 1, 0) {
+ pingSent = false
+ keepalive.Reset(t.kp.Time)
+ continue
+ }
+ if pingSent {
+ t.Close()
+ // Resetting the timer so that the clean-up doesn't deadlock.
+ keepalive.Reset(infinity)
+ return
+ }
+ pingSent = true
+ t.controlBuf.put(p)
+ keepalive.Reset(t.kp.Timeout)
+ case <-t.shutdownChan:
+ return
+ }
+ }
+}
+
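
The NOTE above leans on Go's timer idiom: Stop reports false when the timer has already fired, in which case the unread value must be drained before cleanup. A minimal sketch of that pattern, safe only because every receive from the channel is followed by a Reset:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.NewTimer(10 * time.Millisecond)
        defer func() {
            if !t.Stop() {
                <-t.C // drain the fired-but-unread value
            }
        }()
        <-t.C
        fmt.Println("fired")
        t.Reset(time.Hour) // re-arm so the deferred drain cannot block
    }
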
+var goAwayPing = &ping{data: [8]byte{1, 6, 1, 8, 0, 3, 3, 9}}
+
// controller running in a separate goroutine takes charge of sending control
// frames (e.g., window update, reset stream, setting, etc.) to the server.
func (t *http2Server) controller() {
@@ -734,7 +1042,7 @@ func (t *http2Server) controller() {
case <-t.writableChan:
switch i := i.(type) {
case *windowUpdate:
- t.framer.writeWindowUpdate(true, i.streamID, i.increment)
+ t.framer.writeWindowUpdate(i.flush, i.streamID, i.increment)
case *settings:
if i.ack {
t.framer.writeSettingsAck(true)
@@ -752,15 +1060,47 @@ func (t *http2Server) controller() {
return
}
sid := t.maxStreamID
- t.state = draining
+ if !i.headsUp {
+ // Stop accepting more streams now.
+ t.state = draining
+ t.mu.Unlock()
+ t.framer.writeGoAway(true, sid, i.code, i.debugData)
+ if i.closeConn {
+ // Abruptly close the connection following the GoAway.
+ t.Close()
+ }
+ t.writableChan <- 0
+ continue
+ }
t.mu.Unlock()
- t.framer.writeGoAway(true, sid, http2.ErrCodeNo, nil)
+ // For a graceful close, send out a GoAway with a stream ID of MaxUint32.
+ // Follow that with a ping and wait for the ack to come back or a timer
+ // to expire. During this time, accept new streams since they might have
+ // originated before the GoAway reaches the client.
+ // After getting the ack or the timer expiration, send out another GoAway,
+ // this time with the ID of the max stream the server intends to process.
+ t.framer.writeGoAway(true, math.MaxUint32, http2.ErrCodeNo, []byte{})
+ t.framer.writePing(true, false, goAwayPing.data)
+ go func() {
+ timer := time.NewTimer(time.Minute)
+ defer timer.Stop()
+ select {
+ case <-t.drainChan:
+ case <-timer.C:
+ case <-t.shutdownChan:
+ return
+ }
+ t.controlBuf.put(&goAway{code: i.code, debugData: i.debugData})
+ }()
case *flushIO:
t.framer.flushWrite()
case *ping:
+ if !i.ack {
+ t.bdpEst.timesnap(i.data)
+ }
t.framer.writePing(true, i.ack, i.data)
default:
- grpclog.Printf("transport: http2Server.controller got unexpected item type %v\n", i)
+ errorf("transport: http2Server.controller got unexpected item type %v\n", i)
}
t.writableChan <- 0
continue
@@ -804,6 +1144,9 @@ func (t *http2Server) Close() (err error) {
func (t *http2Server) closeStream(s *Stream) {
t.mu.Lock()
delete(t.activeStreams, s.id)
+ if len(t.activeStreams) == 0 {
+ t.idle = time.Now()
+ }
if t.state == draining && len(t.activeStreams) == 0 {
defer t.Close()
}
@@ -813,11 +1156,6 @@ func (t *http2Server) closeStream(s *Stream) {
// called to interrupt the potential blocking on other goroutines.
s.cancel()
s.mu.Lock()
- if q := s.fc.resetPendingData(); q > 0 {
- if w := t.fc.onRead(q); w > 0 {
- t.controlBuf.put(&windowUpdate{0, w})
- }
- }
if s.state == streamDone {
s.mu.Unlock()
return
@@ -831,5 +1169,27 @@ func (t *http2Server) RemoteAddr() net.Addr {
}
func (t *http2Server) Drain() {
- t.controlBuf.put(&goAway{})
+ t.drain(http2.ErrCodeNo, []byte{})
+}
+
+func (t *http2Server) drain(code http2.ErrCode, debugData []byte) {
+ t.mu.Lock()
+ defer t.mu.Unlock()
+ if t.drainChan != nil {
+ return
+ }
+ t.drainChan = make(chan struct{})
+ t.controlBuf.put(&goAway{code: code, debugData: debugData, headsUp: true})
+}
+
+var rgen = rand.New(rand.NewSource(time.Now().UnixNano()))
+
+func getJitter(v time.Duration) time.Duration {
+ if v == infinity {
+ return 0
+ }
+ // Generate a jitter between +/- 10% of the value.
+ r := int64(v / 10)
+ j := rgen.Int63n(2*r) - r
+ return time.Duration(j)
}
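
A hypothetical use of getJitter: spreading keepalive timers so that many connections opened together don't all ping at once. This mirrors the function above (a value in roughly +/- 10% of v) using the standard library's math/rand, matching the rgen source:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // jitter mirrors getJitter above: a value in [-v/10, v/10).
    func jitter(v time.Duration) time.Duration {
        r := int64(v / 10)
        return time.Duration(rand.Int63n(2*r) - r)
    }

    func main() {
        base := 2 * time.Hour
        fmt.Println("next keepalive in", base+jitter(base))
    }
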
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http_util.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http_util.go
index a3c68d4c..685c6fbf 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http_util.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/http_util.go
@@ -1,33 +1,18 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
@@ -36,24 +21,25 @@ package transport
import (
"bufio"
"bytes"
+ "encoding/base64"
"fmt"
"io"
"net"
+ "net/http"
"strconv"
"strings"
"sync/atomic"
"time"
+ "github.com/golang/protobuf/proto"
"golang.org/x/net/http2"
"golang.org/x/net/http2/hpack"
+ spb "google.golang.org/genproto/googleapis/rpc/status"
"google.golang.org/grpc/codes"
- "google.golang.org/grpc/grpclog"
- "google.golang.org/grpc/metadata"
+ "google.golang.org/grpc/status"
)
const (
- // The primary user agent
- primaryUA = "grpc-go/1.0"
// http2MaxFrameLen specifies the max length of a HTTP2 frame.
http2MaxFrameLen = 16384 // 16KB frame
// http://http2.github.io/http2-spec/#SettingValues
@@ -87,18 +73,39 @@ var (
codes.ResourceExhausted: http2.ErrCodeEnhanceYourCalm,
codes.PermissionDenied: http2.ErrCodeInadequateSecurity,
}
+ httpStatusConvTab = map[int]codes.Code{
+ // 400 Bad Request - INTERNAL.
+ http.StatusBadRequest: codes.Internal,
+ // 401 Unauthorized - UNAUTHENTICATED.
+ http.StatusUnauthorized: codes.Unauthenticated,
+ // 403 Forbidden - PERMISSION_DENIED.
+ http.StatusForbidden: codes.PermissionDenied,
+ // 404 Not Found - UNIMPLEMENTED.
+ http.StatusNotFound: codes.Unimplemented,
+ // 429 Too Many Requests - UNAVAILABLE.
+ http.StatusTooManyRequests: codes.Unavailable,
+ // 502 Bad Gateway - UNAVAILABLE.
+ http.StatusBadGateway: codes.Unavailable,
+ // 503 Service Unavailable - UNAVAILABLE.
+ http.StatusServiceUnavailable: codes.Unavailable,
+ // 504 Gateway timeout - UNAVAILABLE.
+ http.StatusGatewayTimeout: codes.Unavailable,
+ }
)
// Records the states during HPACK decoding. Must be reset once the
// decoding of the entire headers is finished.
type decodeState struct {
- err error // first error encountered decoding
-
encoding string
- // statusCode caches the stream status received from the trailer
- // the server sent. Client side only.
- statusCode codes.Code
- statusDesc string
+ // statusGen caches the stream status received from the trailer the server
+ // sent. Client side only. Do not access directly. After all trailers are
+ // parsed, use the status method to retrieve the status.
+ statusGen *status.Status
+ // rawStatusCode and rawStatusMsg are set from the raw trailer fields and are not
+ // intended for direct access outside of parsing.
+ rawStatusCode *int
+ rawStatusMsg string
+ httpStatus *int
// Server side only fields.
timeoutSet bool
timeout time.Duration
@@ -121,6 +128,7 @@ func isReservedHeader(hdr string) bool {
"grpc-message",
"grpc-status",
"grpc-timeout",
+ "grpc-status-details-bin",
"te":
return true
default:
@@ -139,12 +147,6 @@ func isWhitelistedPseudoHeader(hdr string) bool {
}
}
-func (d *decodeState) setErr(err error) {
- if d.err == nil {
- d.err = err
- }
-}
-
func validContentType(t string) bool {
e := "application/grpc"
if !strings.HasPrefix(t, e) {
@@ -158,56 +160,135 @@ func validContentType(t string) bool {
return true
}
-func (d *decodeState) processHeaderField(f hpack.HeaderField) {
+func (d *decodeState) status() *status.Status {
+ if d.statusGen == nil {
+ // No status-details were provided; generate status using code/msg.
+ d.statusGen = status.New(codes.Code(int32(*(d.rawStatusCode))), d.rawStatusMsg)
+ }
+ return d.statusGen
+}
+
+const binHdrSuffix = "-bin"
+
+func encodeBinHeader(v []byte) string {
+ return base64.RawStdEncoding.EncodeToString(v)
+}
+
+func decodeBinHeader(v string) ([]byte, error) {
+ if len(v)%4 == 0 {
+ // Input was padded, or padding was not necessary.
+ return base64.StdEncoding.DecodeString(v)
+ }
+ return base64.RawStdEncoding.DecodeString(v)
+}
+
+func encodeMetadataHeader(k, v string) string {
+ if strings.HasSuffix(k, binHdrSuffix) {
+ return encodeBinHeader(([]byte)(v))
+ }
+ return v
+}
+
+func decodeMetadataHeader(k, v string) (string, error) {
+ if strings.HasSuffix(k, binHdrSuffix) {
+ b, err := decodeBinHeader(v)
+ return string(b), err
+ }
+ return v, nil
+}
+
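
A self-contained sketch of the "-bin" convention implemented above: values go on the wire base64-encoded without padding, and the decoder accepts either padded or raw input by checking the length modulo 4:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func encodeBin(v []byte) string {
        return base64.RawStdEncoding.EncodeToString(v) // no '=' padding
    }

    func decodeBin(v string) ([]byte, error) {
        if len(v)%4 == 0 {
            // Input was padded, or padding was not necessary.
            return base64.StdEncoding.DecodeString(v)
        }
        return base64.RawStdEncoding.DecodeString(v)
    }

    func main() {
        wire := encodeBin([]byte{0xde, 0xad, 0xbe, 0xef, 0x01})
        fmt.Println(wire) // 7 chars: not a multiple of 4, so raw-decoded
        b, err := decodeBin(wire)
        fmt.Println(b, err)
    }
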
+func (d *decodeState) decodeResponseHeader(frame *http2.MetaHeadersFrame) error {
+ for _, hf := range frame.Fields {
+ if err := d.processHeaderField(hf); err != nil {
+ return err
+ }
+ }
+
+ // If grpc status exists, no need to check further.
+ if d.rawStatusCode != nil || d.statusGen != nil {
+ return nil
+ }
+
+ // If grpc status doesn't exist and http status doesn't exist,
+ // then it's a malformed header.
+ if d.httpStatus == nil {
+ return streamErrorf(codes.Internal, "malformed header: doesn't contain status(gRPC or HTTP)")
+ }
+
+ if *(d.httpStatus) != http.StatusOK {
+ code, ok := httpStatusConvTab[*(d.httpStatus)]
+ if !ok {
+ code = codes.Unknown
+ }
+ return streamErrorf(code, http.StatusText(*(d.httpStatus)))
+ }
+
+ // The gRPC status doesn't exist and the http status is OK.
+ // Set rawStatusCode to Unknown and return a nil error so that,
+ // if the stream has ended, this Unknown status will be
+ // propagated to the user.
+ // Otherwise, it will be ignored, in which case the status from
+ // a later trailer, one that has the StreamEnded flag set, is propagated.
+ code := int(codes.Unknown)
+ d.rawStatusCode = &code
+ return nil
+
+}
+
+func (d *decodeState) processHeaderField(f hpack.HeaderField) error {
switch f.Name {
case "content-type":
if !validContentType(f.Value) {
- d.setErr(streamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value))
- return
+ return streamErrorf(codes.FailedPrecondition, "transport: received the unexpected content-type %q", f.Value)
}
case "grpc-encoding":
d.encoding = f.Value
case "grpc-status":
code, err := strconv.Atoi(f.Value)
if err != nil {
- d.setErr(streamErrorf(codes.Internal, "transport: malformed grpc-status: %v", err))
- return
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status: %v", err)
}
- d.statusCode = codes.Code(code)
+ d.rawStatusCode = &code
case "grpc-message":
- d.statusDesc = decodeGrpcMessage(f.Value)
+ d.rawStatusMsg = decodeGrpcMessage(f.Value)
+ case "grpc-status-details-bin":
+ v, err := decodeBinHeader(f.Value)
+ if err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err)
+ }
+ s := &spb.Status{}
+ if err := proto.Unmarshal(v, s); err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed grpc-status-details-bin: %v", err)
+ }
+ d.statusGen = status.FromProto(s)
case "grpc-timeout":
d.timeoutSet = true
var err error
- d.timeout, err = decodeTimeout(f.Value)
- if err != nil {
- d.setErr(streamErrorf(codes.Internal, "transport: malformed time-out: %v", err))
- return
+ if d.timeout, err = decodeTimeout(f.Value); err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed time-out: %v", err)
}
case ":path":
d.method = f.Value
+ case ":status":
+ code, err := strconv.Atoi(f.Value)
+ if err != nil {
+ return streamErrorf(codes.Internal, "transport: malformed http-status: %v", err)
+ }
+ d.httpStatus = &code
default:
if !isReservedHeader(f.Name) || isWhitelistedPseudoHeader(f.Name) {
- if f.Name == "user-agent" {
- i := strings.LastIndex(f.Value, " ")
- if i == -1 {
- // There is no application user agent string being set.
- return
- }
- // Extract the application user agent string.
- f.Value = f.Value[:i]
- }
if d.mdata == nil {
d.mdata = make(map[string][]string)
}
- k, v, err := metadata.DecodeKeyValue(f.Name, f.Value)
+ v, err := decodeMetadataHeader(f.Name, f.Value)
if err != nil {
- grpclog.Printf("Failed to decode (%q, %q): %v", f.Name, f.Value, err)
- return
+ errorf("Failed to decode metadata header (%q, %q): %v", f.Name, f.Value, err)
+ return nil
}
- d.mdata[k] = append(d.mdata[k], v)
+ d.mdata[f.Name] = append(d.mdata[f.Name], v)
}
}
+ return nil
}
type timeoutUnit uint8
@@ -379,6 +460,9 @@ func newFramer(conn net.Conn) *framer {
writer: bufio.NewWriterSize(conn, http2IOBufSize),
}
f.fr = http2.NewFramer(f.writer, f.reader)
+ // Opt-in to Frame reuse API on framer to reduce garbage.
+ // Frames aren't safe to read from after a subsequent call to ReadFrame.
+ f.fr.SetReuseFrames()
f.fr.ReadMetaHeaders = hpack.NewDecoder(http2InitHeaderTableSize, nil)
return f
}
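
SetReuseFrames (added above) trades allocations for a lifetime rule: a frame's payload is valid only until the next ReadFrame call, which is exactly why handleData copies f.Data(). A sketch of the round trip, with an in-memory buffer standing in for the connection:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2"
    )

    func main() {
        var wire bytes.Buffer
        w := http2.NewFramer(&wire, nil)
        if err := w.WriteData(1, false, []byte("hello")); err != nil {
            panic(err)
        }

        r := http2.NewFramer(nil, &wire)
        r.SetReuseFrames() // opt in to frame-buffer reuse

        f, err := r.ReadFrame()
        if err != nil {
            panic(err)
        }
        if df, ok := f.(*http2.DataFrame); ok {
            data := append([]byte(nil), df.Data()...) // copy before the next ReadFrame
            fmt.Printf("%s\n", data)
        }
    }
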
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/log.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/log.go
new file mode 100644
index 00000000..ac8e358c
--- /dev/null
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/log.go
@@ -0,0 +1,50 @@
+/*
+ *
+ * Copyright 2017 gRPC authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+// This file contains wrappers for grpclog functions.
+// The transport package only logs to verbose level 2 by default.
+
+package transport
+
+import "google.golang.org/grpc/grpclog"
+
+const logLevel = 2
+
+func infof(format string, args ...interface{}) {
+ if grpclog.V(logLevel) {
+ grpclog.Infof(format, args...)
+ }
+}
+
+func warningf(format string, args ...interface{}) {
+ if grpclog.V(logLevel) {
+ grpclog.Warningf(format, args...)
+ }
+}
+
+func errorf(format string, args ...interface{}) {
+ if grpclog.V(logLevel) {
+ grpclog.Errorf(format, args...)
+ }
+}
+
+func fatalf(format string, args ...interface{}) {
+ if grpclog.V(logLevel) {
+ grpclog.Fatalf(format, args...)
+ }
+}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/pre_go16.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/pre_go16.go
deleted file mode 100644
index 33d91c17..00000000
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/pre_go16.go
+++ /dev/null
@@ -1,51 +0,0 @@
-// +build !go1.6
-
-/*
- * Copyright 2016, Google Inc.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-package transport
-
-import (
- "net"
- "time"
-
- "golang.org/x/net/context"
-)
-
-// dialContext connects to the address on the named network.
-func dialContext(ctx context.Context, network, address string) (net.Conn, error) {
- var dialer net.Dialer
- if deadline, ok := ctx.Deadline(); ok {
- dialer.Timeout = deadline.Sub(time.Now())
- }
- return dialer.Dial(network, address)
-}
diff --git a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/transport.go b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/transport.go
index d4659918..ec0fe678 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/transport.go
+++ b/vendor/github.com/hashicorp/terraform/vendor/google.golang.org/grpc/transport/transport.go
@@ -1,54 +1,39 @@
/*
*
- * Copyright 2014, Google Inc.
- * All rights reserved.
+ * Copyright 2014 gRPC authors.
*
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are
- * met:
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
*
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following disclaimer
- * in the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Google Inc. nor the names of its
- * contributors may be used to endorse or promote products derived from
- * this software without specific prior written permission.
+ * http://www.apache.org/licenses/LICENSE-2.0
*
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
*
*/
-/*
-Package transport defines and implements message oriented communication channel
-to complete various transactions (e.g., an RPC).
-*/
+// Package transport defines and implements message oriented communication
+// channel to complete various transactions (e.g., an RPC).
package transport // import "google.golang.org/grpc/transport"
import (
- "bytes"
"fmt"
"io"
"net"
"sync"
"golang.org/x/net/context"
+ "golang.org/x/net/http2"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
+ "google.golang.org/grpc/keepalive"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
+ "google.golang.org/grpc/status"
"google.golang.org/grpc/tap"
)
@@ -62,28 +47,25 @@ type recvMsg struct {
err error
}
-func (*recvMsg) item() {}
-
-// All items in an out of a recvBuffer should be the same type.
-type item interface {
- item()
-}
-
-// recvBuffer is an unbounded channel of item.
+// recvBuffer is an unbounded channel of recvMsg structs.
+// Note recvBuffer differs from controlBuffer only in that recvBuffer
+// holds a channel of recvMsg structs instead of objects implementing the "item" interface.
+// recvBuffer is written to much more often than controlBuffer, and using
+// concrete recvMsg structs helps avoid an allocation in recvBuffer.put.
type recvBuffer struct {
- c chan item
+ c chan recvMsg
mu sync.Mutex
- backlog []item
+ backlog []recvMsg
}
func newRecvBuffer() *recvBuffer {
b := &recvBuffer{
- c: make(chan item, 1),
+ c: make(chan recvMsg, 1),
}
return b
}
-func (b *recvBuffer) put(r item) {
+func (b *recvBuffer) put(r recvMsg) {
b.mu.Lock()
defer b.mu.Unlock()
if len(b.backlog) == 0 {
@@ -102,17 +84,18 @@ func (b *recvBuffer) load() {
if len(b.backlog) > 0 {
select {
case b.c <- b.backlog[0]:
+ b.backlog[0] = recvMsg{}
b.backlog = b.backlog[1:]
default:
}
}
}
-// get returns the channel that receives an item in the buffer.
+// get returns the channel that receives a recvMsg in the buffer.
//
-// Upon receipt of an item, the caller should call load to send another
-// item onto the channel if there is any.
-func (b *recvBuffer) get() <-chan item {
+// Upon receipt of a recvMsg, the caller should call load to send another
+// recvMsg onto the channel if there is any.
+func (b *recvBuffer) get() <-chan recvMsg {
return b.c
}
@@ -122,7 +105,7 @@ type recvBufferReader struct {
ctx context.Context
goAway chan struct{}
recv *recvBuffer
- last *bytes.Reader // Stores the remaining data in the previous calls.
+ last []byte // Stores the remaining data in the previous calls.
err error
}
@@ -133,27 +116,86 @@ func (r *recvBufferReader) Read(p []byte) (n int, err error) {
if r.err != nil {
return 0, r.err
}
- defer func() { r.err = err }()
- if r.last != nil && r.last.Len() > 0 {
+ n, r.err = r.read(p)
+ return n, r.err
+}
+
+func (r *recvBufferReader) read(p []byte) (n int, err error) {
+ if r.last != nil && len(r.last) > 0 {
// Read remaining data left in last call.
- return r.last.Read(p)
+ copied := copy(p, r.last)
+ r.last = r.last[copied:]
+ return copied, nil
}
select {
case <-r.ctx.Done():
return 0, ContextErr(r.ctx.Err())
case <-r.goAway:
return 0, ErrStreamDrain
- case i := <-r.recv.get():
+ case m := <-r.recv.get():
r.recv.load()
- m := i.(*recvMsg)
if m.err != nil {
return 0, m.err
}
- r.last = bytes.NewReader(m.data)
- return r.last.Read(p)
+ copied := copy(p, m.data)
+ r.last = m.data[copied:]
+ return copied, nil
+ }
+}
+
+// All items in and out of a controlBuffer should be the same type.
+type item interface {
+ item()
+}
+
+// controlBuffer is an unbounded channel of item.
+type controlBuffer struct {
+ c chan item
+ mu sync.Mutex
+ backlog []item
+}
+
+func newControlBuffer() *controlBuffer {
+ b := &controlBuffer{
+ c: make(chan item, 1),
+ }
+ return b
+}
+
+func (b *controlBuffer) put(r item) {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if len(b.backlog) == 0 {
+ select {
+ case b.c <- r:
+ return
+ default:
+ }
+ }
+ b.backlog = append(b.backlog, r)
+}
+
+func (b *controlBuffer) load() {
+ b.mu.Lock()
+ defer b.mu.Unlock()
+ if len(b.backlog) > 0 {
+ select {
+ case b.c <- b.backlog[0]:
+ b.backlog[0] = nil
+ b.backlog = b.backlog[1:]
+ default:
+ }
}
}
+// get returns the channel that receives an item in the buffer.
+//
+// Upon receipt of an item, the caller should call load to send another
+// item onto the channel if there is any.
+func (b *controlBuffer) get() <-chan item {
+ return b.c
+}
+
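
recvBuffer and controlBuffer share the same unbounded-buffer contract: put never blocks (overflow lands in the backlog slice), and the consumer calls load after each receive to promote the next backlog entry into the one-slot channel. A generic sketch of that contract, using int in place of recvMsg/item:

    package main

    import (
        "fmt"
        "sync"
    )

    type buffer struct {
        c       chan int
        mu      sync.Mutex
        backlog []int
    }

    func newBuffer() *buffer { return &buffer{c: make(chan int, 1)} }

    func (b *buffer) put(v int) {
        b.mu.Lock()
        defer b.mu.Unlock()
        if len(b.backlog) == 0 {
            select {
            case b.c <- v: // fast path: the channel slot is free
                return
            default:
            }
        }
        b.backlog = append(b.backlog, v)
    }

    func (b *buffer) load() {
        b.mu.Lock()
        defer b.mu.Unlock()
        if len(b.backlog) > 0 {
            select {
            case b.c <- b.backlog[0]:
                b.backlog = b.backlog[1:]
            default:
            }
        }
    }

    func main() {
        b := newBuffer()
        for i := 1; i <= 3; i++ {
            b.put(i) // never blocks
        }
        for i := 0; i < 3; i++ {
            v := <-b.c
            b.load() // promote the next backlog item
            fmt.Println(v)
        }
    }
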
type streamState uint8
const (
@@ -168,11 +210,6 @@ type Stream struct {
id uint32
// nil for client side Stream.
st ServerTransport
- // clientStatsCtx keeps the user context for stats handling.
- // It's only valid on client side. Server side stats context is same as s.ctx.
- // All client side stats collection should use the clientStatsCtx (instead of the stream context)
- // so that all the generated stats for a particular RPC can be associated in the processing phase.
- clientStatsCtx context.Context
// ctx is the associated context of the stream.
ctx context.Context
// cancel is always nil for client side Stream.
@@ -186,14 +223,17 @@ type Stream struct {
recvCompress string
sendCompress string
buf *recvBuffer
- dec io.Reader
+ trReader io.Reader
fc *inFlow
recvQuota uint32
+
+ // TODO: Remove this unused variable.
// The accumulated inbound quota pending for window update.
updateQuota uint32
- // The handler to control the window update procedure for both this
- // particular stream and the associated transport.
- windowHandler func(int)
+
+ // Callback to state the application's intention to read data. This
+ // is used to adjust flow control, if need be.
+ requestRead func(int)
sendQuotaPool *quotaPool
// Close headerChan to indicate the end of reception of header metadata.
@@ -210,9 +250,17 @@ type Stream struct {
// true iff headerChan is closed. Used to avoid closing headerChan
// multiple times.
headerDone bool
- // the status received from the server.
- statusCode codes.Code
- statusDesc string
+ // the status error received from the server.
+ status *status.Status
+ // rstStream indicates whether a RST_STREAM frame needs to be sent
+ // to the server to signify that this stream is closing.
+ rstStream bool
+ // rstError is the error that needs to be sent along with the RST_STREAM frame.
+ rstError http2.ErrCode
+ // bytesSent and bytesReceived indicates whether any bytes have been sent or
+ // received on this stream.
+ bytesSent bool
+ bytesReceived bool
}
// RecvCompress returns the compression algorithm applied to the inbound
@@ -240,16 +288,24 @@ func (s *Stream) GoAway() <-chan struct{} {
// Header acquires the key-value pairs of header metadata once it
// is available. It blocks until i) the metadata is ready or ii) there is no
-// header metadata or iii) the stream is cancelled/expired.
+// header metadata or iii) the stream is canceled/expired.
func (s *Stream) Header() (metadata.MD, error) {
+ var err error
select {
case <-s.ctx.Done():
- return nil, ContextErr(s.ctx.Err())
+ err = ContextErr(s.ctx.Err())
case <-s.goAway:
- return nil, ErrStreamDrain
+ err = ErrStreamDrain
+ case <-s.headerChan:
+ return s.header.Copy(), nil
+ }
+ // Even if the stream is closed, header is returned if available.
+ select {
case <-s.headerChan:
return s.header.Copy(), nil
+ default:
}
+ return nil, err
}
// Trailer returns the cached trailer metadata. Note that if it is not called
@@ -277,14 +333,9 @@ func (s *Stream) Method() string {
return s.method
}
-// StatusCode returns statusCode received from the server.
-func (s *Stream) StatusCode() codes.Code {
- return s.statusCode
-}
-
-// StatusDesc returns statusDesc received from the server.
-func (s *Stream) StatusDesc() string {
- return s.statusDesc
+// Status returns the status received from the server.
+func (s *Stream) Status() *status.Status {
+ return s.status
}
// SetHeader sets the header metadata. This can be called multiple times.
@@ -315,22 +366,69 @@ func (s *Stream) SetTrailer(md metadata.MD) error {
}
func (s *Stream) write(m recvMsg) {
- s.buf.put(&m)
+ s.buf.put(m)
}
-// Read reads all the data available for this Stream from the transport and
+// Read reads all p bytes from the wire for this stream.
+func (s *Stream) Read(p []byte) (n int, err error) {
+ // Don't request a read if there was an error earlier
+ if er := s.trReader.(*transportReader).er; er != nil {
+ return 0, er
+ }
+ s.requestRead(len(p))
+ return io.ReadFull(s.trReader, p)
+}
+
+// transportReader reads all the data available for this Stream from the transport and
// passes them into the decoder, which converts them into a gRPC message stream.
// The error is io.EOF when the stream is done or another non-nil error if
// the stream broke.
-func (s *Stream) Read(p []byte) (n int, err error) {
- n, err = s.dec.Read(p)
+type transportReader struct {
+ reader io.Reader
+ // The handler to control the window update procedure for both this
+ // particular stream and the associated transport.
+ windowHandler func(int)
+ er error
+}
+
+func (t *transportReader) Read(p []byte) (n int, err error) {
+ n, err = t.reader.Read(p)
if err != nil {
+ t.er = err
return
}
- s.windowHandler(n)
+ t.windowHandler(n)
return
}
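
transportReader is a plain io.Reader decorator. The same shape works for any reader that must report consumed bytes to a flow-control callback; this standalone version uses invented names (accountingReader, onRead):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    type accountingReader struct {
        r      io.Reader
        onRead func(n int) // e.g. schedules WINDOW_UPDATE frames
    }

    func (a *accountingReader) Read(p []byte) (int, error) {
        n, err := a.r.Read(p)
        if err == nil {
            a.onRead(n)
        }
        return n, err
    }

    func main() {
        total := 0
        r := &accountingReader{
            r:      strings.NewReader("flow-controlled payload"),
            onRead: func(n int) { total += n },
        }
        b, _ := io.ReadAll(r)
        fmt.Println(len(b), total)
    }
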
+// finish sets the stream's state and status, and closes the done channel.
+// s.mu must be held by the caller. st must always be non-nil.
+func (s *Stream) finish(st *status.Status) {
+ s.status = st
+ s.state = streamDone
+ close(s.done)
+}
+
+// BytesSent indicates whether any bytes have been sent on this stream.
+func (s *Stream) BytesSent() bool {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ return s.bytesSent
+}
+
+// BytesReceived indicates whether any bytes have been received on this stream.
+func (s *Stream) BytesReceived() bool {
+ s.mu.Lock()
+ defer s.mu.Unlock()
+ return s.bytesReceived
+}
+
+// GoString is implemented by Stream so context.String() won't
+// race when printing %#v.
+func (s *Stream) GoString() string {
+ return fmt.Sprintf("<stream: %p, %v>", s, s.method)
+}
+
// The key to save transport.Stream in the context.
type streamKey struct{}
@@ -358,10 +456,14 @@ const (
// ServerConfig consists of all the configurations to establish a server transport.
type ServerConfig struct {
- MaxStreams uint32
- AuthInfo credentials.AuthInfo
- InTapHandle tap.ServerInHandle
- StatsHandler stats.Handler
+ MaxStreams uint32
+ AuthInfo credentials.AuthInfo
+ InTapHandle tap.ServerInHandle
+ StatsHandler stats.Handler
+ KeepaliveParams keepalive.ServerParameters
+ KeepalivePolicy keepalive.EnforcementPolicy
+ InitialWindowSize int32
+ InitialConnWindowSize int32
}
// NewServerTransport creates a ServerTransport with conn or non-nil error
@@ -374,6 +476,9 @@ func NewServerTransport(protocol string, conn net.Conn, config *ServerConfig) (S
type ConnectOptions struct {
// UserAgent is the application user agent.
UserAgent string
+ // Authority is the :authority pseudo-header to use. This field has no effect if
+ // TransportCredentials is set.
+ Authority string
// Dialer specifies how to dial a network address.
Dialer func(context.Context, string) (net.Conn, error)
// FailOnNonTempDialError specifies if gRPC fails on non-temporary dial errors.
@@ -382,8 +487,14 @@ type ConnectOptions struct {
PerRPCCredentials []credentials.PerRPCCredentials
// TransportCredentials stores the Authenticator required to setup a client connection.
TransportCredentials credentials.TransportCredentials
+ // KeepaliveParams stores the keepalive parameters.
+ KeepaliveParams keepalive.ClientParameters
// StatsHandler stores the handler for stats.
StatsHandler stats.Handler
+ // InitialWindowSize sets the initial window size for a stream.
+ InitialWindowSize int32
+ // InitialConnWindowSize sets the initial window size for a connection.
+ InitialConnWindowSize int32
}
// TargetInfo contains the information of the target such as network address and metadata.
@@ -427,10 +538,15 @@ type CallHdr struct {
// outbound message.
SendCompress string
+ // Creds specifies credentials.PerRPCCredentials for a call.
+ Creds credentials.PerRPCCredentials
+
// Flush indicates whether a new stream command should be sent
// to the peer without waiting for the first data. This is
- // only a hint. The transport may modify the flush decision
+ // only a hint.
+ // If it's true, the transport may modify the flush decision
// for performance purposes.
+ // If it's false, the new stream will never be flushed.
Flush bool
}
@@ -466,10 +582,13 @@ type ClientTransport interface {
// once the transport is initiated.
Error() <-chan struct{}
- // GoAway returns a channel that is closed when ClientTranspor
+ // GoAway returns a channel that is closed when ClientTransport
// receives the draining signal from the server (e.g., GOAWAY frame in
// HTTP/2).
GoAway() <-chan struct{}
+
+ // GetGoAwayReason returns the reason why GoAway frame was received.
+ GetGoAwayReason() GoAwayReason
}
// ServerTransport is the common interface for all gRPC server-side transport
@@ -489,10 +608,9 @@ type ServerTransport interface {
// Write may not be called on all streams.
Write(s *Stream, data []byte, opts *Options) error
- // WriteStatus sends the status of a stream to the client.
- // WriteStatus is the final call made on a stream and always
- // occurs.
- WriteStatus(s *Stream, statusCode codes.Code, statusDesc string) error
+ // WriteStatus sends the status of a stream to the client. WriteStatus is
+ // the final call made on a stream and always occurs.
+ WriteStatus(s *Stream, st *status.Status) error
// Close tears down the transport. Once it is called, the transport
// should not be accessed any more. All the pending streams and their
@@ -558,6 +676,8 @@ var (
ErrStreamDrain = streamErrorf(codes.Unavailable, "the server stops accepting new RPCs")
)
+// TODO: See if we can replace StreamError with status package errors.
+
// StreamError is an error that only affects one stream within a connection.
type StreamError struct {
Code codes.Code
@@ -565,18 +685,7 @@ type StreamError struct {
}
func (e StreamError) Error() string {
- return fmt.Sprintf("stream error: code = %d desc = %q", e.Code, e.Desc)
-}
-
-// ContextErr converts the error from context package into a StreamError.
-func ContextErr(err error) StreamError {
- switch err {
- case context.DeadlineExceeded:
- return streamErrorf(codes.DeadlineExceeded, "%v", err)
- case context.Canceled:
- return streamErrorf(codes.Canceled, "%v", err)
- }
- panic(fmt.Sprintf("Unexpected error from context packet: %v", err))
+ return fmt.Sprintf("stream error: code = %s desc = %q", e.Code, e.Desc)
}
// wait blocks until it can receive from ctx.Done, closing, or proceed.
@@ -606,3 +715,16 @@ func wait(ctx context.Context, done, goAway, closing <-chan struct{}, proceed <-
return i, nil
}
}
+
+// GoAwayReason contains the reason for the GoAway frame received.
+type GoAwayReason uint8
+
+const (
+ // Invalid indicates that no GoAway frame is received.
+ Invalid GoAwayReason = 0
+ // NoReason is the default value when GoAway frame is received.
+ NoReason GoAwayReason = 1
+ // TooManyPings indicates that a GoAway frame with ErrCodeEnhanceYourCalm
+ // was recieved and that the debug data said "too_many_pings".
+ TooManyPings GoAwayReason = 2
+)
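
A sketch of how a caller might branch on GetGoAwayReason; the constants are redeclared so the example compiles on its own, and the reaction strings are illustrative, not part of the API:

    package main

    import "fmt"

    type GoAwayReason uint8

    const (
        Invalid      GoAwayReason = 0
        NoReason     GoAwayReason = 1
        TooManyPings GoAwayReason = 2
    )

    func onGoAway(reason GoAwayReason) string {
        switch reason {
        case TooManyPings:
            // The server enforced its keepalive policy: back off the
            // client's keepalive interval before reconnecting.
            return "increase keepalive interval, then reconnect"
        default:
            return "reconnect normally"
        }
    }

    func main() {
        fmt.Println(onGoAway(TooManyPings))
    }
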
diff --git a/vendor/github.com/hashicorp/terraform/vendor/vendor.json b/vendor/github.com/hashicorp/terraform/vendor/vendor.json
index ab8d5658..4c8d9125 100644
--- a/vendor/github.com/hashicorp/terraform/vendor/vendor.json
+++ b/vendor/github.com/hashicorp/terraform/vendor/vendor.json
@@ -106,12 +106,6 @@
"revisionTime": "2015-08-27T00:49:46Z"
},
{
- "checksumSHA1": "YfhpW3cu1CHWX7lUCRparOJ6Vy4=",
- "path": "github.com/armon/go-metrics",
- "revision": "93f237eba9b0602f3e73710416558854a81d9337",
- "revisionTime": "2017-01-14T13:47:37Z"
- },
- {
"checksumSHA1": "gNO0JNpLzYOdInGeq7HqMZUzx9M=",
"path": "github.com/armon/go-radix",
"revision": "4239b77079c7b5d1243b7b4736304ce8ddb6f0f2",
@@ -851,6 +845,30 @@
"revisionTime": "2016-11-17T03:31:26Z"
},
{
+ "checksumSHA1": "5UJZd7Zyo40vk1OjMTy6LWjTcss=",
+ "path": "github.com/golang/protobuf/ptypes",
+ "revision": "1909bc2f63dc92bb931deace8b8312c4db72d12f",
+ "revisionTime": "2017-08-08T02:16:21Z"
+ },
+ {
+ "checksumSHA1": "Z4RIWIXH05QItZqVbmbONO9mWig=",
+ "path": "github.com/golang/protobuf/ptypes/any",
+ "revision": "1909bc2f63dc92bb931deace8b8312c4db72d12f",
+ "revisionTime": "2017-08-08T02:16:21Z"
+ },
+ {
+ "checksumSHA1": "Lx2JRhnmO66Lhj6p7UXnsPb+IQs=",
+ "path": "github.com/golang/protobuf/ptypes/duration",
+ "revision": "1909bc2f63dc92bb931deace8b8312c4db72d12f",
+ "revisionTime": "2017-08-08T02:16:21Z"
+ },
+ {
+ "checksumSHA1": "+nsb2jDuP/5l2DO78dtU/jYB3G8=",
+ "path": "github.com/golang/protobuf/ptypes/timestamp",
+ "revision": "1909bc2f63dc92bb931deace8b8312c4db72d12f",
+ "revisionTime": "2017-08-08T02:16:21Z"
+ },
+ {
"checksumSHA1": "V/53BpqgOkSDZCX6snQCAkdO2fM=",
"path": "github.com/googleapis/gax-go",
"revision": "da06d194a00e19ce00d9011a13931c3f6f6887c7",
@@ -1177,35 +1195,29 @@
"revisionTime": "2016-11-07T20:49:10Z"
},
{
- "checksumSHA1": "jfELEMRhiTcppZmRH+ZwtkVS5Uw=",
- "path": "github.com/hashicorp/consul/acl",
- "revision": "144a5e5340893a5e726e831c648f26dc19fef1e7",
- "revisionTime": "2017-03-10T23:35:18Z"
- },
- {
- "checksumSHA1": "ygEjA1d52B1RDmZu8+1WTwkrYDQ=",
+ "checksumSHA1": "IYuLg7xUzsf/P9rMpdEh1n9rbIY=",
"comment": "v0.6.3-28-g3215b87",
"path": "github.com/hashicorp/consul/api",
- "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2",
- "revisionTime": "2017-02-07T15:38:46Z"
+ "revision": "b79d951ced8c5f18fe73d35b2806f3435e40cd64",
+ "revisionTime": "2017-07-20T03:19:26Z",
+ "version": "v0.9.0",
+ "versionExact": "v0.9.0"
},
{
- "checksumSHA1": "nomqbPd9j3XelMMcv7+vTEPsdr4=",
- "path": "github.com/hashicorp/consul/consul/structs",
- "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2",
- "revisionTime": "2017-02-07T15:38:46Z"
- },
- {
- "checksumSHA1": "dgYoWTG7nIL9CUBuktDvMZqYDR8=",
+ "checksumSHA1": "++0PVBxbpylmllyCxSa7cdc6dDc=",
"path": "github.com/hashicorp/consul/testutil",
- "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2",
- "revisionTime": "2017-02-07T15:38:46Z"
+ "revision": "b79d951ced8c5f18fe73d35b2806f3435e40cd64",
+ "revisionTime": "2017-07-20T03:19:26Z",
+ "version": "v0.9.0",
+ "versionExact": "v0.9.0"
},
{
- "checksumSHA1": "ZPDLNuKJGZJFV9HlJ/V0O4/c/Ko=",
- "path": "github.com/hashicorp/consul/types",
- "revision": "48d7b069ad443a48ffa93364048ff8909b5d1fa2",
- "revisionTime": "2017-02-07T15:38:46Z"
+ "checksumSHA1": "J8TTDc84MvAyXE/FrfgS+xc/b6s=",
+ "path": "github.com/hashicorp/consul/testutil/retry",
+ "revision": "b79d951ced8c5f18fe73d35b2806f3435e40cd64",
+ "revisionTime": "2017-07-20T03:19:26Z",
+ "version": "v0.9.0",
+ "versionExact": "v0.9.0"
},
{
"checksumSHA1": "cdOCt0Yb+hdErz8NAQqayxPmRsY=",
@@ -1236,10 +1248,10 @@
"revisionTime": "2017-02-07T21:55:32Z"
},
{
- "checksumSHA1": "TNlVzNR1OaajcNi3CbQ3bGbaLGU=",
- "path": "github.com/hashicorp/go-msgpack/codec",
- "revision": "fa3f63826f7c23912c15263591e65d54d080b458",
- "revisionTime": "2015-05-18T23:42:57Z"
+ "checksumSHA1": "miVF4/7JP0lRwZvFJGKwZWk7aAQ=",
+ "path": "github.com/hashicorp/go-hclog",
+ "revision": "b4e5765d1e5f00a0550911084f45f8214b5b83b9",
+ "revisionTime": "2017-07-16T17:45:23Z"
},
{
"checksumSHA1": "lrSl49G23l6NhfilxPM0XFs5rZo=",
@@ -1247,10 +1259,10 @@
"revision": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5"
},
{
- "checksumSHA1": "b0nQutPMJHeUmz4SjpreotAo6Yk=",
+ "checksumSHA1": "R6me0jVmcT/OPo80Fe0qo5fRwHc=",
"path": "github.com/hashicorp/go-plugin",
- "revision": "f72692aebca2008343a9deb06ddb4b17f7051c15",
- "revisionTime": "2017-02-17T16:27:05Z"
+ "revision": "a5174f84d7f8ff00fb07ab4ef1f380d32eee0e63",
+ "revisionTime": "2017-08-16T15:18:19Z"
},
{
"checksumSHA1": "ErJHGU6AVPZM9yoY/xV11TwSjQs=",
@@ -1276,18 +1288,6 @@
"revisionTime": "2016-10-31T18:26:05Z"
},
{
- "checksumSHA1": "d9PxF1XQGLMJZRct2R8qVM/eYlE=",
- "path": "github.com/hashicorp/golang-lru",
- "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6",
- "revisionTime": "2016-08-13T22:13:03Z"
- },
- {
- "checksumSHA1": "9hffs0bAIU6CquiRhKQdzjHnKt0=",
- "path": "github.com/hashicorp/golang-lru/simplelru",
- "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6",
- "revisionTime": "2016-08-13T22:13:03Z"
- },
- {
"checksumSHA1": "o3XZZdOnSnwQSpYw215QV75ZDeI=",
"path": "github.com/hashicorp/hcl",
"revision": "a4b07c25de5ff55ad3b8936cea69a79a3d95a855",
@@ -1384,12 +1384,6 @@
"revisionTime": "2015-06-09T07:04:31Z"
},
{
- "checksumSHA1": "wpirHJV/6VEbbD+HyAP2/6Xc0ek=",
- "path": "github.com/hashicorp/raft",
- "revision": "aaad9f10266e089bd401e7a6487651a69275641b",
- "revisionTime": "2016-11-10T00:52:40Z"
- },
- {
"checksumSHA1": "o8In5byYGDCY/mnTuV4Tfmci+3w=",
"comment": "v0.7.0-12-ge4ec8cc",
"path": "github.com/hashicorp/serf/coordinate",
@@ -1586,10 +1580,10 @@
"revisionTime": "2017-01-23T01:43:24Z"
},
{
- "checksumSHA1": "7niW29CvYceZ6zbia6b/LT+yD/M=",
+ "checksumSHA1": "KXrCoifaKi3Wy4zbCfXTtM/FO48=",
"path": "github.com/mitchellh/cli",
- "revision": "fcf521421aa29bde1d93b6920dfce826d7932208",
- "revisionTime": "2016-08-15T18:46:15Z"
+ "revision": "b633c78680fa6fb27ac81694f38c28f79602ebd9",
+ "revisionTime": "2017-08-14T15:07:37Z"
},
{
"checksumSHA1": "ttEN1Aupb7xpPMkQLqb3tzLFdXs=",
@@ -1615,6 +1609,12 @@
"revision": "07bab5fdd9580500aea6ada0e09df4aa28e68abd"
},
{
+ "checksumSHA1": "6TBW88DSxRHf4WvOC9K5ilBZx/8=",
+ "path": "github.com/mitchellh/go-testing-interface",
+ "revision": "9a441910b16872f7b8283682619b3761a9aa2222",
+ "revisionTime": "2017-07-30T05:09:07Z"
+ },
+ {
"checksumSHA1": "xyoJKalfQwTUN1qzZGQKWYAwl0A=",
"path": "github.com/mitchellh/hashstructure",
"revision": "6b17d669fac5e2f71c16658d781ec3fdd3802b69"
@@ -1660,6 +1660,36 @@
"revision": "3d184cea22ee1c41ec1697e0d830ff0c78f7ea97"
},
{
+ "checksumSHA1": "rJab1YdNhQooDiBWNnt7TLWPyBU=",
+ "path": "github.com/pkg/errors",
+ "revision": "c605e284fe17294bda444b34710735b29d1a9d90",
+ "revisionTime": "2017-05-05T04:36:39Z"
+ },
+ {
+ "checksumSHA1": "6OEUkwOM0qgI6YxR+BDEn6YMvpU=",
+ "path": "github.com/posener/complete",
+ "revision": "f4461a52b6329c11190f11fe3384ec8aa964e21c",
+ "revisionTime": "2017-07-30T19:30:24Z"
+ },
+ {
+ "checksumSHA1": "NB7uVS0/BJDmNu68vPAlbrq4TME=",
+ "path": "github.com/posener/complete/cmd",
+ "revision": "f4461a52b6329c11190f11fe3384ec8aa964e21c",
+ "revisionTime": "2017-07-30T19:30:24Z"
+ },
+ {
+ "checksumSHA1": "kuS9vs+TMQzTGzXteL6EZ5HuKrU=",
+ "path": "github.com/posener/complete/cmd/install",
+ "revision": "f4461a52b6329c11190f11fe3384ec8aa964e21c",
+ "revisionTime": "2017-07-30T19:30:24Z"
+ },
+ {
+ "checksumSHA1": "DMo94FwJAm9ZCYCiYdJU2+bh4no=",
+ "path": "github.com/posener/complete/match",
+ "revision": "f4461a52b6329c11190f11fe3384ec8aa964e21c",
+ "revisionTime": "2017-07-30T19:30:24Z"
+ },
+ {
"checksumSHA1": "Yqr9OQ7AuIB1N0HMbSxqEoVQG+k=",
"comment": "v2.0.1-8-g983d3a5",
"path": "github.com/ryanuber/columnize",
@@ -1811,16 +1841,16 @@
"revisionTime": "2017-06-03T08:13:02Z"
},
{
- "checksumSHA1": "N1akwAdrHVfPPrsFOhG2ouP21VA=",
+ "checksumSHA1": "CLeUeDDAFQGbUNyRIryrVXqkWf0=",
"path": "golang.org/x/net/http2",
- "revision": "f2499483f923065a842d38eb4c7f1927e6fc6e6d",
- "revisionTime": "2017-01-14T04:22:49Z"
+ "revision": "1c05540f6879653db88113bc4a2b70aec4bd491f",
+ "revisionTime": "2017-08-04T00:04:37Z"
},
{
- "checksumSHA1": "HzuGD7AwgC0p1az1WAQnEFnEk98=",
+ "checksumSHA1": "ezWhc7n/FtqkLDQKeU2JbW+80tE=",
"path": "golang.org/x/net/http2/hpack",
- "revision": "f2499483f923065a842d38eb4c7f1927e6fc6e6d",
- "revisionTime": "2017-01-14T04:22:49Z"
+ "revision": "1c05540f6879653db88113bc4a2b70aec4bd491f",
+ "revisionTime": "2017-08-04T00:04:37Z"
},
{
"checksumSHA1": "GIGmSrYACByf5JDIP9ByBZksY80=",
@@ -1948,70 +1978,112 @@
"revision": "b667a5000b082e49c6c6d16867d376a12e9490cd"
},
{
- "checksumSHA1": "epHwh7hDQSYzDowPIbw8vnLzPS0=",
+ "checksumSHA1": "AvVpgwhxhJgjoSledwDtYrEKVE4=",
+ "path": "google.golang.org/genproto/googleapis/rpc/status",
+ "revision": "09f6ed296fc66555a25fe4ce95173148778dfa85",
+ "revisionTime": "2017-07-31T18:20:57Z"
+ },
+ {
+ "checksumSHA1": "nwfmMh930HtXA7u5HYomxSR3Ixg=",
"path": "google.golang.org/grpc",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "08icuA15HRkdYCt6H+Cs90RPQsY=",
+ "checksumSHA1": "/eTpFgjvMq5Bc9hYnw5fzKG4B6I=",
"path": "google.golang.org/grpc/codes",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "AGkvu7gY1jWK7v5s9a8qLlH2gcQ=",
+ "checksumSHA1": "XH2WYcDNwVO47zYShREJjcYXm0Y=",
+ "path": "google.golang.org/grpc/connectivity",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "5ylThBvJnIcyWhL17AC9+Sdbw2E=",
"path": "google.golang.org/grpc/credentials",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "2NbY9kmMweE4VUsruRsvmViVnNg=",
+ "path": "google.golang.org/grpc/grpclb/grpc_lb_v1",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "3Lt5hNAG8qJAYSsNghR5uA1zQns=",
+ "checksumSHA1": "ntHev01vgZgeIh5VFRmbLx/BSTo=",
"path": "google.golang.org/grpc/grpclog",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "pc9cweMiKQ5hVMuO9UoMGdbizaY=",
+ "path": "google.golang.org/grpc/health",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "T3Q0p8kzvXFnRkMaK/G8mCv6mc0=",
+ "checksumSHA1": "W5KfI1NIGJt7JaVnLzefDZr3+4s=",
+ "path": "google.golang.org/grpc/health/grpc_health_v1",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "U9vDe05/tQrvFBojOQX8Xk12W9I=",
"path": "google.golang.org/grpc/internal",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "hcuHgKp8W0wIzoCnNfKI8NUss5o=",
+ "path": "google.golang.org/grpc/keepalive",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "XXpD8+S3gLrfmCLOf+RbxblOQkU=",
+ "checksumSHA1": "N++Ur11m6Dq3j14/Hc2Kqmxroag=",
"path": "google.golang.org/grpc/metadata",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "4GSUFhOQ0kdFlBH4D5OTeKy78z0=",
+ "checksumSHA1": "bYKw8OIjj/ybY68eGqy7zqq6qmE=",
"path": "google.golang.org/grpc/naming",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "3RRoLeH6X2//7tVClOVzxW2bY+E=",
+ "checksumSHA1": "n5EgDdBqFMa2KQFhtl+FF/4gIFo=",
"path": "google.golang.org/grpc/peer",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "wzkOAxlah+y75EpH0QVgzb8hdfc=",
+ "checksumSHA1": "53Mbn2VqooOk47EWLHHFpKEOVwE=",
"path": "google.golang.org/grpc/stats",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
+ },
+ {
+ "checksumSHA1": "3Dwz4RLstDHMPyDA7BUsYe+JP4w=",
+ "path": "google.golang.org/grpc/status",
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "N0TftT6/CyWqp6VRi2DqDx60+Fo=",
+ "checksumSHA1": "aixGx/Kd0cj9ZlZHacpHe3XgMQ4=",
"path": "google.golang.org/grpc/tap",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
- "checksumSHA1": "yHpUeGwKoqqwd3cbEp3lkcnvft0=",
+ "checksumSHA1": "S0qdJtlMimKlOrJ4aZ/pxO5uVwg=",
"path": "google.golang.org/grpc/transport",
- "revision": "50955793b0183f9de69bd78e2ec251cf20aab121",
- "revisionTime": "2017-01-11T19:10:52Z"
+ "revision": "7657092a1303cc5a6fa3fee988d57c665683a4da",
+ "revisionTime": "2017-08-09T21:16:03Z"
},
{
"checksumSHA1": "fALlQNY1fM99NesfLJ50KguWsio=",
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/backends/types/azure.html.md b/vendor/github.com/hashicorp/terraform/website/docs/backends/types/azure.html.md
index 972a6dfb..043bcb15 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/backends/types/azure.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/docs/backends/types/azure.html.md
@@ -49,6 +49,7 @@ The following configuration options are supported:
* `key` - (Required) The key where to place/look for state file inside the container
* `access_key` / `ARM_ACCESS_KEY` - (Required) Storage account access key
* `lease_id` / `ARM_LEASE_ID` - (Optional) If set, will be used when writing to storage blob.
+ * `resource_group_name` - (Optional) The name of the resource group for the storage account. Required if `access_key` isn't specified.
* `environment` / `ARM_ENVIRONMENT` - (Optional) The cloud environment to use. Supported values are:
* `public` (default)
* `usgovernment`
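
For illustration, a minimal sketch of supplying the new `resource_group_name` option through partial backend configuration at init time; the account, container, key, and group names here are hypothetical:

```
$ terraform init \
    -backend-config="storage_account_name=tfstateaccount" \
    -backend-config="container_name=tfstate" \
    -backend-config="key=prod.terraform.tfstate" \
    -backend-config="resource_group_name=tfstate-rg"
```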
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/commands/import.html.md b/vendor/github.com/hashicorp/terraform/website/docs/commands/import.html.md
index 4862f87e..2662b72e 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/commands/import.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/docs/commands/import.html.md
@@ -53,12 +53,12 @@ The command-line flags are all optional. The list of available flags are:
provider based on the prefix of the resource being imported. You usually
don't need to specify this.
-* `-state=path` - The path to read and save state files (unless state-out is
- specified). Ignored when [remote state](/docs/state/remote.html) is used.
+* `-state=path` - Path to the source state file to read from. Defaults to the
+ configured backend, or "terraform.tfstate".
-* `-state-out=path` - Path to write the final state file. By default, this is
- the state path. Ignored when [remote state](/docs/state/remote.html) is
- used.
+* `-state-out=path` - Path to the destination state file to write to. If this
+ isn't specified the source state file will be used. This can be a new or
+ existing path.
* `-var 'foo=bar'` - Set a variable in the Terraform configuration. This flag
can be set multiple times. Variable values are interpreted as
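
A sketch of the revised `-state`/`-state-out` semantics described above: read from one state file and write the imported resource to another. The resource address and instance ID are hypothetical:

```
$ terraform import \
    -state=terraform.tfstate \
    -state-out=imported.tfstate \
    aws_instance.web i-0123456789abcdef0
```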
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/commands/init.html.markdown b/vendor/github.com/hashicorp/terraform/website/docs/commands/init.html.markdown
index 851aa8e3..8c50b28e 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/commands/init.html.markdown
+++ b/vendor/github.com/hashicorp/terraform/website/docs/commands/init.html.markdown
@@ -56,7 +56,7 @@ By default, `terraform init` assumes that the working directory already
contains a configuration and will attempt to initialize that configuration.
Optionally, init can be run against an empty directory with the
-`-with-module=MODULE-SOURCE` option, in which case the given module will be
+`-from-module=MODULE-SOURCE` option, in which case the given module will be
copied into the target directory before any other initialization steps are
run.
@@ -130,11 +130,17 @@ versions that comply with the version constraints given in configuration.
To skip plugin installation, use `-get-plugins=false`.
The automatic plugin installation behavior can be overridden by extracting
-the desired providers into a local directory and using the additonal option
+the desired providers into a local directory and using the additional option
`-plugin-dir=PATH`. When this option is specified, _only_ the given directory
is consulted, which prevents Terraform from making requests to the plugin
repository or looking for plugins in other local directories.
+Custom plugins can be used along with automatically installed plugins by
+placing them in `terraform.d/plugins/OS_ARCH/` inside the directory being
+initialized. Plugins found here will take precedence if they meet the required
+constraints in the configuration. The `init` command will continue to
+automatically download other plugins as needed.
+
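A sketch of the directory layout described above, assuming a hypothetical `terraform-provider-example` plugin built for 64-bit Linux; `init` should prefer it over a downloaded copy if it satisfies the configuration's constraints:

```
$ mkdir -p terraform.d/plugins/linux_amd64
$ cp ~/build/terraform-provider-example_v0.1.0 terraform.d/plugins/linux_amd64/
$ terraform init
```
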
When plugins are automatically downloaded and installed, by default the
contents are verified against an official HashiCorp release signature to
ensure that they were not corrupted or tampered with during download. It is
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/commands/state/mv.html.md b/vendor/github.com/hashicorp/terraform/website/docs/commands/state/mv.html.md
index ee58bcf9..abccef9b 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/commands/state/mv.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/docs/commands/state/mv.html.md
@@ -40,19 +40,22 @@ in [resource addressing format](/docs/commands/state/addressing.html).
The command-line flags are all optional. The list of available flags is:
-* `-backup=path` - Path to a backup file Defaults to the state path plus
- a timestamp with the ".backup" extension.
-
-* `-backup-out=path` - Path to the backup file for the output state.
- This is only necessary if `-state-out` is specified.
-
-* `-state=path` - Path to the state file. Defaults to "terraform.tfstate".
- Ignored when [remote state](/docs/state/remote.html) is used.
-
-* `-state-out=path` - Path to the state file to write to. If this isn't specified
- the state specified by `-state` will be used. This can be
- a new or existing path. Ignored when
- [remote state](/docs/state/remote.html) is used.
+* `-backup=path` - Path where Terraform should write the backup for the
+ original state. This can't be disabled. If not set, Terraform will write it
+ to the same path as the statefile with a ".backup" extension.
+
+* `-backup-out=path` - Path where Terraform should write the backup for the
+ destination state. This can't be disabled. If not set, Terraform will write
+ it to the same path as the destination state file with a backup extension.
+ This only needs to be specified if -state-out is set to a different path than
+ -state.
+
+* `-state=path` - Path to the source state file to read from. Defaults to the
+ configured backend, or "terraform.tfstate".
+
+* `-state-out=path` - Path to the destination state file to write to. If this
+ isn't specified the source state file will be used. This can be a new or
+ existing path.
## Example: Rename a Resource
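
Under the revised flags, a minimal rename sketch, plus a variant that moves the resource into a separate destination state; the addresses and paths are hypothetical:

```
$ terraform state mv aws_instance.web aws_instance.frontend

$ terraform state mv \
    -state-out=target.tfstate \
    -backup-out=target.tfstate.backup \
    aws_instance.web aws_instance.web
```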
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/commands/state/rm.html.md b/vendor/github.com/hashicorp/terraform/website/docs/commands/state/rm.html.md
index a8ddb9a1..f9031239 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/commands/state/rm.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/docs/commands/state/rm.html.md
@@ -17,7 +17,7 @@ and more.
Usage: `terraform state rm [options] ADDRESS...`
-The command will remove all the items matched by the addresses given.
+Remove one or more items from the Terraform state.
Items removed from the Terraform state are _not physically destroyed_.
Items removed from the Terraform state are only no longer managed by
@@ -43,10 +43,13 @@ in [resource addressing format](/docs/commands/state/addressing.html).
The command-line flags are all optional. The list of available flags is:
-* `-backup=path` - Path to a backup file Defaults to the state path plus
- a timestamp with the ".backup" extension.
+* `-backup=path` - Path where Terraform should write the backup state. This
+ can't be disabled. If not set, Terraform will write it to the same path as
+ the statefile with a backup extension.
-* `-state=path` - Path to the state file. Defaults to "terraform.tfstate".
+* `-state=path` - Path to a Terraform state file to use to look up
+ Terraform-managed resources. By default it will use the configured backend,
+ or the default "terraform.tfstate" if it exists.
## Example: Remove a Resource
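
A sketch of the flags above, removing a hypothetical resource from an explicit state file with an explicit backup path:

```
$ terraform state rm \
    -state=terraform.tfstate \
    -backup=terraform.tfstate.backup \
    aws_instance.web
```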
diff --git a/vendor/github.com/hashicorp/terraform/website/docs/plugins/provider.html.md b/vendor/github.com/hashicorp/terraform/website/docs/plugins/provider.html.md
index fe1dad59..5921e341 100644
--- a/vendor/github.com/hashicorp/terraform/website/docs/plugins/provider.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/docs/plugins/provider.html.md
@@ -240,7 +240,7 @@ which cover all available settings.
We recommend viewing schemas of existing or similar providers to learn
best practices. A good starting place is the
-[core Terraform providers](https://github.com/hashicorp/terraform/tree/master/builtin/providers).
+[core Terraform providers](https://github.com/terraform-providers).
## Resource Data
diff --git a/vendor/github.com/hashicorp/terraform/website/guides/running-terraform-in-automation.html.md b/vendor/github.com/hashicorp/terraform/website/guides/running-terraform-in-automation.html.md
index f56e6ff6..8e75cd50 100644
--- a/vendor/github.com/hashicorp/terraform/website/guides/running-terraform-in-automation.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/guides/running-terraform-in-automation.html.md
@@ -104,7 +104,7 @@ such an automation setup:
and CPU architecture as where it was created. For example, this means that
it is not possible to create a plan on a Windows computer and then apply it
on a Linux server.
-* Terraform expects the provider plugins that were used used to produce a
+* Terraform expects the provider plugins that were used to produce a
plan to be available and identical when the plan is applied, to ensure
that the plan is interpreted correctly. An error will be produced if
Terraform or any plugins are upgraded between creating and applying a plan.
@@ -282,6 +282,11 @@ use of newer plugin versions that have not yet been installed into the
local plugin directory. Which approach is more appropriate will depend on
unique constraints within each organization.
+Plugins can also be provided along with the configuration by creating a
+`terraform.d/plugins/OS_ARCH` directory, which will be searched before
+automatically downloading additional plugins. The `-get-plugins=false` flag can
+be used to prevent Terraform from automatically downloading additional plugins.
+
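A sketch of an automation step that relies only on plugins shipped alongside the configuration, per the paragraph above; Terraform searches `terraform.d/plugins/OS_ARCH` first, and the flag prevents any downloads:

```
$ terraform init -input=false -get-plugins=false
```
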
## Terraform Enterprise
As an alternative to home-grown automation solutions, Hashicorp offers
diff --git a/vendor/github.com/hashicorp/terraform/website/guides/terraform-provider-development-program.html.md b/vendor/github.com/hashicorp/terraform/website/guides/terraform-provider-development-program.html.md
deleted file mode 100644
index 18e405e4..00000000
--- a/vendor/github.com/hashicorp/terraform/website/guides/terraform-provider-development-program.html.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-layout: "guides"
-page_title: "Terraform Provider Development Program"
-sidebar_current: "guides-terraform-provider-development-program"
-description: |-
- This guide provides steps to create a provider and apply for inclusion with
- Terraform, in order for vendors to have their platform supported by Terraform.
----
-# Terraform Provider Development Program
-
-## Introduction
-Terraform is used to create, manage, and manipulate infrastructure resources. Examples of resources include physical machines, VMs, network switches, containers, etc. Almost any infrastructure noun can be represented as a resource in Terraform.
-
-Terraform can broadly be divided into two parts: the Terraform core, which consists of the core functionality, and a provider layer, which provides a translation layer between Terraform core and the underlying infrastructure. A provider is responsible for understanding API interactions with the underlying infrastructure, like a cloud (AWS, GCP, Azure), a PaaS service (Heroku), a SaaS service (DNSimple, CloudFlare), or on-prem resources (vSphere). It then exposes these as resources users can code to. Terraform presently supports more than 70 providers, a number that has more than doubled in the past 12 months.
-
-~> **NOTE:** This document is intended for vendors and users who would like to build a Terraform provider to have their infrastructure supported via Terraform. The program is intended to be largely self-serve, with links to information sources, clearly defined steps, and checkpoints. That being said, we welcome you to contact us at <terraform-provider-dev@hashicorp.com> with any questions or feedback.
-
-
-## Provider Development Process
-The Terraform provider development process can broadly be divided into the six steps described below.
-
-![Process](docs/process.png)
-
-1. Engage: Initial contact between vendor and HashiCorp
-2. Enable: Information and articles to aid with the provider development
-3. Dev/Test: Provider development and test process
-4. Review: HashiCorp code review and acceptance tests (iterative process)
-5. Release: Provider availability and listing on [terraform.io](https://www.terraform.io)
-6. Support: Ongoing maintenance and support of the provider by the vendor.
-
-Each of these steps is described in detail below. Vendors are encouraged to complete the tasks associated with each step in full, as doing so helps streamline the process and minimize rework.
-
-### 1. Engage
-Each new provider development cycle begins with the vendor providing some basic information about the infrastructure the provider is being built for, the name of the provider, and relevant details about the project, via a simple [webform](https://goo.gl/forms/iqfz6H9UK91X9LQp2) (https://goo.gl/forms/iqfz6H9UK91X9LQp2). This information is captured upfront and used to track the provider consistently through the various steps.
-
-All providers integrate into and operate with Terraform exactly the same way. The table below is intended to help users understand who develops, maintains and tests a particular provider. All new providers should align to one of these two tiers.
-
-![Engage-table](docs/engage-table.png)
-
-### 2. Enable
-In order to get started with the Terraform provider development process, we recommend reviewing and following the articles included in the Provider Development Kit.
-
-Provider Development Kit:
-
-* Writing custom providers [guide](https://www.terraform.io/guides/writing-custom-terraform-providers.html)
-* How-to build a provider [video](https://www.youtube.com/watch?v=2BvpqmFpchI)
-* Sample provider developed by [partner](http://container-solutions.com/write-terraform-provider-part-1/)
-* Example providers for reference: [AWS](https://github.com/terraform-providers/terraform-provider-aws), [OPC](https://github.com/terraform-providers/terraform-provider-opc)
-* Contributing to Terraform [guidelines](https://github.com/hashicorp/terraform/blob/master/.github/CONTRIBUTING.md)
-* Gitter HashiCorp-Terraform [room](https://gitter.im/hashicorp-terraform/Lobby).
-
-We’ve found provider development to be fairly straightforward when vendors pay close attention to and follow the above articles. Adopting the same structure and coding patterns helps expedite the review and release cycles.
-
-### 3. Development & Test
-The Terraform provider is written in the [Go](https://golang.org/) programming language. The best approach when architecting a new provider project is to use the [AWS provider](https://github.com/terraform-providers/terraform-provider-aws) as a reference. Given the wide surface area of this provider, almost all resource types and preferred code constructs are covered in it.
-
-It is recommended for vendors to first develop support for one or two resources and go through an initial review cycle before developing the code for the remaining resources. This helps catch any issues early on in the process and keeps errors from multiplying. In addition, it is advised to follow the existing conventions you see in the codebase, and ensure your code is formatted with go fmt. This is needed as our Travis CI continuous integration (CI) build will fail if go fmt has not been run on the code.
-
-The provider code should include an acceptance test suite with tests for each individual resource that holistically tests its behavior. The Writing Acceptance Tests section in the [Contributing to Terraform](https://github.com/hashicorp/terraform/blob/master/.github/CONTRIBUTING.md) document explains how to approach these. It is recommended to randomize the names of the tests as opposed to using unique static names, as that permits us to parallelize the test execution.
-
-
-Each provider has a section in the Terraform documentation. You'll want to add a new index file and individual pages for each resource supported by the provider.
-
-While developing the provider code yourself is certainly possible, you can also choose to leverage one of the following development agencies, which have developed Terraform providers in the past and are familiar with the requirements and process.
-
-| Partner | Email | Website |
-|:-------------------|:-----------------------------|:---------------------|
-| Crest Data Systems | malhar@crestdatasys.com | www.crestdatasys.com |
-| DigitalOnUs | hashicorp@digitalonus.com | www.digitalonus.com |
-| MustWin | bd@mustwin.com | www.mustwin.com |
-| OpenCredo | guy.richardson@opencredo.com | www.opencredo.com |
-
-### 4. Review
-Once the provider with one or two sample resources has been developed, an email should be sent to <terraform-provider-dev@hashicorp.com> along with a pointer to the public GitHub repo containing the code. HashiCorp will then review the resource code, acceptance tests, and documentation for the sample resource(s). When all the feedback has been addressed, support for the remaining resources can continue to be developed, along with the corresponding acceptance tests and documentation. The vendor is encouraged to send HashiCorp a rough list of the resource names planned to be worked on, along with the mapping to the underlying APIs, if possible. This information can be provided via the [webform](https://goo.gl/forms/iqfz6H9UK91X9LQp2). It is preferred that the additional resources be developed and submitted as individual PRs on GitHub, as that simplifies the review process.
-
-Once the provider has been completed another email should be sent to <terraform-provider-dev@hashicorp.com> along with a pointer to the public GitHub repo containing the code requesting the final code review. HashiCorp will review the code and provide feedback about any changes that may be required. This is often an iterative process and can take some time to get done.
-
-The vendor is also required to provide access credentials for the infrastructure (cloud or other) that is managed by the provider. Please encrypt the credentials using our public GPG key published at keybase.io/terraform (you can use the form at https://keybase.io/encrypt#terraform) and paste the encrypted message into the [webform](https://goo.gl/forms/iqfz6H9UK91X9LQp2). Please do NOT enter plain-text credentials. These credentials are used during the review phase, as well as to test the provider as part of the regular testing HashiCorp conducts.
-
-~> **NOTE:** It is strongly recommended to develop support for just one or two resources first and go through the review cycle before developing support for all the remaining resources. This approach helps catch any code construct issues early and keeps problems from multiplying across other resources. In addition, one of the common gaps is often the lack of a complete set of acceptance tests, which results in wasted time. It is recommended that you make an extra pass through the provider code and ensure that each resource has an acceptance test associated with it.
-
-### 5. Release
-At this stage, it is expected that the provider is fully developed, all tests and documentation are in place, and the acceptance tests are all passing.
-
-
-HashiCorp will create a new GitHub repo under the terraform-providers GitHub organization for the new provider (example: terraform-providers/terraform-provider-_name_) and grant the owner of the original provider code write access to the new repo. A GitHub Pull Request should be created against this new repo with the provider code that was reviewed in step 4 above. Once this is done, HashiCorp will review and merge the PR, and get the new provider listed on [terraform.io](https://www.terraform.io). This is also when the provider acceptance tests are added to the HashiCorp test harness (TeamCity) and tested at regular intervals.
-
-
-Vendors whose providers are listed on terraform.io are permitted to use the HashiCorp Tested logo for their provider.
-
-<img alt="hashicorp-tested-icon" src="/assets/images/docs/hashicorp-tested-icon.png" style="width: 101px;" />
-
-### 6. Support
-Many vendors view the Release step above as the end of the journey, while at HashiCorp we view it as the start. Getting the provider built is just the first step in enabling users to use it against the infrastructure. Once this is done, ongoing effort is required to maintain the provider and address any issues in a timely manner. The expectation is to resolve all Critical issues within 48 hours and all other issues within 5 business days. HashiCorp Terraform has an extremely wide community of users and contributors, and we encourage everyone to report issues, however small, as well as help resolve them when possible.
-
-Vendors who choose not to support their provider and prefer to make it a community-supported provider will not be listed on terraform.io.
-
-## Next Steps
-Below is an ordered checklist of steps that should be followed during the provider development process.
-
-[ ] Fill out provider development program engagement [webform](https://goo.gl/forms/iqfz6H9UK91X9LQp2) (https://goo.gl/forms/iqfz6H9UK91X9LQp2)
-
-[ ] Refer to the example providers and model the new provider based on them
-
-[ ] Create the new provider with one or two sample resources along with acceptance tests and documentation
-
-[ ] Send email to <terraform-provider-dev@hashicorp.com> to schedule an initial review
-
-[ ] Address review feedback and develop support for the other resources
-
-[ ] Send email to <terraform-provider-dev@hashicorp.com> along with a pointer to the public GitHub repo containing the final code
-
-[ ] Provide HashiCorp with credentials for underlying infrastructure managed by the new provider via the [webform](https://goo.gl/forms/iqfz6H9UK91X9LQp2)
-
-[ ] Address all review feedback, ensure that each resource has a corresponding acceptance test, and the documentation is complete
-
-[ ] Create a PR for the provider against the HashiCorp-provided empty repo.
-
-[ ] Plan to continue supporting the provider with additional functionality as well as addressing any open issues.
-
-
-In this document we’ve covered the process for creating a Terraform provider. For any questions or feedback please contact us at <terraform-provider-dev@hashicorp.com>.
diff --git a/vendor/github.com/hashicorp/terraform/website/guides/writing-custom-terraform-providers.html.md b/vendor/github.com/hashicorp/terraform/website/guides/writing-custom-terraform-providers.html.md
index 9088f159..ffcdfe34 100644
--- a/vendor/github.com/hashicorp/terraform/website/guides/writing-custom-terraform-providers.html.md
+++ b/vendor/github.com/hashicorp/terraform/website/guides/writing-custom-terraform-providers.html.md
@@ -40,7 +40,7 @@ This post assumes familiarity with Golang and basic programming concepts.
As a reminder, all of Terraform's core providers are open source. When stuck or
looking for examples, please feel free to reference
-[the open source providers](https://github.com/hashicorp/terraform/tree/master/builtin/providers) for help.
+[the open source providers](https://github.com/terraform-providers) for help.
## The Provider Schema
diff --git a/vendor/github.com/hashicorp/terraform/website/intro/examples/consul.html.markdown b/vendor/github.com/hashicorp/terraform/website/intro/examples/consul.html.markdown
index 78d3f63d..23602b77 100644
--- a/vendor/github.com/hashicorp/terraform/website/intro/examples/consul.html.markdown
+++ b/vendor/github.com/hashicorp/terraform/website/intro/examples/consul.html.markdown
@@ -8,7 +8,7 @@ description: |-
# Consul Example
-[**Example Source Code**](https://github.com/hashicorp/terraform/tree/master/examples/consul)
+[**Example Source Code**](https://github.com/terraform-providers/terraform-provider-consul/tree/master/examples/kv)
[Consul](https://www.consul.io) is a tool for service discovery, configuration
and orchestration. The Key/Value store it provides is often used to store
diff --git a/vendor/github.com/hashicorp/terraform/website/intro/examples/count.markdown b/vendor/github.com/hashicorp/terraform/website/intro/examples/count.markdown
index 96a11e1f..e6064924 100644
--- a/vendor/github.com/hashicorp/terraform/website/intro/examples/count.markdown
+++ b/vendor/github.com/hashicorp/terraform/website/intro/examples/count.markdown
@@ -8,7 +8,7 @@ description: |-
# Count Example
-[**Example Source Code**](https://github.com/hashicorp/terraform/tree/master/examples/aws-count)
+[**Example Source Code**](https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/count)
The `count` parameter on resources can simplify configurations
and let you scale resources by simply incrementing a number.
diff --git a/vendor/github.com/hashicorp/terraform/website/intro/examples/index.html.markdown b/vendor/github.com/hashicorp/terraform/website/intro/examples/index.html.markdown
index 99df59b1..c4d60aea 100644
--- a/vendor/github.com/hashicorp/terraform/website/intro/examples/index.html.markdown
+++ b/vendor/github.com/hashicorp/terraform/website/intro/examples/index.html.markdown
@@ -8,7 +8,7 @@ description: |-
# Example Configurations
-These examples are designed to help you understand some
+The examples in this section illustrate some
of the ways Terraform can be used.
All examples are ready to run as-is. Terraform will
@@ -31,23 +31,21 @@ uses it isn't required.
## Examples
-All of the examples are in the
-["examples" directory within the Terraform source code](https://github.com/hashicorp/terraform/tree/master/examples). Each example (as well as the examples
-directory) has a README explaining the goal of the example.
+Our examples are distributed across several repos. [This README file in the Terraform repo has links to all of them.](https://github.com/hashicorp/terraform/tree/master/examples)
To use these examples, Terraform must first be installed on your machine.
You can install Terraform from the [downloads page](/downloads.html).
-Once installed, you can use two steps to view and run the examples.
+Once installed, you can download, view, and run the examples.
-To try these examples, first clone them with git as usual:
+To use an example, clone the repository that contains it and navigate to its directory. For example, to try the AWS two-tier architecture example:
```
-git clone https://github.com/hashicorp/terraform/examples/aws-two-tier
-cd aws-two-tier
+git clone https://github.com/terraform-providers/terraform-provider-aws.git
+cd terraform-provider-aws/examples/two-tier
```
-You can then use your own editor to read and browse the configurations.
-To try out the example, initialize and then apply:
+You can then use your preferred code editor to browse and read the configurations.
+To try out an example, run Terraform's init and apply commands while in the example's directory:
```
$ terraform init
diff --git a/vendor/github.com/hashicorp/terraform/website/layouts/guides.erb b/vendor/github.com/hashicorp/terraform/website/layouts/guides.erb
index 03a4d4d0..7825f68b 100644
--- a/vendor/github.com/hashicorp/terraform/website/layouts/guides.erb
+++ b/vendor/github.com/hashicorp/terraform/website/layouts/guides.erb
@@ -7,9 +7,6 @@
<li<%= sidebar_current("guides-running-terraform-in-automation") %>>
<a href="/guides/running-terraform-in-automation.html">Running Terraform in Automation</a>
</li>
- <li<%= sidebar_current("guides-terraform-provider-development-program") %>>
- <a href="/guides/terraform-provider-development-program.html">Terraform Provider Development Program</a>
- </li>
</ul>
<% end %>