deployer: remove catalog/mesh v2 support (#21194)
- Low level plumbing for resources is still retained for now.
- Retain "Workload" terminology over "Service".
- Revert "Destination" terminology back to "Upstream".
- Remove TPROXY support as it only worked for v2.
rboyer authored May 21, 2024
1 parent 6d088db commit 50b26aa
Showing 34 changed files with 398 additions and 2,454 deletions.
51 changes: 16 additions & 35 deletions test-integ/README.md
@@ -25,7 +25,7 @@ You can run the entire set of deployer integration tests using:

You can also run them one by one if you like:

go test ./catalogv2 -run TestBasicL4ExplicitDestinations -v
go test ./connect/ -run Test_Snapshot_Restore_Agentless -v

You can have the logs stream unbuffered directly to your terminal which can
help diagnose stuck tests that would otherwise need to fully timeout before the
@@ -65,26 +65,18 @@ These are comprised of 4 main parts:
- **Nodes**: A "box with ip address(es)". This should feel a bit like a VM or
a Kubernetes Pod as an enclosing entity.

- **Workloads**: The list of service instances (v1) or workloads
(v2) that will execute on the given node. v2 Services will
be implied by similarly named workloads here unless opted
out. This helps define a v1-compatible topology and
repurpose it for v2 without reworking it.
- **Workloads**: The list of service instances that will execute on the given node.

- **Services** (v2): v2 Service definitions to define explicitly, in addition
to the inferred ones.
- **InitialConfigEntries**: Config entries that should be created as
part of the fixture and that make sense to
include as part of the test definition, rather
than something created during the test assertion
phase.

- **InitialConfigEntries** (v1): Config entries that should be created as
part of the fixture and that make sense to
include as part of the test definition,
rather than something created during the
test assertion phase.

- **InitialResources** (v2): v2 Resources that should be created as part of
the fixture and that make sense to include as
part of the test definition, rather than
something created during the test assertion
phase.
- **InitialResources**: Resources that should be created as part of
the fixture and that make sense to include as part of
the test definition, rather than something created
during the test assertion phase.

- **Peerings**: The peering relationships between Clusters to establish.
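Taken together, the parts above form a purely declarative topology. The following is a minimal, self-contained sketch of that shape using simplified stand-in types; the real `topology` package in `hashicorp/consul/testing/deployer` has richer fields and these struct names are illustrative only:

```go
package main

import "fmt"

// Simplified stand-ins for the deployer's topology types (illustrative only).
type Upstream struct {
	Name      string // name of the upstream service
	LocalPort int    // port the client proxy listens on locally
}

type Workload struct {
	Name      string
	Upstreams []*Upstream
}

// Node is a "box with ip address(es)", like a VM or Kubernetes Pod.
type Node struct {
	Kind      string // e.g. "client" (agentful) or "dataplane" (agentless)
	Name      string
	Workloads []*Workload
}

type Cluster struct {
	Name                 string
	Nodes                []*Node
	InitialConfigEntries []string // the real type holds api.ConfigEntry values
}

// buildCluster declares a one-node cluster whose static-client workload
// has static-server as an upstream.
func buildCluster() *Cluster {
	return &Cluster{
		Name: "dc1",
		Nodes: []*Node{
			{
				Kind: "dataplane",
				Name: "dc1-client1",
				Workloads: []*Workload{
					{
						Name: "static-client",
						Upstreams: []*Upstream{
							{Name: "static-server", LocalPort: 5000},
						},
					},
				},
			},
		},
	}
}

func main() {
	c := buildCluster()
	fmt.Println(c.Name, len(c.Nodes))
}
```

Because the fixture is just data like this, the launch machinery can diff a mutated copy against the running state and converge it, which is what makes the relaunch-style tests in this commit possible.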

@@ -102,15 +94,13 @@ a variety of axes:
- agentful (clients) vs agentless (dataplane)
- tenancies (partitions, namespaces)
- locally or across a peering
- catalog v1 or v2 object model

Since the topology is just a declarative struct, a test author could rewrite
any one of these attributes with a single field (such as `Node.Kind` or
`Node.Version`) and cause the identical test to run against the other
configuration. With the addition of a few `if enterprise {}` blocks and `for`
loops, a test author could easily write one test of a behavior and execute it
to cover agentless, agentful, non-default tenancy, and v1/v2 in a few extra
lines of code.
any one of these attributes with a single field (such as `Node.Kind`) and cause
the identical test to run against the other configuration. With the addition of
a few `if enterprise {}` blocks and `for` loops, a test author could easily
write one test of a behavior and execute it to cover agentless, agentful, and
non-default tenancy in a few extra lines of code.
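The rewrite-one-field idea can be sketched as a table-driven loop. This is a hedged sketch, not the real test harness: `NodeKind` mirrors the idea of the deployer's node kinds, and `runScenario` stands in for building and launching a topology with that field set.

```go
package main

import "fmt"

// NodeKind is an illustrative stand-in for the deployer's node-kind concept.
type NodeKind string

const (
	NodeKindClient    NodeKind = "client"    // agentful: runs a Consul client agent
	NodeKindDataplane NodeKind = "dataplane" // agentless: runs consul-dataplane
)

// runScenario stands in for "build the topology with Node.Kind set to kind
// in the given partition, launch it, and run the shared assertions".
func runScenario(kind NodeKind, partition string) string {
	return fmt.Sprintf("kind=%s partition=%s", kind, partition)
}

func main() {
	// One behavior, executed across agentful/agentless and tenancy axes.
	for _, kind := range []NodeKind{NodeKindClient, NodeKindDataplane} {
		for _, partition := range []string{"default", "part1"} {
			fmt.Println(runScenario(kind, partition))
		}
	}
}
```

In the real tests the inner body would also gate the non-default partition behind an `if enterprise {}` check, since partitions are an enterprise feature.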

#### Non-optional security settings

@@ -197,12 +187,3 @@ and Envoy that you can create in your test:
asserter := topoutil.NewAsserter(sp)

asserter.UpstreamEndpointStatus(t, svc, clusterPrefix+".", "HEALTHY", 1)

## Examples

- `catalogv2`
- [Explicit L4 destinations](./catalogv2/explicit_destinations_test.go)
- [Implicit L4 destinations](./catalogv2/implicit_destinations_test.go)
- [Explicit L7 destinations with traffic splits](./catalogv2/explicit_destinations_l7_test.go)
- [`peering_commontopo`](./peering_commontopo)
- A variety of extensive v1 Peering tests.
8 changes: 4 additions & 4 deletions test-integ/connect/snapshot_test.go
@@ -27,7 +27,7 @@ import (
// 1. The test spins up a one-server cluster with static-server and static-client.
// 2. A snapshot is taken and the cluster is restored from the snapshot
// 3. A new static-server replaces the old one
// 4. At the end, we assert the static-client's destination is updated with the
// 4. At the end, we assert the static-client's upstream is updated with the
// new static-server
func Test_Snapshot_Restore_Agentless(t *testing.T) {
t.Parallel()
@@ -89,7 +89,7 @@ func Test_Snapshot_Restore_Agentless(t *testing.T) {
"-http-port", "8080",
"-redirect-port", "-disabled",
},
Destinations: []*topology.Destination{
Upstreams: []*topology.Upstream{
{
ID: staticServerSID,
LocalPort: 5000,
@@ -153,7 +153,7 @@ func Test_Snapshot_Restore_Agentless(t *testing.T) {
topology.NewNodeID("dc1-client2", "default"),
staticClientSID,
)
asserter.FortioFetch2HeaderEcho(t, staticClient, &topology.Destination{
asserter.FortioFetch2HeaderEcho(t, staticClient, &topology.Upstream{
ID: staticServerSID,
LocalPort: 5000,
})
@@ -182,7 +182,7 @@ func Test_Snapshot_Restore_Agentless(t *testing.T) {
require.NoError(t, sp.Relaunch(cfg))

// Ensure the static-client connected to the new static-server
asserter.FortioFetch2HeaderEcho(t, staticClient, &topology.Destination{
asserter.FortioFetch2HeaderEcho(t, staticClient, &topology.Upstream{
ID: staticServerSID,
LocalPort: 5000,
})
10 changes: 5 additions & 5 deletions test-integ/peering_commontopo/ac1_basic_test.go
@@ -30,8 +30,8 @@ type ac1BasicSuite struct {
sidClientHTTP topology.ID
nodeClientHTTP topology.NodeID

upstreamHTTP *topology.Destination
upstreamTCP *topology.Destination
upstreamHTTP *topology.Upstream
upstreamTCP *topology.Upstream
}

var ac1BasicSuites []sharedTopoSuite = []sharedTopoSuite{
@@ -65,15 +65,15 @@ func (s *ac1BasicSuite) setup(t *testing.T, ct *commonTopo) {
Name: prefix + "server-http",
Partition: partition,
}
upstreamHTTP := &topology.Destination{
upstreamHTTP := &topology.Upstream{
ID: topology.ID{
Name: httpServerSID.Name,
Partition: partition,
},
LocalPort: 5001,
Peer: peer,
}
upstreamTCP := &topology.Destination{
upstreamTCP := &topology.Upstream{
ID: topology.ID{
Name: tcpServerSID.Name,
Partition: partition,
@@ -93,7 +93,7 @@ func (s *ac1BasicSuite) setup(t *testing.T, ct *commonTopo) {
clu.Datacenter,
sid,
func(s *topology.Workload) {
s.Destinations = []*topology.Destination{
s.Upstreams = []*topology.Upstream{
upstreamTCP,
upstreamHTTP,
}
15 changes: 8 additions & 7 deletions test-integ/peering_commontopo/ac2_disco_chain_test.go
@@ -7,9 +7,10 @@ import (
"fmt"
"testing"

"github.com/stretchr/testify/require"

"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/testing/deployer/topology"
"github.com/stretchr/testify/require"
)

type ac2DiscoChainSuite struct {
@@ -84,7 +85,7 @@ func (s *ac2DiscoChainSuite) setup(t *testing.T, ct *commonTopo) {
ct.AddServiceNode(clu, serviceExt{Workload: server})

// Define server as upstream for client
upstream := &topology.Destination{
upstream := &topology.Upstream{
ID: topology.ID{
Name: server.ID.Name,
Partition: partition, // TODO: iterate over all possible partitions
@@ -105,7 +106,7 @@
clu.Datacenter,
clientSID,
func(s *topology.Workload) {
s.Destinations = []*topology.Destination{
s.Upstreams = []*topology.Upstream{
upstream,
}
},
@@ -164,8 +165,8 @@ func (s *ac2DiscoChainSuite) test(t *testing.T, ct *commonTopo) {
require.Len(t, svcs, 1, "expected exactly one client in datacenter")

client := svcs[0]
require.Len(t, client.Destinations, 1, "expected exactly one upstream for client")
u := client.Destinations[0]
require.Len(t, client.Upstreams, 1, "expected exactly one upstream for client")
u := client.Upstreams[0]

t.Run("peered upstream exists in catalog", func(t *testing.T) {
t.Parallel()
@@ -176,7 +177,7 @@

t.Run("peered upstream endpoint status is healthy", func(t *testing.T) {
t.Parallel()
ct.Assert.DestinationEndpointStatus(t, client, peerClusterPrefix(u), "HEALTHY", 1)
ct.Assert.UpstreamEndpointStatus(t, client, peerClusterPrefix(u), "HEALTHY", 1)
})

t.Run("response contains header injected by splitter", func(t *testing.T) {
@@ -196,7 +197,7 @@
// func (s *ResourceGenerator) getTargetClusterName
//
// and connect/sni.go
func peerClusterPrefix(u *topology.Destination) string {
func peerClusterPrefix(u *topology.Upstream) string {
if u.Peer == "" {
panic("upstream is not from a peer")
}
@@ -11,13 +11,15 @@ import (
"testing"
"time"

"github.com/itchyny/gojq"
"github.com/stretchr/testify/require"

"github.com/hashicorp/go-cleanhttp"

"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/sdk/testutil/retry"
libassert "github.com/hashicorp/consul/test/integration/consul-container/libs/assert"
"github.com/hashicorp/consul/testing/deployer/topology"
"github.com/hashicorp/go-cleanhttp"
"github.com/itchyny/gojq"
"github.com/stretchr/testify/require"
)

var ac3SvcDefaultsSuites []sharedTopoSuite = []sharedTopoSuite{
@@ -40,7 +42,7 @@ type ac3SvcDefaultsSuite struct {
sidClient topology.ID
nodeClient topology.NodeID

upstream *topology.Destination
upstream *topology.Upstream
}

func (s *ac3SvcDefaultsSuite) testName() string {
@@ -60,7 +62,7 @@
Name: "ac3-server",
Partition: partition,
}
upstream := &topology.Destination{
upstream := &topology.Upstream{
ID: topology.ID{
Name: serverSID.Name,
Partition: partition,
@@ -78,7 +80,7 @@
clu.Datacenter,
sid,
func(s *topology.Workload) {
s.Destinations = []*topology.Destination{
s.Upstreams = []*topology.Upstream{
upstream,
}
},
@@ -183,7 +185,7 @@ func (s *ac3SvcDefaultsSuite) test(t *testing.T, ct *commonTopo) {
// TODO: what is default? namespace? partition?
clusterName := fmt.Sprintf("%s.default.%s.external", s.upstream.ID.Name, s.upstream.Peer)
nonceStatus := http.StatusInsufficientStorage
url507 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", svcClient.ExposedPort(""),
url507 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", svcClient.ExposedPort(),
url.QueryEscape(fmt.Sprintf("http://localhost:%d/?status=%d", s.upstream.LocalPort, nonceStatus)),
)

@@ -219,7 +221,7 @@ func (s *ac3SvcDefaultsSuite) test(t *testing.T, ct *commonTopo) {
require.True(r, resultAsBool)
})

url200 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", svcClient.ExposedPort(""),
url200 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", svcClient.ExposedPort(),
url.QueryEscape(fmt.Sprintf("http://localhost:%d/", s.upstream.LocalPort)),
)
retry.RunWith(&retry.Timer{Timeout: time.Minute * 1, Wait: time.Millisecond * 500}, t, func(r *retry.R) {
20 changes: 11 additions & 9 deletions test-integ/peering_commontopo/ac4_proxy_defaults_test.go
@@ -9,10 +9,12 @@ import (
"net/url"
"testing"

"github.com/stretchr/testify/require"

"github.com/hashicorp/go-cleanhttp"

"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/testing/deployer/topology"
"github.com/hashicorp/go-cleanhttp"
"github.com/stretchr/testify/require"
)

type ac4ProxyDefaultsSuite struct {
@@ -24,7 +26,7 @@

serverSID topology.ID
clientSID topology.ID
upstream *topology.Destination
upstream *topology.Upstream
}

var ac4ProxyDefaultsSuites []sharedTopoSuite = []sharedTopoSuite{
@@ -54,7 +56,7 @@ func (s *ac4ProxyDefaultsSuite) setup(t *testing.T, ct *commonTopo) {
Partition: partition,
}
// Define server as upstream for client
upstream := &topology.Destination{
upstream := &topology.Upstream{
ID: serverSID,
LocalPort: 5000,
Peer: peer,
@@ -70,7 +72,7 @@
clu.Datacenter,
clientSID,
func(s *topology.Workload) {
s.Destinations = []*topology.Destination{
s.Upstreams = []*topology.Upstream{
upstream,
}
},
@@ -165,11 +167,11 @@ func (s *ac4ProxyDefaultsSuite) test(t *testing.T, ct *commonTopo) {
dcSvcs := dc.WorkloadsByID(s.clientSID)
require.Len(t, dcSvcs, 1, "expected exactly one client")
client = dcSvcs[0]
require.Len(t, client.Destinations, 1, "expected exactly one upstream for client")
require.Len(t, client.Upstreams, 1, "expected exactly one upstream for client")

server := dc.WorkloadsByID(s.serverSID)
require.Len(t, server, 1, "expected exactly one server")
require.Len(t, server[0].Destinations, 0, "expected no upstream for server")
require.Len(t, server[0].Upstreams, 0, "expected no upstream for server")
})

t.Run("peered upstream exists in catalog", func(t *testing.T) {
@@ -179,11 +181,11 @@
})

t.Run("HTTP service fails due to connection timeout", func(t *testing.T) {
url504 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", client.ExposedPort(""),
url504 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", client.ExposedPort(),
url.QueryEscape(fmt.Sprintf("http://localhost:%d/?delay=1000ms", s.upstream.LocalPort)),
)

url200 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", client.ExposedPort(""),
url200 := fmt.Sprintf("http://localhost:%d/fortio/fetch2?url=%s", client.ExposedPort(),
url.QueryEscape(fmt.Sprintf("http://localhost:%d/", s.upstream.LocalPort)),
)

11 changes: 6 additions & 5 deletions test-integ/peering_commontopo/ac6_failovers_test.go
@@ -7,11 +7,12 @@ import (
"fmt"
"testing"

"github.com/stretchr/testify/require"

"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/test/integration/consul-container/libs/utils"
"github.com/hashicorp/consul/testing/deployer/sprawl"
"github.com/hashicorp/consul/testing/deployer/topology"
"github.com/stretchr/testify/require"
)

type ac6FailoversSuite struct {
@@ -347,8 +348,8 @@ func (s *ac6FailoversSuite) setup(t *testing.T, ct *commonTopo) {
nearClu.Datacenter,
clientSID,
func(s *topology.Workload) {
// Destination per partition
s.Destinations = []*topology.Destination{
// Upstream per partition
s.Upstreams = []*topology.Upstream{
{
ID: topology.ID{
Name: nearServerSID.Name,
@@ -437,8 +438,8 @@ func (s *ac6FailoversSuite) test(t *testing.T, ct *commonTopo) {
require.Len(t, svcs, 1, "expected exactly one client in datacenter")

client := svcs[0]
require.Len(t, client.Destinations, 1, "expected one upstream for client")
upstream := client.Destinations[0]
require.Len(t, client.Upstreams, 1, "expected one upstream for client")
upstream := client.Upstreams[0]

fmt.Println("### preconditions")
