172 changes: 172 additions & 0 deletions CODEBUDDY.md
@@ -0,0 +1,172 @@
# OpenShift Origin - CodeBuddy Guide

This repository maintains the `openshift-tests` binary for OpenShift, focusing on extended end-to-end testing. It previously also maintained `hyperkube` binaries, but that responsibility has transitioned to the `openshift/kubernetes` repository.

## Repository Purpose

- **Primary Purpose**: Maintains the `openshift-tests` binary for OKD (OpenShift Origin)
- **Branches**:
- `main` and `release-4.x` branches (4.6+): Only maintain `openshift-tests`
  - `release-4.5` and earlier: Continue to maintain `hyperkube`
- **Key Binary**: `openshift-tests` - compiled test binary containing all end-to-end tests

## Development Commands

### Building
```bash
# Build the main test binary
make

# Build specific target (openshift-tests)
make openshift-tests

# Alternative build method
hack/build-go.sh cmd/openshift-tests
```

### Testing
```bash
# Run all tests (uses conformance suite by default)
make test

# Run specific test suite
make test SUITE=core

# Run tests with focus
make test SUITE=conformance FOCUS=pods

# Run tools tests
make test-tools

# Run extended tests directly
openshift-tests run-test <FULL_TEST_NAME>
```

### Verification & Generation
```bash
# Run all verification checks
make verify

# Run origin-specific verification
make verify-origin

# Update all generated artifacts
make update

# Update TLS artifacts
make update-tls-ownership

# Update external examples
make update-examples

# Update bindata
make update-bindata
```

### Test Management
```bash
# List available test suites
openshift-tests help run

# Dry run to see test matches
openshift-tests run all --dry-run | grep -E "<REGEX>"

# Run filtered tests
openshift-tests run all --dry-run | grep -E "<REGEX>" | openshift-tests run -f -
```

## Key Dependencies and Vendoring

### Kubernetes Dependency
This repository vendors Kubernetes components from `openshift/kubernetes` fork:
- Uses custom `replace` directives in `go.mod` to point to OpenShift's fork
- Most staging repos (`k8s.io/api`, `k8s.io/apimachinery`, etc.) come from `openshift/kubernetes`
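The shape of those directives, as a hypothetical `go.mod` fragment (the pseudo-versions below are placeholders, not real revisions):

```
replace (
	k8s.io/api => github.com/openshift/kubernetes/staging/src/k8s.io/api v0.0.0-20240101000000-abcdef123456
	k8s.io/apimachinery => github.com/openshift/kubernetes/staging/src/k8s.io/apimachinery v0.0.0-20240101000000-abcdef123456
	k8s.io/kubernetes => github.com/openshift/kubernetes v0.0.0-20240101000000-abcdef123456
)
```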

### Vendoring Updates
```bash
# Update Kubernetes vendor from openshift/kubernetes
hack/update-kube-vendor.sh <branch-name-or-SHA>

# Workaround for '410 Gone' errors
GOSUMDB=off hack/update-kube-vendor.sh <branch-name-or-SHA>

# Fake bump for unmerged changes
hack/update-kube-vendor.sh <branch-name> github.com/myname/kubernetes
```

## Architecture

### Directory Structure
- `cmd/openshift-tests/`: Main test binary entry point
- `pkg/`: Core packages including monitoring, test suites, and utilities
- `test/extended/`: End-to-end tests organized by functionality
- `hack/`: Build and development scripts
- `examples/`: Sample configurations and templates
- `vendor/`: Dependencies (mostly from OpenShift Kubernetes fork)

### Test Organization
- **Extended Tests**: Live under `test/extended/` with subdirectories by functionality
- **Test Labels**: Use Kubernetes e2e test conventions:
- `[Serial]`: Tests that cannot run in parallel
- `[Slow]`: Tests taking >5 minutes
- `[Conformance]`: Core functionality tests
- `[Local]`: Tests requiring local host access
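These labels are embedded directly in test names, which is why the `--dry-run | grep` filtering shown above works. A minimal Go sketch of that filtering (the test names here are made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// filterByLabel returns the test names carrying a given e2e label,
// mirroring what `openshift-tests run all --dry-run | grep` does.
func filterByLabel(tests []string, label string) []string {
	re := regexp.MustCompile(regexp.QuoteMeta("[" + label + "]"))
	var out []string
	for _, t := range tests {
		if re.MatchString(t) {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tests := []string{
		"[sig-apps] Deployment should scale [Conformance]",
		"[sig-storage] PV should bind [Serial] [Slow]",
	}
	for _, t := range filterByLabel(tests, "Serial") {
		fmt.Println(t) // prints the [Serial] test only
	}
}
```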

### Key Components
- **Test Framework**: Uses Ginkgo for BDD-style testing
- **CLI Interface**: `exutil.NewCLI()` provides a programmatic wrapper for driving `oc` commands from tests
- **Test Data**: JSON/YAML fixtures in `test/extended/testdata/`
- **Utilities**: Shared helpers in `test/extended/util/`

## Development Workflow

### Adding New Tests
1. Place tests in appropriate `test/extended/<category>/` directory
2. Use proper test labels (Serial, Slow, Conformance, Local)
3. Follow existing patterns using `exutil.NewCLI()` for `oc` commands
4. Use `test/extended/testdata/` for fixtures

### Test Exclusion Rules
- **Kubernetes e2e tests**: Managed in `openshift/kubernetes`
- **OpenShift e2e tests**: Managed in this repository (`pkg/test/extensions/`)

### Code Generation
- Uses `build-machinery-go` for build infrastructure
- Bindata generation for embedded examples and test data
- Automatic code generation via `make update`

## Build System

### Makefile Structure
- Uses `openshift/build-machinery-go` for standardized build targets
- Custom targets for origin-specific operations
- Image building support for test containers

### Environment Variables
- `JUNIT_REPORT=true`: Enable JUnit output for CI
- `IMAGE_REGISTRY=registry.ci.openshift.org`: Default image registry

## Testing Conventions

### Extended Test Structure
- Test files organized by functionality (builds, images, storage, etc.)
- Each test group can have custom launcher scripts if needed
- Use `g.Describe()` with descriptive bucket names

### CLI Integration
- Tests use `exutil.NewCLI("test-name")` to create CLI instances
- Chain commands: `oc.Run("create").Args("-f", fixture).Execute()`
- Use `By()` statements to document test steps

## Important Notes

- This repository is focused on **testing infrastructure**, not production binaries
- Most changes to Kubernetes functionality should go to `openshift/kubernetes` first
- Test exclusion rules are split between this repo and `openshift/kubernetes`
- The `openshift-tests` binary is the primary output artifact

## Resources

- [Extended Test README](test/extended/README.md)
- [OpenShift Kubernetes Fork](https://github.com/openshift/kubernetes)
- [Test Exclusion Rules](pkg/test/extensions/)
@@ -15,6 +15,7 @@ import (
utilerrors "k8s.io/apimachinery/pkg/util/errors"

"github.com/openshift/origin/pkg/monitortestlibrary/disruptionlibrary"
"github.com/openshift/origin/pkg/test/extensions"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"

@@ -76,6 +77,9 @@ type InvariantInClusterDisruption struct {
replicas int32
controlPlaneNodes int32

isHypershift bool
isAROHCPCluster bool
isBareMetalHypershift bool
adminRESTConfig *rest.Config
kubeClient kubernetes.Interface
}
@@ -86,6 +90,68 @@ func NewInvariantInClusterDisruption(info monitortestframework.MonitorTestInitia
}
}

// parseAdminRESTConfigHost parses the adminRESTConfig.Host URL and returns hostname and port
func (i *InvariantInClusterDisruption) parseAdminRESTConfigHost() (hostname, port string, err error) {
parsedURL, err := url.Parse(i.adminRESTConfig.Host)
if err != nil {
return "", "", fmt.Errorf("failed to parse adminRESTConfig.Host %q: %v", i.adminRESTConfig.Host, err)
}

hostname = parsedURL.Hostname()
if hostname == "" {
return "", "", fmt.Errorf("no hostname found in adminRESTConfig.Host %q", i.adminRESTConfig.Host)
}

port = parsedURL.Port()
if port == "" {
port = "6443" // default port
}

return hostname, port, nil
}
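A self-contained sketch of the same parse-with-default behavior, using only the standard library (the `hostPort` name is illustrative, not the method above):

```go
package main

import (
	"fmt"
	"net/url"
)

// hostPort extracts hostname and port from a REST config host URL,
// defaulting to 6443 when the URL carries no explicit port — the same
// fallback applied by parseAdminRESTConfigHost above.
func hostPort(raw string) (string, string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", fmt.Errorf("failed to parse %q: %v", raw, err)
	}
	if u.Hostname() == "" {
		return "", "", fmt.Errorf("no hostname found in %q", raw)
	}
	port := u.Port()
	if port == "" {
		port = "6443" // default API server port
	}
	return u.Hostname(), port, nil
}

func main() {
	h, p, _ := hostPort("https://api.example.com")
	fmt.Println(h, p) // api.example.com 6443
	h, p, _ = hostPort("https://api.example.com:7443")
	fmt.Println(h, p) // api.example.com 7443
}
```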

// setKubernetesServiceEnvVars sets the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables
// based on the cluster type (ARO HCP, bare metal HyperShift, or standard)
func (i *InvariantInClusterDisruption) setKubernetesServiceEnvVars(envVars []corev1.EnvVar, apiIntHost, apiIntPort string) []corev1.EnvVar {
// Parse adminRESTConfig.Host once for bare metal HyperShift
var bareMetalHost, bareMetalPort string
var bareMetalErr error
if i.isHypershift && i.isBareMetalHypershift {
bareMetalHost, bareMetalPort, bareMetalErr = i.parseAdminRESTConfigHost()
if bareMetalErr != nil {
logrus.WithError(bareMetalErr).Errorf("Failed to parse adminRESTConfig.Host for bare metal HyperShift")
}
}

for j, env := range envVars {
switch env.Name {
case "KUBERNETES_SERVICE_HOST":
if i.isHypershift && i.isBareMetalHypershift {
if bareMetalErr != nil {
envVars[j].Value = apiIntHost
} else {
envVars[j].Value = bareMetalHost
}
} else {
envVars[j].Value = apiIntHost
}
case "KUBERNETES_SERVICE_PORT":
if i.isHypershift && i.isAROHCPCluster {
envVars[j].Value = "7443"
} else if i.isHypershift && i.isBareMetalHypershift {
if bareMetalErr != nil {
envVars[j].Value = apiIntPort
} else {
envVars[j].Value = bareMetalPort
}
} else {
envVars[j].Value = apiIntPort
}
}
}
return envVars
}
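The override pattern can be reduced to a runnable sketch with a stand-in EnvVar type (only the env var names and the fixed ARO HCP port 7443 come from the code above; the bare metal fallback branches are omitted for brevity):

```go
package main

import "fmt"

// EnvVar is a minimal stand-in for corev1.EnvVar.
type EnvVar struct{ Name, Value string }

// overrideAPIServerEnv rewrites the in-cluster API host/port env vars,
// mirroring the switch in setKubernetesServiceEnvVars: ARO HCP clusters
// get the fixed port 7443, everything else gets the supplied port.
func overrideAPIServerEnv(envs []EnvVar, host, port string, aroHCP bool) []EnvVar {
	for i, e := range envs {
		switch e.Name {
		case "KUBERNETES_SERVICE_HOST":
			envs[i].Value = host
		case "KUBERNETES_SERVICE_PORT":
			if aroHCP {
				envs[i].Value = "7443"
			} else {
				envs[i].Value = port
			}
		}
	}
	return envs
}

func main() {
	envs := []EnvVar{{"KUBERNETES_SERVICE_HOST", ""}, {"KUBERNETES_SERVICE_PORT", ""}}
	out := overrideAPIServerEnv(envs, "api-int.example.com", "6443", false)
	fmt.Println(out[0].Value, out[1].Value) // api-int.example.com 6443
}
```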

func (i *InvariantInClusterDisruption) createDeploymentAndWaitToRollout(ctx context.Context, deploymentObj *appsv1.Deployment) error {
deploymentID := uuid.New().String()
deploymentObj = disruptionlibrary.UpdateDeploymentENVs(deploymentObj, deploymentID, "")
@@ -114,15 +180,17 @@ func (i *InvariantInClusterDisruption) createDeploymentAndWaitToRollout(ctx cont
return nil
}

func (i *InvariantInClusterDisruption) createInternalLBDeployment(ctx context.Context, apiIntHost string) error {
func (i *InvariantInClusterDisruption) createInternalLBDeployment(ctx context.Context, apiIntHost, apiIntPort string) error {
deploymentObj := resourceread.ReadDeploymentV1OrDie(internalLBDeploymentYaml)
deploymentObj.SetNamespace(i.namespaceName)
deploymentObj.Spec.Template.Spec.Containers[0].Env[0].Value = apiIntHost
// set amount of deployment replicas to make sure it runs on all nodes
deploymentObj.Spec.Replicas = &i.replicas
// we need to use the openshift-tests image of the destination during an upgrade.
deploymentObj.Spec.Template.Spec.Containers[0].Image = i.openshiftTestsImagePullSpec

// Set the correct host and port for internal API server based on cluster type
deploymentObj.Spec.Template.Spec.Containers[0].Env = i.setKubernetesServiceEnvVars(
deploymentObj.Spec.Template.Spec.Containers[0].Env, apiIntHost, apiIntPort)
err := i.createDeploymentAndWaitToRollout(ctx, deploymentObj)
if err != nil {
return err
@@ -317,23 +385,39 @@ func (i *InvariantInClusterDisruption) StartCollection(ctx context.Context, admi
i.payloadImagePullSpec = clusterVersion.Status.History[0].Image
}

// Check for ARO HCP and skip if detected
// Check for Hypershift and ARO HCP
oc := exutil.NewCLI("apiserver-incluster-availability").AsAdmin()
var isAROHCPcluster bool
isHypershift, _ := exutil.IsHypershift(ctx, oc.AdminConfigClient())
if isHypershift {
i.isHypershift, _ = exutil.IsHypershift(ctx, oc.AdminConfigClient())
if i.isHypershift {
_, hcpNamespace, err := exutil.GetHypershiftManagementClusterConfigAndNamespace()
if err != nil {
logrus.WithError(err).Error("failed to get hypershift management cluster config and namespace")
}

// For Hypershift, only skip if it's specifically ARO HCP
// Use management cluster client to check the control-plane-operator deployment
managementOC := exutil.NewHypershiftManagementCLI(hcpNamespace)
if isAROHCPcluster, err = exutil.IsAroHCP(ctx, hcpNamespace, managementOC.AdminKubeClient()); err != nil {
i.isAROHCPCluster, err = exutil.IsAroHCP(ctx, hcpNamespace, managementOC.AdminKubeClient())
if err != nil {
logrus.WithError(err).Warning("Failed to check if ARO HCP, assuming it's not")
} else if isAROHCPcluster {
i.notSupportedReason = "platform Hypershift - ARO HCP not supported"
i.isAROHCPCluster = false // Assume not ARO HCP on error
}

// Check if this is a bare metal HyperShift cluster
i.isBareMetalHypershift, err = exutil.IsBareMetalHyperShiftCluster(ctx, managementOC)
if err != nil {
logrus.WithError(err).Warning("Failed to check if bare metal HyperShift, assuming it's not")
i.isBareMetalHypershift = false // Assume not bare metal HyperShift on error
}
}

if len(i.payloadImagePullSpec) == 0 {
i.payloadImagePullSpec, err = extensions.DetermineReleasePayloadImage()
if err != nil {
return err
}

if len(i.payloadImagePullSpec) == 0 {
log.Info("unable to determine payloadImagePullSpec")
i.notSupportedReason = "no image pull spec specified."
return nil
}
}
@@ -379,11 +463,25 @@ func (i *InvariantInClusterDisruption) StartCollection(ctx context.Context, admi
return fmt.Errorf("error getting openshift infrastructure: %v", err)
}

internalAPI, err := url.Parse(infra.Status.APIServerInternalURL)
if err != nil {
return fmt.Errorf("error parsing api int url: %v", err)
var apiIntHost string
var apiIntPort string
if i.isHypershift {
apiIntHost, apiIntPort, err = i.parseAdminRESTConfigHost()
if err != nil {
return fmt.Errorf("failed to parse adminRESTConfig.Host: %v", err)
}
} else {
internalAPI, err := url.Parse(infra.Status.APIServerInternalURL)
if err != nil {
return fmt.Errorf("error parsing api int url: %v", err)
}
apiIntHost = internalAPI.Hostname()
if internalAPI.Port() != "" {
apiIntPort = internalAPI.Port()
} else {
apiIntPort = "6443" // default port
}
}
apiIntHost := internalAPI.Hostname()

allNodes, err := i.kubeClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
if err != nil {
@@ -439,7 +537,7 @@ func (i *InvariantInClusterDisruption) StartCollection(ctx context.Context, admi
if err != nil {
return fmt.Errorf("error creating localhost: %v", err)
}
err = i.createInternalLBDeployment(ctx, apiIntHost)
err = i.createInternalLBDeployment(ctx, apiIntHost, apiIntPort)
if err != nil {
return fmt.Errorf("error creating internal LB: %v", err)
}
2 changes: 1 addition & 1 deletion pkg/test/extensions/binary.go
@@ -249,7 +249,7 @@ func ExtractAllTestBinaries(ctx context.Context, parallelism int) (func(), TestB
return nil, nil, errors.New("parallelism must be greater than zero")
}

releaseImage, err := determineReleasePayloadImage()
releaseImage, err := DetermineReleasePayloadImage()
if err != nil {
return nil, nil, errors.WithMessage(err, "couldn't determine release image")
}
2 changes: 1 addition & 1 deletion pkg/test/extensions/util.go
@@ -150,7 +150,7 @@ func pullSpecToDirName(input string) string {
return filepath.Clean(safeName)
}

func determineReleasePayloadImage() (string, error) {
func DetermineReleasePayloadImage() (string, error) {
var releaseImage string

// Highest priority override is EXTENSIONS_PAYLOAD_OVERRIDE