
CRC 2.53.0 fails to expose ports on EC2 instances - vsock networking incompatible with cloud VMs #4909

@joshuamccluskey

Description


General information

Description:
CRC 2.53.0 fails to expose the OpenShift console and API ports (443, 6443) when running on an EC2 instance. The daemon starts but doesn't bind to any ports, making remote access impossible.

Analysis:
The vsock networking mode appears incompatible with virtualized cloud environments like EC2. The daemon expects vsock interfaces that don't exist in nested virtualization scenarios.
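A quick way to check whether the EC2 host even has the vsock plumbing the daemon appears to expect (the assumption that the user-mode networking stack needs the vhost_vsock/vsock kernel modules is mine, not confirmed):

# Check for vsock support on the host (module and device names are assumptions)
lsmod | grep -i vsock
ls -l /dev/vhost-vsock /dev/vsock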

Workaround Attempts (Failed):
Setting network-mode to vsock explicitly
Using socat to bridge vsock to TCP (see the sketch after this list)
Restarting the daemon multiple times
Using HAProxy as documented for remote access (fails because crc ip returns 127.0.0.1)
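For reference, the socat attempt looked roughly like the sketch below (the guest CID and vsock port are placeholders, not values confirmed for CRC; socat needs to be built with vsock support, available since 1.7.4):

# Hypothetical bridge: accept TCP on 6443 and forward into the guest over vsock
sudo socat TCP-LISTEN:6443,fork,reuseaddr VSOCK-CONNECT:3:6443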

Suggested Solutions:
Detect cloud/virtualized environments and fall back to traditional networking
Provide a network-mode: bridge or network-mode: tcp option for cloud deployments (illustrated after this list)
Document that vsock mode is incompatible with cloud instances
Provide clear error messages when vsock binding fails
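To illustrate the second suggestion, a cloud deployment could then opt out of vsock with something like the following (the bridge/tcp values are the proposed options, not ones that exist in 2.53.0):

# Proposed (hypothetical) configuration for cloud / nested-virtualization hosts
crc config set network-mode bridge   # or: crc config set network-mode tcp
crc setup
crc start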

Operating System

Linux

Hypervisor

KVM

Did you run crc setup before crc start?

yes

Running on

VM

Steps to reproduce

Launch EC2 instance with RHEL 10
Install CRC 2.53.0
Run crc setup
Run crc start
Check crc ip - it returns 127.0.0.1
Start the daemon with crc daemon
Check for listening ports: sudo netstat -tlnp | grep -E "443|6443"
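Consolidated as shell commands on the instance (the CRC install step is omitted; the exact daemon invocation may differ):

crc setup
crc start
crc ip                                    # returns 127.0.0.1 instead of the VM IP
crc daemon &                              # "daemon has been started in the background"
sudo netstat -tlnp | grep -E "443|6443"   # returns nothing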

CRC version

CRC Version: 2.53.0+a6f712
OpenShift Version: 4.19.3
Hypervisor: KVM/libvirt

CRC status

crc status
CRC VM:          Running
OpenShift:       Unreachable (v4.19.3)
Disk Usage:      0B of 0B (Inside the CRC VM)
Cache Usage:     29.84GB
Cache Directory: /home/ec2-user/.crc/cache

CRC config

crc config view
- consent-telemetry                     : yes

Host Operating System

Host OS: Red Hat Enterprise Linux 10.0 (EC2 instance)
Instance: EC2 with 188 GB RAM, 400 GB disk

Expected behavior

CRC should expose ports 443 and 6443 on the host
crc ip should return the actual VM IP address
Remote access should be possible via SSH tunneling or direct connection
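For the last item, remote access from a workstation would then look roughly like this (key path and EC2 hostname are placeholders; 192.168.130.11 is the VM IP CRC would normally report):

# Hypothetical SSH tunnel forwarding the CRC ports through the EC2 host
# (forwarding local port 443 requires root; a high local port also works)
ssh -i ~/.ssh/ec2-key.pem \
    -L 6443:192.168.130.11:6443 \
    -L 443:192.168.130.11:443 \
    ec2-user@<ec2-public-dns>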

Actual behavior

crc start completes successfully
crc status shows OpenShift as "Running"
crc ip returns 127.0.0.1 instead of the expected VM IP (e.g., 192.168.130.11)
crc daemon reports "daemon has been started in the background" but doesn't bind ports
No ports are listening on 443 or 6443 (netstat -tlnp | grep -E "443|6443" returns empty)
The libvirt network "crc" doesn't exist (virsh net-list --all doesn't show it)
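The libvirt check from the last point, for completeness (whether crc setup is even supposed to create the network in vsock mode is part of the question):

sudo virsh net-list --all   # no "crc" network listed
sudo virsh net-info crc     # fails because the network was never defined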

CRC Logs

time="2025-09-05T00:58:12Z" level=debug msg="Using address: api.crc.testing:6443"
time="2025-09-05T00:58:12Z" level=debug msg="Dialing to 127.0.0.1:6443"
time="2025-09-05T00:58:12Z" level=debug msg="Using address: oauth-openshift.apps-crc.testing:443"
time="2025-09-05T00:58:12Z" level=debug msg="Dialing to 127.0.0.1:443"

Additional context

This breaks the primary use case of running CRC on cloud instances for remote development teams, which was previously possible with older CRC versions.
