Description
There are a few failures across the suite for Ubuntu here, so I'll break them down by type for ease.
Testing IOPS
We consistently see failures in `TestSequentialReadIOPS`, `TestSequentialWriteIOPS` and `TestRandomReadIOPS` across the combinations of machine-type [`c3d-standard-180`, `c4-standard-192`] and disk-type [`pd-balanced`, `hyperdisk-balanced`, `hyperdisk-extreme`].
The errors are of the form `...iops average was too low for vm <$combination>: expected at least 0.850000 of target <$target>, got <$result>`.
I've noticed that these specific machine-type/disk-type combinations fail consistently, but the ratio of actual to expected doesn't shift very much between runs (implying, to me at least, that the method of testing is sound but the expected values may be set too high).
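For reference, the failure reads like a straightforward ratio check against a hard-coded target, something along these lines (a sketch only; the identifiers are mine, and only the 0.85 margin comes from the error text):

```go
package storageperf_sketch

import "testing"

// Sketch of the check the error message implies. The names here are
// placeholders; only the 0.85 margin is taken from the error text.
func checkIOPS(t *testing.T, vm string, gotIOPS, targetIOPS float64) {
	const margin = 0.85 // "expected at least 0.850000 of target <$target>"
	if gotIOPS < margin*targetIOPS {
		t.Errorf("iops average was too low for vm %s: expected at least %f of target %f, got %f",
			vm, margin, targetIOPS, gotIOPS)
	}
}
```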
The `randReadIOPS`, `randWriteIOPS`, `seqReadBW` and `seqWriteBW` values are set in `storageperf/storage_perf_utils.go` and haven't changed since the repo was first migrated from `guest-test-infra` back in May last year. The `README.md` doesn't say where the values were sourced from, only that "...in a future change, this will be compared to a documented IOPS value".
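If the plan is to follow through on that README note, I'd imagine the flat constants eventually becoming a per machine-type/disk-type lookup against documented figures. A rough sketch of what I mean (nothing like this exists in the repo as far as I can tell, and the entries below are empty placeholders, not documented values):

```go
package storageperf_sketch

// perfTarget sketches a per-combination target; the field names mirror the
// existing flat values, but the struct itself is hypothetical.
type perfTarget struct {
	randReadIOPS, randWriteIOPS float64 // IOPS
	seqReadBW, seqWriteBW       float64 // bandwidth
}

// documentedTargets keys on machine-type/disk-type. The zero-valued entries
// are placeholders, NOT real documented IOPS or bandwidth figures.
var documentedTargets = map[string]perfTarget{
	"c3d-standard-180/hyperdisk-balanced": {},
	"c4-standard-192/hyperdisk-extreme":   {},
}
```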
This could very well be an oversight and the values do indeed need updating to be "less strict", but I'm also curious as to whether you're seeing the same or similar failures in the Debian images? Because of course if the Debian images are passing the tests across all the disk-type/machine-type combinations, we clearly have some work to do on the Ubuntu side 😄
`getCPUNvmeMapping` returning an error
This one I'm struggling to narrow down exactly, so I'll need to lean on your expertise. From the logs, `TestRandomReadIOPS` fails with `storage_perf_utils.go:544: failed to get hyperdisk additional options: error could not get cpu to nvme queue mapping: err exit status 1`, but I can't replicate where the `err` is thrown. Is it from `getCPUNvmeMapping` (executing `exec.Command("cat", "/sys/class/block/"+symlinkRealPath+"/mq/*/cpu_list")`) in `getHyperdiskAdditionalOptions`?