Description
My thought is that doing the renames in parallel should speed up the process a lot. Especially if the user has enough OpenAI quota, large files could be processed much faster by parallelising the work.
Local inference should also be able to run in parallel if the user has a good enough GPU at hand.
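Since both OpenAI quota and local GPU capacity cap how many requests can sensibly be in flight, the parallelism would likely need a bound. A minimal sketch of such a limiter (the names `mapWithLimit` and the worker-pool shape are my own, not anything from the codebase):

```javascript
// Run fn over items with at most `limit` calls in flight at once.
// Each rename would be one independent async model call.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0; // shared cursor; only advanced synchronously, so no race
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

The `limit` would be tuned to the user's rate limit or GPU; with `limit = 1` this degrades gracefully to the current sequential behaviour.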
One big problem is that I've gotten the best results when applying the renames in scope order, from the outermost scope inwards. So say we have:
```javascript
function a() {
  const b = (c) => {
    return c * c
  }
}
```
It seems that running the renames in the order `a -> b -> c` yields much better results than running `c -> b -> a`.
But if we had multiple same-level identifiers like:
```javascript
function a() {
  function b() {}
  function c() {}
  function d() {}
}
```
At least in theory it would be possible to rename `a` first and then `[b, c, d]` in parallel, and still get feasible results.
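The scheduling described above could look roughly like this. Everything here is a stand-in sketch: `renameIdentifier` just records call order where the real tool would make a model call, and the scope tree mirrors the `a` / `b, c, d` example:

```javascript
const order = []; // records the order renames are applied in

async function renameIdentifier(name) {
  order.push(name); // stand-in for the actual LLM / local-model call
}

async function renameScope(scope) {
  // The enclosing identifier is renamed first (outermost-first order)...
  await renameIdentifier(scope.name);
  // ...then all same-level children are renamed in parallel.
  await Promise.all(scope.children.map(renameScope));
}

// Scope tree for the example above: a encloses b, c and d.
const tree = {
  name: "a",
  children: [
    { name: "b", children: [] },
    { name: "c", children: [] },
    { name: "d", children: [] },
  ],
};
```

With real async model calls, `a` is guaranteed to finish before `b`, `c` and `d` start, while the three siblings' calls can overlap.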
In the best case there would be a second LLM step after the parallel run has finished, to check that all the variable names still make sense together.
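That verification step could be as simple as one extra prompt over the finished file. A hypothetical sketch, where `callModel` stands in for whatever completion API is in use (OpenAI or local inference):

```javascript
// Hypothetical follow-up pass: ask the model to flag identifiers whose
// names no longer fit after the independent parallel renames.
async function verifyNames(code, callModel) {
  const prompt =
    "The identifiers in this code were renamed by independent parallel passes.\n" +
    "List any identifiers whose names no longer match how they are used:\n\n" +
    code;
  return callModel(prompt);
}
```

Anything the model flags could then be re-run through the normal sequential rename path.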