The relevant per-item operations are so fast, mutex locks so expensive,
and iterators so efficient that it's actually faster to run
single-threaded across all the data than to spin up a bunch of threads
and have them effectively spinlock waiting on the global mutex involved,
whether it's taken directly or buried inside a channel.
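
To make that concrete, here's a minimal sketch of the single-threaded
shape this favors: one pass over the candidates with a plain iterator
chain and no shared state to lock. The `score` and `rank` functions and
the plain Vec<String> input are hypothetical stand-ins, not ivy's
actual types.

    // Minimal sketch: score every candidate in a single pass with a
    // plain iterator chain. `score` stands in for the real matcher;
    // each call is cheap, so there is nothing worth locking over.

    /// Returns Some(score) if every char of `query` appears in
    /// `candidate` in order; lower scores (shorter paths) rank first.
    fn score(candidate: &str, query: &str) -> Option<usize> {
        let mut remaining = query.chars().peekable();
        for c in candidate.chars() {
            if remaining.peek() == Some(&c) {
                remaining.next();
            }
        }
        if remaining.peek().is_none() {
            Some(candidate.len())
        } else {
            None
        }
    }

    fn rank<'a>(candidates: &'a [String], query: &str) -> Vec<(usize, &'a str)> {
        let mut scored: Vec<(usize, &'a str)> = candidates
            .iter()
            .filter_map(|c| score(c, query).map(|s| (s, c.as_str())))
            .collect();
        scored.sort_by_key(|&(s, _)| s);
        scored
    }

    fn main() {
        let candidates = vec![
            "src/main.rs".to_string(),
            "Cargo.toml".to_string(),
            "src/matcher.rs".to_string(),
        ];
        for (score, path) in rank(&candidates, "mrs") {
            println!("{score:4} {path}");
        }
    }
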
ivy_files(kubernetes) time: [10.209 ms 10.245 ms 10.286 ms]
change: [-36.781% -36.178% -35.601%] (p = 0.00 < 0.05)
Performance has improved.
ivy_match(file.lua) time: [1.1626 µs 1.1668 µs 1.1709 µs]
change: [+0.2131% +1.5409% +2.9109%] (p = 0.02 < 0.05)
Change within noise threshold.
- Use an async (i.e. unlimited-buffer) MPSC channel instead of an
  Arc<Mutex<Vec>> for storing the scored matches in Sorter (see the
  sketch below)
- Use Arc<Matcher> instead of Arc<Mutex<Matcher>> for the matcher, as
  it's not mutated and appears to be thread-safe.
This cuts average iteration time (on the benchmarked machine) from
25.98ms to 16.08ms for the ivy_files benchmark.
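
As a rough sketch of the resulting shape (the Matcher here is a toy
stand-in, and the chunking and std::sync::mpsc usage are assumptions
about the structure rather than ivy's exact code): workers share the
matcher through a plain Arc and push scored matches into an unbounded
channel, so the only synchronization left is the channel itself.

    use std::sync::mpsc;
    use std::sync::Arc;
    use std::thread;

    // Simplified stand-in for ivy's matcher: built once, never
    // mutated, so it can be shared behind a plain Arc with no Mutex.
    struct Matcher {
        query: String,
    }

    impl Matcher {
        fn score(&self, candidate: &str) -> Option<usize> {
            // Toy scoring: substring match, shorter candidates first.
            if candidate.contains(self.query.as_str()) {
                Some(candidate.len())
            } else {
                None
            }
        }
    }

    fn main() {
        let matcher = Arc::new(Matcher { query: "ma".to_string() });

        // Asynchronous (unbounded) channel: sends never block, so the
        // workers aren't queueing up on a Mutex around a shared Vec.
        let (tx, rx) = mpsc::channel::<(usize, String)>();

        let chunks: Vec<Vec<String>> = vec![
            vec!["src/main.rs".into(), "Cargo.toml".into()],
            vec!["src/matcher.rs".into(), "README.md".into()],
        ];

        let mut workers = Vec::new();
        for chunk in chunks {
            let tx = tx.clone();
            let matcher = Arc::clone(&matcher);
            workers.push(thread::spawn(move || {
                for candidate in chunk {
                    if let Some(score) = matcher.score(&candidate) {
                        // A send only fails if the receiver hung up.
                        let _ = tx.send((score, candidate));
                    }
                }
            }));
        }
        // Drop the original sender so the receiver sees end-of-stream
        // once every worker's clone has been dropped.
        drop(tx);

        // The sorter side: drain the channel, then sort once.
        let mut matches: Vec<(usize, String)> = rx.iter().collect();
        matches.sort_by_key(|&(score, _)| score);

        for worker in workers {
            worker.join().unwrap();
        }
        for (score, path) in matches {
            println!("{score:4} {path}");
        }
    }

Because the receiver just drains the channel, sorting happens once at
the end instead of under a lock per insertion.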