The per-item work is so cheap, taking a mutex lock is so expensive by comparison, and iterators are so efficient, that it's actually faster to run single-threaded across all the data than to spin up a bunch of threads and have them basically spin waiting on the one shared mutex, whether they take it directly or through a channel.
ivy_files(kubernetes) time: [10.209 ms 10.245 ms 10.286 ms]
change: [-36.781% -36.178% -35.601%] (p = 0.00 < 0.05)
Performance has improved.
ivy_match(file.lua) time: [1.1626 µs 1.1668 µs 1.1709 µs]
change: [+0.2131% +1.5409% +2.9109%] (p = 0.02 < 0.05)
Change within noise threshold.
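For contrast, here is a minimal, self-contained sketch of the kind of design this replaces: worker threads popping candidates off a shared Vec behind a Mutex and sending scored results back over an mpsc channel. This is not ivy's actual thread_pool code, and score() below is a trivial stand-in for the real matcher, but it shows the problem: each unit of work is so cheap that the workers mostly contend on the one lock.

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Trivial stand-in for the real matcher's scoring; the point is only that
// it is cheap relative to taking a lock.
fn score(candidate: &str) -> i64 {
    candidate.len() as i64
}

fn main() {
    let strings: Vec<String> = (0..100_000)
        .map(|i| format!("src/file_{}.lua", i))
        .collect();

    // A single shared work queue: every worker takes the same Mutex to pop
    // one cheap item at a time.
    let queue = Arc::new(Mutex::new(strings));
    let (tx, rx) = mpsc::channel();

    let mut handles = Vec::new();
    for _ in 0..4 {
        let queue = Arc::clone(&queue);
        let tx = tx.clone();
        handles.push(thread::spawn(move || loop {
            let candidate = match queue.lock().unwrap().pop() {
                Some(candidate) => candidate,
                None => break,
            };
            tx.send((score(&candidate), candidate)).unwrap();
        }));
    }
    drop(tx); // close the original sender so the receiver can finish

    // Collect results; iteration ends once every worker has exited.
    let results: Vec<(i64, String)> = rx.iter().collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("scored {} candidates", results.len());
}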
Here is the final single-threaded version, in Rust:
use super::matcher;

pub struct Match {
    pub score: i64,
    pub content: String,
}

pub struct Options {
    pub pattern: String,
    pub minimum_score: i64,
}

impl Options {
    pub fn new(pattern: String) -> Self {
        Self {
            pattern,
            minimum_score: 20,
        }
    }
}

pub fn sort_strings(options: Options, strings: Vec<String>) -> Vec<Match> {
    let matcher = matcher::Matcher::new(options.pattern);

    // Score every candidate, drop anything at or below the configured
    // minimum, then sort the survivors by score.
    let mut matches = strings
        .into_iter()
        .map(|candidate| Match {
            score: matcher.score(candidate.as_str()),
            content: candidate,
        })
        .filter(|m| m.score > options.minimum_score)
        .collect::<Vec<Match>>();
    matches.sort_by(|a, b| a.score.cmp(&b.score));
    matches
}
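A quick caller sketch for completeness; the module name here is assumed, not taken from ivy itself:

// Hypothetical caller; `sorter` is whatever this module is mounted as.
let options = sorter::Options::new(String::from("files"));
let matches = sorter::sort_strings(options, vec![
    String::from("src/files.rs"),
    String::from("README.md"),
    String::from("Cargo.toml"),
]);
// Matches come back sorted by ascending score, lowest first.
for m in &matches {
    println!("{:>4} {}", m.score, m.content);
}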