y than the solution presented above; however, its throughput can easily be over an order of magnitude faster. This is a good general-purpose trade-off to make: you rarely lose, but you often gain big.

**NOTE:** The name `memchr` comes from the corresponding routine in `libc`. A key advantage of using this library is that its performance is not tied to the quality of the `memchr` implementation in whatever `libc` you happen to be using, which can vary greatly from platform to platform.

But what about substring search? This one is a bit more complicated. The primary reason for its existence is still performance, but it's also useful because Rust's core library doesn't expose any substring search routine on arbitrary bytes. The only substring search routine that exists works exclusively on valid UTF-8.

So if you do have valid UTF-8, is there a reason to use this over the standard library's substring search routine? Yes. This routine is faster on almost every metric, including latency.

The natural question, then, is why this implementation isn't in the standard library, even if only for searching on UTF-8. The reason is that the implementation details for using SIMD in the standard library haven't quite been worked out yet.

**NOTE:** Currently, only `x86_64`, `wasm32` and `aarch64` targets have vector accelerated implementations of `memchr` (and friends) and `memmem`.

# Crate features

* **std** - When enabled (the default), this will permit features specific to the standard library. Currently, the only thing used from the standard library is runtime SIMD CPU feature detection. This means that this feature must be enabled to get AVX2 accelerated routines on `x86_64` targets without enabling the `avx2` feature at compile time, for example. When `std` is not enabled, this crate will still attempt to use SSE2 accelerated routines on `x86_64`. It will also use AVX2 accelerated routines when the `avx2` feature is enabled at compile time. In general, enable this feature if you can.
* **alloc** - When enabled (the default), APIs in this crate requiring some kind of allocation will become available. For example, the [`memmem::Finder::into_owned`](crate::memmem::Finder::into_owned) API and the [`arch::all::shiftor`](crate::arch::all::shiftor) substring search implementation. Otherwise, this crate is designed from the ground up to be usable in core-only contexts, so the `alloc` feature doesn't add much currently. Notably, disabling `std` but enabling `alloc` will **not** result in the use of AVX2 on `x86_64` targets unless the `avx2` feature is enabled at compile time. (With `std` enabled, AVX2 can be used even without the `avx2` feature enabled at compile time by way of runtime CPU feature detection.)

* **logging** - When enabled (disabled by default), the `log` crate is used to emit log messages about what kinds of `memchr` and `memmem` algorithms are used. Namely, both `memchr` and `memmem` have a number of different implementation choices depending on the target and CPU, and the log messages can help show what specific implementations are being used. Generally, this is useful for debugging performance issues.

* **libc** - **DEPRECATED**. Previously, this enabled the use of the target's `memchr` function from whatever `libc` was linked into the program. This feature is now a no-op because this crate's implementation of `memchr` should now be sufficiently fast on a number of platforms that `libc` should no longer be needed. (This feature is somewhat of a holdover from this crate's origins. Originally, this crate was literally just a safe wrapper function around the `memchr` function from `libc`.)
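As a sketch of how the features above might be toggled from a consumer's `Cargo.toml`, under the assumption of a recent `2.x` release (the version number and `RUSTFLAGS` invocation here are illustrative, not prescriptive):

```toml
# Default build: `std` and `alloc` enabled, so runtime CPU feature
# detection can pick AVX2 on `x86_64` without any extra flags.
[dependencies]
memchr = "2"

# core-only build: disable `std` (and `alloc`). In this configuration,
# AVX2 routines on `x86_64` are only used if the `avx2` target feature
# is enabled at compile time, e.g. with
# RUSTFLAGS="-C target-feature=+avx2"; otherwise SSE2 is still attempted.
# memchr = { version = "2", default-features = false }
```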
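To make the "no substring search on arbitrary bytes" point above concrete, here is a minimal std-only sketch of naive substring search over `&[u8]`. The `find_naive` name is our own illustration, not part of this crate; the crate's `memmem::find` routine provides a much faster drop-in for this shape of API.

```rust
// Naive substring search over arbitrary bytes, using only `std`.
// `str::find` requires valid UTF-8, so for a `&[u8]` haystack the
// standard library leaves you to write something like this by hand.
// (`find_naive` is an illustrative name; `memchr::memmem::find` is
// the accelerated equivalent with the same `Option<usize>` shape.)
fn find_naive(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    if needle.is_empty() {
        // By convention, the empty needle matches at offset 0.
        return Some(0);
    }
    // Compare the needle against every window of the same length.
    haystack
        .windows(needle.len())
        .position(|window| window == needle)
}

fn main() {
    let haystack = b"foo bar baz";
    assert_eq!(find_naive(haystack, b"bar"), Some(4));
    assert_eq!(find_naive(haystack, b"quux"), None);
}
```

This scans every position and runs in `O(haystack * needle)` time in the worst case, which is exactly the kind of cost the vectorized implementations in this crate are designed to avoid.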