Optimise codegen for constant segments in bit arrays on JS #3724
---
Is there much value in this? Seeing as constants are only evaluated once, I'd be surprised if the difference was measurable, let alone impactful.
---
**1. Applies to all constant segments in bit arrays**

Yes, for top-level constants, which are evaluated only once, the performance difference is not worth it. However, this proposal applies to any constant segment that's part of a bit array construction, i.e. it's not limited to top-level constants. I didn't give a specific example of this, but code such as

```gleam
pub fn go(s: String) {
  <<s:utf8, 0.0:64-float, 0x123456789ABC:48>>
}
```

also benefits, because the two constant segments get compile-time evaluated. The above function is ~2x faster with this proposal, and ~3x faster when combined with #3725.

A real-world benchmark I have is a large lookup table of 24,000 16-bit integers. It evaluates ~21x faster with this proposal when combined with #3725, but given that it's a top-level constant that performance gain is not important. However, its generated JS code is 185KiB (52%) smaller.

**2. Fixes arbitrarily-sized constant integers in bit arrays**

This proposal can fix arbitrarily-sized constant integers in bit array expressions on JS, if desired. These are currently subject to the precision limits of a JS number:

```gleam
pub fn main() {
  // This is currently False on JS, but would change to True
  <<0xFFFFFFFFFFFFFFFF:64>> == <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>>

  let i = 0xFFFFFFFFFFFFFFFF
  // This remains False on JS, as the use of `i` triggers the limitations of a JS number,
  // unless we determined that `i` is a compile-time constant and evaluated it
  <<i:64>> == <<0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF>>
}
```

If it's the case that Gleam makes no guarantees about what happens when the limits of the target's integer type are hit, then this change could be made. Is existing code permitted to rely on such behaviour on JS? I can't see any reason to do so, as you'd get a different bit array on each target. The fact that the result is different when the literal is pulled out into a variable is also inconsistent.

If the behaviour can't change here, then limiting compile-time evaluation to integers <= 48 bits wide would maintain all current behavioural edge cases, since every integer that fits in 48 bits is exactly representable as a JS number. Most constants would fit under this limit.
---
Implemented in #3724
---
Currently when targeting JS the compiler compiles a bit array construction into code that converts each segment at runtime, even when every segment is constant.
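For example, take a fully constant bit array like this one (illustrative values):

```gleam
const table = <<0x12, 0x3456:16, 0x789A:16>>
```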
The current output looks something like this:
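A sketch assuming the JS prelude's runtime helpers (`toBitArray` and a sized-int helper; exact names and signatures are approximate):

```javascript
import { toBitArray, sizedInt } from "./gleam.mjs";

// Each segment is converted to bytes when this code runs
const table = toBitArray([0x12, sizedInt(0x3456, 16, true), sizedInt(0x789A, 16, true)]);
```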
However, because all inputs are constant this could instead be evaluated at compile time to give more efficient output:
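Something along these lines, with the byte values precomputed by the compiler (a sketch; the exact emitted form is an implementation choice):

```javascript
import { toBitArray } from "./gleam.mjs";

// The same bytes, already evaluated at compile time
const table = toBitArray([new Uint8Array([0x12, 0x34, 0x56, 0x78, 0x9a])]);
```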
This is much faster to evaluate and also reduces JS code size, albeit only when every segment is constant. In my case this change would be quite significant for some large lookup tables.
This change would also have the secondary effect of adding support for arbitrarily sized constant integers in bit array expressions on JS, as the compiler would parse the literal and convert it to its byte representation.
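For example, reusing the 64-bit literal discussed in the replies above, a constant like:

```gleam
<<0xFFFFFFFFFFFFFFFF:64>>
```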
would compile to
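something along these lines, with the full 64-bit value expanded to bytes at compile time (a sketch):

```javascript
toBitArray([new Uint8Array([0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF])]);
```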
Outstanding questions are:

- The `num-bigint` crate or similar would need to be added as a dependency to `compiler-core`. Is this ok? (A sketch of its use follows below.)

Note that constant strings in bit arrays can also be compile-time evaluated, but the raw bytes would be larger than the quoted string, so the cost/benefit isn't as clear.
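For context on the `num-bigint` question, here is a minimal sketch of parsing an oversized literal and producing segment bytes; this assumes the `num-bigint` and `num-traits` crates and is not the actual `compiler-core` implementation:

```rust
use num_bigint::BigUint;
use num_traits::Num;

fn main() {
    // Parse a constant literal too large to represent exactly as a JS number (f64).
    let value = BigUint::from_str_radix("FFFFFFFFFFFFFFFF", 16).unwrap();

    // Convert to big-endian bytes; a real implementation would also pad or
    // truncate to the declared segment width (here 64 bits = 8 bytes).
    let bytes = value.to_bytes_be();
    println!("{bytes:02X?}"); // prints [FF, FF, FF, FF, FF, FF, FF, FF]
}
```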