optimize edit-tool rendering #463
Conversation
- Add parallelization to TUI rendering operations
- Create generic concurrency helpers for improved performance
- Refactor measurement utilities to support additional logging parameters
- Update messages component to use simplified Measure function with message count logging

🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
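The "generic concurrency helpers" themselves aren't shown in this thread. A minimal sketch of what a parallel-map style helper could look like in Go (the name `MapParallel` and its signature are illustrative, not the actual API added in this PR):

```go
package util

import "sync"

// MapParallel applies fn to every element of items concurrently and
// returns the results in the original order. Hypothetical helper,
// sketched only to illustrate the "generic concurrency helper" idea
// described in the PR summary.
func MapParallel[T, R any](items []T, fn func(T) R) []R {
	results := make([]R, len(items))
	var wg sync.WaitGroup
	wg.Add(len(items))
	for i, item := range items {
		go func(i int, item T) {
			defer wg.Done()
			results[i] = fn(item)
		}(i, item)
	}
	wg.Wait()
	return results
}
```

In the messages component, something like this would replace a serial render loop over all messages with one render per goroutine, joined before composing the final view.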
opening for review to show that i'm willing to converse over this
I might be daft, but why are we even rendering ALL messages? Couldn't we just put together a viewport-based lazy renderer somehow?
My change for doing stuff concurrently is not necessarily the best thing to do, but it's better than serial and a somewhat easy iteration. it's saving over 80% of rendering time on my test case. if you want to implement windowing, ship it, that should be the best way. i just want to not wait for 10 extra seconds when i reduce my tmux split
I will try to look for the specific thing that is actually slow. Maybe it's just syntax highlighting, and then we can even cache the result of the ANSI output for that message
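If highlighting does turn out to be the hot spot, a minimal sketch of caching the rendered ANSI per code block might look like this (the cache key, package name, and `highlight` callback are assumptions, not the project's actual structures):

```go
package render

import (
	"crypto/sha256"
	"sync"
)

var (
	ansiCacheMu sync.RWMutex
	ansiCache   = map[[32]byte]string{}
)

// HighlightCached returns the cached ANSI output for code if present,
// otherwise computes it via the supplied highlight function and stores it.
// Assumes highlighting is deterministic for a given source + language.
func HighlightCached(code, lang string, highlight func(code, lang string) string) string {
	key := sha256.Sum256([]byte(lang + "\x00" + code))

	ansiCacheMu.RLock()
	if out, ok := ansiCache[key]; ok {
		ansiCacheMu.RUnlock()
		return out
	}
	ansiCacheMu.RUnlock()

	out := highlight(code, lang)

	ansiCacheMu.Lock()
	ansiCache[key] = out
	ansiCacheMu.Unlock()
	return out
}
```

In a real TUI the terminal width would likely need to be part of the key as well, since wrapping changes the rendered output when the pane is resized.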
i unplug pretty hard over the weekend these days (spousal orders) so it'll be monday before i can pull this down and test it out; want to play with it as part of review. excited to dig in, thanks for digging into this!
yeah we should figure out virtualized scroll for sure |
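For reference, a virtualized approach would only render the messages whose heights intersect the visible window. A rough sketch, with all names hypothetical and per-message heights assumed to be cached:

```go
package messages

// visibleRange returns the half-open index range [start, end) of
// messages that intersect a viewport of viewportHeight rows starting
// scrollOffset rows from the top of the rendered log.
// heights[i] is the rendered height of message i.
func visibleRange(heights []int, scrollOffset, viewportHeight int) (start, end int) {
	top, bottom := scrollOffset, scrollOffset+viewportHeight
	y := 0
	start, end = len(heights), len(heights)
	for i, h := range heights {
		if y+h > top && i < start {
			start = i // first message that reaches into the viewport
		}
		if y >= bottom {
			end = i // first message that starts below the viewport
			break
		}
		y += h
	}
	if start > end {
		start = end
	}
	return start, end
}
```

Only messages in [start, end) would be rendered each frame; the heights themselves would need to be cached or estimated so this scan stays cheap.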
adamdotdevin left a comment
pulled and reviewed, going to merge, this is a big improvement over current state of things
Co-authored-by: opencode <[email protected]>
Co-authored-by: Adam <[email protected]>

in the future i think we should add opentelemetry for easier debugging and perf reporting, but for now i added the util.Measure helper. Maybe it will only be used in the messages pane, which is the heaviest one; doing it once per message is VERY verbose, and doing it once per message part is not acceptable.
I have concerns about spawning so many goroutines, and I need to understand why stuff is slow. It feels weird to not think about concurrency limits -- for instance, if the syntax highlighting was using bat, you wouldn't want to spawn hundreds of processes in parallel (or would you?).
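On the concurrency-limit point, one common pattern is to bound the fan-out with a buffered channel acting as a counting semaphore. A hedged sketch; the helper name and `maxWorkers` parameter are made up for illustration, not what this PR ships:

```go
package util

import "sync"

// MapParallelBounded is like a parallel map but never runs more than
// maxWorkers calls to fn at once, which matters if a render shells out
// to an external highlighter (e.g. something like bat).
func MapParallelBounded[T, R any](items []T, maxWorkers int, fn func(T) R) []R {
	results := make([]R, len(items))
	sem := make(chan struct{}, maxWorkers) // counting semaphore
	var wg sync.WaitGroup
	wg.Add(len(items))
	for i, item := range items {
		go func(i int, item T) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			results[i] = fn(item)
		}(i, item)
	}
	wg.Wait()
	return results
}
```

A limit somewhere around the CPU count is a typical default for pure in-process rendering; for work that spawns subprocesses, a much smaller cap would be the safer choice.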
resolves #446