
Conversation

@Schniz (Contributor) commented Jun 27, 2025

  • Add parallelization to TUI message rendering operations
  • Create generic concurrency helpers for improved performance
  • Refactor measurement utilities to support additional logging parameters
  • Update messages component to use simplified Measure function with message count logging

In the future I think we should add OpenTelemetry for easier debugging and perf reporting, but for now I added the util.Measure helper. Maybe it will only be used in the messages pane, which is the heaviest one: doing it once per message is VERY verbose, and doing it once per message part is not acceptable.
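The diff for the helper isn't shown in this conversation, so purely as an illustration (the name, signature, and logging backend are assumptions, not opencode's actual code), a timing helper along the lines of util.Measure with extra logging parameters could look like this:

```go
package main

import (
	"fmt"
	"log/slog"
	"time"
)

// Measure starts a timer and returns a closure that, when called, logs
// the elapsed time together with any extra key/value pairs.
// Hypothetical sketch; the real util.Measure in opencode may differ.
func Measure(name string, args ...any) func() {
	start := time.Now()
	return func() {
		kv := append([]any{"duration", time.Since(start)}, args...)
		slog.Debug(name, kv...)
	}
}

func main() {
	// Typical use at the top of a render function:
	//   defer util.Measure("messages.render", "count", len(msgs))()
	done := Measure("messages.render", "count", 3)
	time.Sleep(10 * time.Millisecond)
	done()
	fmt.Println("measured")
}
```

The defer-a-returned-closure pattern keeps the call site to a single line, which is what makes per-pane (rather than per-message) measurement cheap to sprinkle in.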

I have concerns about spawning so many goroutines; I need to understand why things are slow in the first place.
It feels wrong not to think about concurrency limits: for instance, if the syntax highlighting shelled out to bat,
you wouldn't want to spawn hundreds of processes in parallel (or would you?).
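The concurrency-limit concern above is usually addressed with a counting semaphore. As a sketch (not the helper this PR adds; `renderAll` and its shape are made up for illustration), a bounded fan-out over message parts could look like:

```go
package main

import (
	"fmt"
	"sync"
)

// renderAll renders items concurrently but caps the number of in-flight
// goroutines, so expensive work (e.g. shelling out to a highlighter)
// never fans out unboundedly. Hypothetical sketch.
func renderAll(items []string, limit int, render func(string) string) []string {
	out := make([]string, len(items))
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	for i, it := range items {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot (blocks when limit reached)
		go func(i int, it string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			out[i] = render(it)      // indexed write keeps output ordered
		}(i, it)
	}
	wg.Wait()
	return out
}

func main() {
	got := renderAll([]string{"a", "b", "c"}, 2, func(s string) string { return s + "!" })
	fmt.Println(got) // [a! b! c!]
}
```

For pure in-process work like lipgloss-style string rendering, a limit near GOMAXPROCS is plenty; the limit matters most when each unit spawns a process or opens a file descriptor.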

resolves #446


🤖 Generated with [opencode](https://opencode.ai)

Co-Authored-By: opencode <[email protected]>
@Schniz Schniz marked this pull request as ready for review June 27, 2025 14:25
@Schniz (Contributor, Author) commented Jun 27, 2025

Opening for review to show that I'm willing to converse over this :relieved-telegram:

@thdxr thdxr requested a review from adamdotdevin June 27, 2025 18:11
@xHeaven (Contributor) commented Jun 27, 2025

I might be daft, but why are we even rendering ALL messages? Couldn't we just put together a viewport-based lazy renderer somehow?

@Schniz (Contributor, Author) commented Jun 28, 2025

My change to do the work concurrently is not necessarily the best approach, but it's better than serial and a fairly easy iteration. It's saving over 80% of rendering time on my test case.

If you want to implement windowing, ship it; that would be the best way. I just don't want to wait 10 extra seconds when I shrink my tmux split.

@Schniz (Contributor, Author) commented Jun 28, 2025

I will try to find the specific thing that is actually slow. Maybe it's just syntax highlighting, and then we could even cache the resulting ANSI output for that message.
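The caching idea above can be sketched as a small memoization layer. Everything here is hypothetical (the type, the key scheme, and the callback are illustration only, not opencode's code); the key would need to include the render width, since wrapping changes the ANSI output:

```go
package main

import (
	"fmt"
	"sync"
)

// ansiCache memoizes highlighted ANSI output keyed by message part and
// width, so a resize only pays the highlighting cost once per part.
// Hypothetical sketch.
type ansiCache struct {
	mu sync.Mutex
	m  map[string]string
}

func (c *ansiCache) Render(key string, highlight func() string) string {
	c.mu.Lock()
	if c.m == nil {
		c.m = map[string]string{}
	}
	if v, ok := c.m[key]; ok {
		c.mu.Unlock()
		return v
	}
	c.mu.Unlock()
	v := highlight() // compute outside the lock so parts render in parallel
	c.mu.Lock()
	c.m[key] = v
	c.mu.Unlock()
	return v
}

func main() {
	calls := 0
	c := &ansiCache{}
	hl := func() string { calls++; return "\x1b[32mcode\x1b[0m" }
	c.Render("msg1|w=80", hl)
	c.Render("msg1|w=80", hl) // cache hit, no second highlight
	fmt.Println(calls)        // prints 1
}
```

Computing outside the lock means two goroutines racing on the same cold key can both highlight; something like singleflight would deduplicate that, at the cost of more machinery.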

@adamdotdevin (Contributor) commented

I unplug pretty hard over the weekend these days (spousal orders), so it'll be Monday before I can pull this down and test it out; I want to play with it as part of review. Excited to dig in, thanks for working on this!

@adamdotdevin (Contributor) commented

> I might be daft, but why are we even rendering ALL messages? Couldn't we just put together a viewport-based lazy renderer somehow?

yeah we should figure out virtualized scroll for sure
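The virtualized-scroll idea boils down to clamping a window over the message list and rendering only what falls inside it. A minimal sketch, with all names assumed for illustration (this is not opencode's implementation):

```go
package main

import "fmt"

// visibleRange clamps a scroll window to [0, total). With virtualized
// scrolling, only messages whose lines fall inside this window are
// rendered, so resize cost scales with the viewport, not the session.
// Hypothetical sketch.
func visibleRange(total, offset, height int) (start, end int) {
	start = min(max(offset, 0), total)
	end = min(start+height, total)
	return
}

func main() {
	// 1000 messages, scrolled near the bottom, 40-row viewport:
	s, e := visibleRange(1000, 990, 40)
	fmt.Println(s, e) // prints 990 1000
}
```

The tricky part in practice is mapping line offsets back to messages when each message renders to a variable number of wrapped lines, which is why a height cache per message usually accompanies this.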

@adamdotdevin (Contributor) left a comment


Pulled and reviewed; going to merge. This is a big improvement over the current state of things.

@adamdotdevin adamdotdevin merged commit f618e56 into sst:dev Jun 28, 2025
achembarpu pushed a commit to achembarpu/opencode that referenced this pull request Aug 4, 2025
Co-authored-by: opencode <[email protected]>
Co-authored-by: Adam <[email protected]>


Development

Successfully merging this pull request may close these issues.

Resizing takes long when sessions are large

3 participants