When working with GitHub SpecKit, does maintaining a large number of spec files negatively affect context? #1336
Replies: 4 comments 3 replies
Instead of "Archive older specs", why not just delete them since you can always see the git history? Ideally, AI could optimize specs, such as:
I don't quite understand why you need to retain any specs at all after shipping code. AI is perfectly capable of reading code and understanding what it does.
However, you probably do want to create and maintain an up-to-date codebase registry: a list of all modules, their interfaces, contracts, and, ideally, their high-level logic, so that AI can read from it instead of reading every file in full each time.
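A minimal sketch of how such a registry could be generated, assuming a Python codebase; the registry format and function names below are illustrative, not part of SpecKit:

```python
# Hypothetical sketch: build a lightweight "codebase registry" so an AI agent
# can skim module interfaces instead of re-reading every source file.
# The registry shape (module / interface / summary) is an assumption.
import ast
from pathlib import Path


def summarize_module(path: Path) -> dict:
    """Extract top-level functions/classes and the first line of their docstrings."""
    tree = ast.parse(path.read_text())
    entries = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node) or ""
            entries.append({
                "name": node.name,
                "kind": type(node).__name__,
                "summary": doc.splitlines()[0] if doc else "",
            })
    return {"module": path.stem, "interface": entries}


def build_registry(root: Path) -> list[dict]:
    """One registry entry per Python module under `root`."""
    return [summarize_module(p) for p in sorted(root.rglob("*.py"))]
```

The output could be committed as a single JSON or markdown file and regenerated in CI, so it never drifts from the code the way a retained spec can.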
In a long-running project where each feature or change request generates its own spec, the repository may accumulate dozens or even hundreds of specs. At that point, how do teams ensure that developers (and AI tools) can still identify which specs are relevant and current?
If older specs describe outdated assumptions or superseded decisions, can they cause confusion or incorrect implementation when someone reads them later?
From a practical standpoint, is it recommended to:
- Archive older specs?
- Periodically purge obsolete specs?
- Keep all specs but clearly mark them as deprecated or superseded?
Are there established best practices for organizing spec files at scale (for example, by lifecycle state, version, or feature area) so that context remains clear as the project grows?
In short, how should teams balance historical record vs. contextual clarity when the number of SpecKit specs becomes large, and what lifecycle management strategies are recommended?
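For the "mark as deprecated or superseded" option, one lightweight convention (an assumption, not a SpecKit feature) could be a `Status:` line in each spec file, which tooling or an AI agent can use to filter out stale specs without deleting the historical record:

```python
# Hypothetical convention: each spec carries a line like "Status: active"
# or "Status: superseded". This sketch partitions specs by that marker.
import re
from pathlib import Path

STATUS_RE = re.compile(r"^Status:\s*(\S+)", re.MULTILINE)


def partition_specs(spec_dir: Path) -> dict[str, list[str]]:
    """Group spec filenames by their declared status ("unknown" if absent)."""
    buckets: dict[str, list[str]] = {}
    for spec in sorted(spec_dir.glob("*.md")):
        m = STATUS_RE.search(spec.read_text())
        status = m.group(1) if m else "unknown"
        buckets.setdefault(status, []).append(spec.name)
    return buckets
```

A script like this could run in CI to keep an index of active specs current, or feed an agent only the `active` bucket as context.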