A lightweight tool that splits the supplied text into multiple chunks based on a customizable token limit.
Token counts are estimated with GPT Tokenizer.
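As a rough illustration, counting tokens with the npm `gpt-tokenizer` package (an assumption; the tool may bundle a different GPT tokenizer build) could look like this:

```ts
// Minimal token-counting sketch, assuming the npm "gpt-tokenizer" package.
// Any GPT-compatible tokenizer exposing an encode() function would work.
import { encode } from "gpt-tokenizer";

// Estimate how many tokens a piece of text occupies.
function countTokens(text: string): number {
  return encode(text).length;
}
```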
Choose a predefined header or enter a custom one (use \n for newlines). Chunk headers do not count toward the configured token limit.
Per-chunk soft token limit with a corresponding flex margin, allowing tokens to be distributed evenly across the minimal number of chunks. Each chunk may hold at most limit + margin tokens.
Splits the text into the fewest chunks possible while keeping every chunk at or below limit + margin tokens.
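A minimal sketch of this splitting strategy (a hypothetical reconstruction, not the tool's actual implementation): pick the smallest chunk count whose hard cap of limit + margin can hold all tokens, then spread tokens evenly so chunks come out roughly equal in size.

```ts
// Sketch of even distribution across minimal chunks, assuming the
// npm "gpt-tokenizer" package for encode()/decode().
import { encode, decode } from "gpt-tokenizer";

function splitIntoChunks(
  text: string,
  limit: number,   // soft per-chunk token limit
  margin: number,  // flex margin: the hard cap is limit + margin
  header = "",     // optional chunk header, not counted toward the limit
): string[] {
  const tokens = encode(text);
  const cap = limit + margin;
  // Fewest chunks that can hold all tokens without exceeding the cap.
  const numChunks = Math.max(1, Math.ceil(tokens.length / cap));
  // Even per-chunk target; ceil(tokens / numChunks) <= cap by construction.
  const target = Math.ceil(tokens.length / numChunks);

  const chunks: string[] = [];
  for (let i = 0; i < tokens.length; i += target) {
    // Header is prepended after slicing, so it never counts toward the cap.
    chunks.push(header + decode(tokens.slice(i, i + target)));
  }
  return chunks;
}
```

Slicing the token array directly is just the simplest way to show the distribution math; a production splitter would presumably prefer whitespace or sentence boundaries so words are never cut mid-token.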