Readers read text streams and create text blocks to process.
Tokenizers read text blocks and divide them up into individual tokens to be processed.
Splitters split up tokens into multiple smaller tokens as an alternative interpretation.
CharacterSplitter - Tokens are split wherever any of a specified set of characters (typically punctuation) is encountered.
Normalizers create alternative, normalized interpretations of original tokens.
Recognizers identify and flag tokens based on their character patterns.
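The pipeline above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the actual API: every function name here is hypothetical, and the reader stage is simulated with an in-memory text block.

```python
import re

def tokenize(text):
    """Tokenizer: divide a text block into individual tokens."""
    return text.split()

def character_split(token, chars=".,;:!?"):
    """CharacterSplitter: split a token wherever any of the given
    characters (typically punctuation) occurs."""
    return [p for p in re.split("[" + re.escape(chars) + "]", token) if p]

def normalize(token):
    """Normalizer: produce an alternative normalized form of a token
    (here, simple lowercasing)."""
    return token.lower()

def recognize(token):
    """Recognizer: flag a token based on its character pattern
    (here, a simple all-digits check)."""
    return "NUMBER" if token.isdigit() else "WORD"

# Reader stage simulated with an in-memory text block.
block = "Order 66, issued at 0900."
for token in tokenize(block):
    for part in character_split(token):
        norm = normalize(part)
        print(norm, recognize(norm))
```

Each stage consumes the previous stage's output, so alternative interpretations (split or normalized tokens) flow through the same downstream steps as the originals.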