The Chat Model Prompter takes a prompt and the conversation history of human and AI messages, and generates a response to the prompt informed by the previous conversation.
If you want to reduce token usage, consider truncating the conversation history to a reasonable length (e.g. the last 5 conversation steps) before feeding it into the Chat Model Prompter.
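The truncation step can be sketched as follows. This is a minimal illustration, not the node's actual implementation: the message representation (a list of role/text pairs) and the `truncate_history` helper are assumptions, and a "conversation step" is taken to mean one human message plus the AI reply that follows it.

```python
# Sketch of trimming conversation history before prompting.
# The (role, text) tuple format and step definition are assumptions
# for illustration only.

def truncate_history(history, max_steps=5):
    """Keep only the last `max_steps` conversation steps.

    One step = one human message + one AI reply,
    so the last 2 * max_steps messages are kept.
    """
    return history[-2 * max_steps:]

# Build a sample 10-step conversation (20 messages).
history = []
for i in range(10):
    history.append(("human", f"question {i}"))
    history.append(("ai", f"answer {i}"))

truncated = truncate_history(history, max_steps=5)
print(len(truncated))       # 10 messages = 5 steps
print(truncated[0])         # ('human', 'question 5')
```

Passing only the truncated history to the model keeps the most recent context while bounding the number of tokens each prompt consumes.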