Traditional reinforcement learning (RL) struggles to replicate human-like behaviors, generalize effectively in multi-agent scenarios, and overcome inherent interpretability issues. These tasks become even more difficult when they require a deep understanding of the environment, coordination of individual agents' intentions and driving styles across diverse scenarios, and the joint optimization of safety, efficiency, and comfort in dynamic environments. Recently, Large Language Model (LLM)-enhanced methods have shown promise in improving generalization and interpretability. However, these approaches primarily focus on single-agent scenarios and often neglect the coordination required among multiple road users. Therefore, in this paper, we introduce the Cascading Cooperative Multi-Agent (CCMA) framework, designed to address these challenges by promoting human-like behaviors and fostering multi-level cooperation across diverse multi-agent driving tasks, ultimately improving both micro- and macro-level efficiency in complex driving environments. Specifically, the CCMA framework integrates RL for individual interactions, a fine-tuned LLM for regional cooperation, a reward function for global optimization, and a Retrieval-Augmented Generation (RAG) mechanism to dynamically optimize decision-making across complex driving scenarios. Our experiments demonstrate that CCMA not only enhances human-like behavior and interpretability but also outperforms traditional methods in multi-agent environments.
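
Since the abstract only names the framework's components at a high level, the following is a minimal, purely illustrative sketch of how such a cascading pipeline could be wired together. All class names, method signatures, and the simple retrieval, coordination, and reward logic below are assumptions introduced for illustration and are not taken from the paper; a real system would replace the rule-based stand-ins with trained RL policies, an actual LLM call, and a learned or hand-designed global objective.

```python
"""Illustrative sketch of a cascading cooperative multi-agent decision loop.

Assumptions (not from the paper): the per-agent policy, the LLM coordinator
interface, the retrieval store, and the global reward are hypothetical
stand-ins used only to show how the three cascaded levels could connect.
"""
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AgentObservation:
    agent_id: str
    speed: float
    headway: float  # distance to the leading vehicle (m); hypothetical feature


@dataclass
class ScenarioMemory:
    """Toy RAG store: retrieves past scenario notes by naive keyword matching."""
    entries: List[str] = field(default_factory=list)

    def retrieve(self, query: str, k: int = 2) -> List[str]:
        hits = [e for e in self.entries if any(w in e for w in query.split())]
        return hits[:k]


def individual_rl_policy(obs: AgentObservation) -> str:
    """Stand-in for a trained per-agent RL policy (here: a trivial rule)."""
    return "decelerate" if obs.headway < 10.0 else "keep_lane"


def regional_llm_coordinator(proposals: Dict[str, str],
                             retrieved: List[str]) -> Dict[str, str]:
    """Stand-in for the fine-tuned LLM that reconciles individual proposals.

    A real coordinator would prompt an LLM with the proposals and retrieved
    context; here we merely break a symmetric 'decelerate' conflict.
    """
    decel = [a for a, act in proposals.items() if act == "decelerate"]
    if len(decel) > 1 and retrieved:
        # Based on the (toy) retrieved precedent, only the first agent yields.
        return {a: ("decelerate" if a == decel[0] else "keep_lane")
                for a in proposals}
    return proposals


def global_reward(actions: Dict[str, str]) -> float:
    """Toy global objective: fewer simultaneous decelerations = smoother flow."""
    return -sum(act == "decelerate" for act in actions.values())


def cascade_step(observations: List[AgentObservation],
                 memory: ScenarioMemory) -> Dict[str, str]:
    # 1. Individual level: each agent's RL policy proposes an action.
    proposals = {o.agent_id: individual_rl_policy(o) for o in observations}
    # 2. Regional level: the coordinator refines proposals using retrieved context.
    retrieved = memory.retrieve("merge conflict deceleration")
    coordinated = regional_llm_coordinator(proposals, retrieved)
    # 3. Global level: score the joint action (would drive optimization/training).
    print("global reward:", global_reward(coordinated))
    return coordinated


if __name__ == "__main__":
    mem = ScenarioMemory(entries=["merge conflict: first vehicle yields"])
    obs = [AgentObservation("car_a", speed=12.0, headway=8.0),
           AgentObservation("car_b", speed=11.0, headway=9.0)]
    print(cascade_step(obs, mem))
```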