Generative Artificial Intelligence (AI) tools and Large Language Models (LLMs) have been integrated into all stages of the Software Development Life Cycle (SDLC), influencing requirements gathering and analysis, design and planning, development, testing, deployment, maintenance and support, and documentation. Meanwhile, software performance testing is an essential type of software testing that ensures stable software behavior under various scenarios. This paper aims to identify the most common approaches to integrating Generative AI across the stages of software performance testing and related software testing activities, as well as the main challenges and limitations faced by researchers. The research followed the systematic literature review methodology to analyze and synthesize existing studies on the application of Generative AI in software performance testing. Within the review, eight papers published between 2024 and 2025, indexed in research literature databases including ScienceDirect, SpringerLink, IEEE Xplore, and Google Scholar, were selected and analyzed. The results reveal that the application of Generative AI is concentrated mainly in functional testing, specifically in test case generation, with limited adoption in test scenario generation and in capturing non-functional requirements. Key challenges identified include the inconsistency of generated output and LLM hallucinations. The findings indicate a significant research gap in applying Generative AI to the software performance testing process.