How do prompt injection attacks affect the safety and security of large language models (LLMs)? Discuss the potential risks these attacks pose to AI systems and user data. Explain various defense mechanisms that can be implemented to mitigate these risks, including examples of different types of prompt injection attacks and their potential impacts. Additionally, evaluate the effectiveness and limitations of these defense strategies, providing practical insights and considerations for their implementation.