Developing Security Architecture for AI Agents in Real-World Software Development

Project ID: 2526Eng1001
Research Mentor: Professor Chen Ho
Contact Person: Professor Chen Ho

Abstract:

AI agents are widely adopted in software development and have significantly enhanced productivity and efficiency. Millions of developers rely on agentic AI programming tools — such as GitHub Copilot, Cursor, Windsurf, Trae, and various LLM-based coding assistants — to generate code snippets, complete functions, and even implement entire features. However, the security implications of these agents remain poorly understood, and their potential for intelligent security testing has not been explored.

This research develops a security architecture for AI agents in real-world software development, covering secure code generation, agent security protocols, and automated security testing. It aims to answer the following research questions: Is the code generated by AI agents secure? Can we use AI agents to test the security of software systems? Students will have the opportunity to learn real-world software development, work with AI agents, and develop a security architecture for AI agents.
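As a minimal illustration of the first research question, consider the kind of vulnerability that secure code generation and automated security testing would need to catch. The snippet below is hypothetical (the table, function names, and payload are invented for illustration): it contrasts a SQL-injection-prone query pattern that coding assistants sometimes emit with its parameterized equivalent.

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable pattern: string interpolation builds the SQL,
    # so a crafted username can alter the query (SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver binds the input safely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"                      # classic injection payload
leaked = find_user_insecure(conn, payload)   # matches every row
safe = find_user_secure(conn, payload)       # matches no row
print(len(leaked), len(safe))
```

An automated security-testing agent of the kind this project envisions could flag the first pattern and propose the second as a fix.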

Skills and experience required for the project:

– Solid background in computer science

– Competence in at least one programming language, such as Python, Rust, Java, C/C++, or JavaScript

– Passion for conducting influential research in AI and software security

– Sustained time commitment
