
GPTSecurity is a community that brings together cutting-edge academic research and hands-on experience in the security applications of Generative Pre-trained Transformers (GPT), AI-Generated Content (AIGC), and Large Language Models (LLM). Here you can find the latest research papers, blog posts, practical tools, and preset prompts for GPT/AIGC/LLM. To make the past week's contributions easier to follow, they are summarized below.

Security Papers

Industry News

Security LLMs Enter an Explosive Growth Phase! Google Cloud Has Integrated Them Across Its Full Security Product Line | RSAC 2023

https://mp.weixin.qq.com/s/5Aywrqk7B6YCiLRbojNCuQ

Software Supply Chain Security

1) Papers

Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models

https://arxiv.org/pdf/2212.14834.pdf

SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques

https://dl.acm.org/doi/abs/10.1145/3549035.3561184

LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluation

https://arxiv.org/pdf/2303.09384.pdf

DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection

https://arxiv.org/pdf/2304.00409.pdf

2) Blogs

How I Used GPT to Automatically Generate Nuclei POCs

https://mp.weixin.qq.com/s/Z8cTUItmbwuWbRTAU_Y3pg

Security of ChatGPT Itself

1) Papers

GPT-4 Technical Report

https://arxiv.org/abs/2303.08774 

Ignore Previous Prompt: Attack Techniques For Language Models 

https://arxiv.org/abs/2211.09527 

More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models

https://arxiv.org/abs/2302.12173

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks 

https://arxiv.org/abs/2302.05733 

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

https://arxiv.org/abs/2009.11462

Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models

https://arxiv.org/abs/2102.02503

Taxonomy of Risks posed by Language Models

https://dl.acm.org/doi/10.1145/3531146.3533088 

Survey of Hallucination in Natural Language Generation

https://arxiv.org/abs/2202.03629 

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned

https://arxiv.org/abs/2209.07858 

Pop Quiz! Can a Large Language Model Help With Reverse Engineering?

https://arxiv.org/abs/2202.01142

Evaluating Large Language Models Trained on Code

https://arxiv.org/abs/2107.03374 

Is GitHub's Copilot as Bad as Humans at Introducing Vulnerabilities in Code? 

https://arxiv.org/abs/2204.04741

Using Large Language Models to Enhance Programming Error Messages

https://arxiv.org/abs/2210.11630 

Controlling Large Language Models to Generate Secure and Vulnerable Code

https://arxiv.org/abs/2302.05319 

Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models 

https://arxiv.org/abs/2302.04012

SecurityEval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques 

https://dl.acm.org/doi/abs/10.1145/3549035.3561184 

Assessing the quality of GitHub copilot’s code generation 

https://dl.acm.org/doi/abs/10.1145/3558489.3559072 

Can we generate shellcodes via natural language? An empirical study 

https://link.springer.com/article/10.1007/s10515-022-00331-3

2) Blogs

Using ChatGPT to Generate an Encoder and a Matching WebShell

https://mp.weixin.qq.com/s/I9IhkZZ3YrxblWIxWMXAWA 

Using ChatGPT to Generate Phishing Emails and Phishing Websites

https://www.richardosgood.com/posts/using-openai-chat-for-phi... 

Chatting Our Way Into Creating a Polymorphic Malware 

https://www.cyberark.com/resources/threat-research-blog/chatt... 

Hacking Humans with AI as a Service 

https://media.defcon.org/DEF%20CON%2029/DEF%20CON%2029%20pres... 

Jailbreaking ChatGPT by Building a Virtual Machine Inside It

https://www.engraved.blog/building-a-virtual-machine-inside/ 

ChatGPT can boost your Threat Modeling skills 

https://infosecwriteups.com/chatgpt-can-boost-your-threat-mod...

Expert Insights! Analysis of an In-the-Wild 0-day Prompt Injection Vulnerability in the LangChain Framework

https://mp.weixin.qq.com/s/wFJ8TPBiS74RzjeNk7lRsw

Security Risks in LLMs: Prompt Injection in VirusTotal Code Insight as a Case Study

https://mp.weixin.qq.com/s/U2yPGOmzlvlF6WeNd7B7ww

Threat Detection

ChatGPT-powered threat analysis: using ChatGPT to check every npm and PyPI package for security issues, including information exfiltration, SQL injection vulnerabilities, credential leaks, privilege escalation, backdoors, malicious installs, prompt poisoning, and other threats

https://socket.dev/blog/introducing-socket-ai-chatgpt-powered...
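
As a rough illustration of what such an LLM-based package scan can look like, here is a minimal sketch assuming the OpenAI Python client (openai<1.0). The prompt wording, model choice, and threat list are illustrative assumptions drawn from the description above; nothing here reflects Socket's actual implementation.

```python
# Illustrative sketch of an LLM-based package security scan
# (an assumption for illustration, not Socket's implementation).
# Requires `pip install "openai<1.0"` and OPENAI_API_KEY in the environment.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Threat categories taken from the description above.
THREATS = (
    "information exfiltration, SQL injection vulnerabilities, credential "
    "leaks, privilege escalation, backdoors, malicious installs, and "
    "prompt poisoning"
)

def scan_package_source(source_code: str) -> str:
    """Ask the model to flag suspicious behavior in package source code."""
    prompt = (
        "You are a security analyst reviewing an npm/PyPI package. "
        f"Check the following source code for: {THREATS}. "
        "For each finding, cite the relevant code and give a severity.\n\n"
        f"{source_code}"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep triage output deterministic
    )
    return resp["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # e.g. the install script of a package under review
    with open("setup.py") as f:
        print(scan_package_source(f.read()))
```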

Security Tools

PentestGPT

Overview: An automated penetration-testing tool based on GPT. It runs interactively on top of ChatGPT, guiding the penetration tester on both overall progress and specific operations. PentestGPT can solve easy-to-intermediate HackTheBox machines and other CTF challenges. Compared with Auto-GPT, it differs in the following ways (a sketch of its session-interaction pattern follows the link below):

1) Auto-GPT can be useful in security testing, but it is not optimized for security-related tasks.

2) PentestGPT is purpose-built for penetration testing through custom session interactions.

3) At present, PentestGPT does not rely on search engines.

Link: https://github.com/GreyDGL/PentestGPT
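
A minimal sketch of the custom session-interaction pattern described above, again assuming the OpenAI Python client (openai<1.0): the tester pastes raw tool output, and the model keeps the whole engagement in the message history and suggests the next concrete step. The loop and system prompt are illustrative assumptions, not PentestGPT's actual code.

```python
# Illustrative sketch of the session-interaction pattern described above
# (an assumption, not PentestGPT's actual code).
# Requires `pip install "openai<1.0"` and OPENAI_API_KEY in the environment.
import openai

# Hypothetical system prompt capturing the tool's described behavior:
# track overall progress and suggest the next concrete operation.
SYSTEM_PROMPT = (
    "You are a penetration-testing assistant. Track the overall progress "
    "of the engagement, and for each tool output the tester pastes, "
    "suggest the next concrete step."
)

def pentest_session() -> None:
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        tool_output = input("paste tool output (or 'quit'): ")
        if tool_output.strip().lower() == "quit":
            break
        history.append({"role": "user", "content": tool_output})
        resp = openai.ChatCompletion.create(model="gpt-4", messages=history)
        answer = resp["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})
        print(answer)

if __name__ == "__main__":
    pentest_session()
```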

Audit GPT

Overview: An intelligent tool that fine-tunes GPT for blockchain security-audit tasks.

Link: https://github.com/fuzzland/audit_gpt

IATelligence

Overview: IATelligence is a Python script that extracts the Import Address Table (IAT) of a PE file and queries GPT for more information about the imported APIs and related ATT&CK techniques.

Link: https://github.com/fr0gger/IATelligence
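
A minimal sketch of the two steps that description implies, assuming `pip install pefile "openai<1.0"`: walk the PE import table with pefile, then ask GPT to explain the imported APIs and map them to ATT&CK techniques. This is illustrative only; see the repository for the real script.

```python
# Sketch of the IAT-extraction + GPT-lookup flow described above
# (illustrative; see the IATelligence repository for the real script).
# Requires `pip install pefile "openai<1.0"` and OPENAI_API_KEY set.
import pefile
import openai

def extract_iat(path: str) -> list[str]:
    """Return 'dll!function' entries from the PE import address table."""
    pe = pefile.PE(path)
    entries = []
    for dll in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        for imp in dll.imports:
            if imp.name:  # skip ordinal-only imports
                entries.append(f"{dll.dll.decode()}!{imp.name.decode()}")
    return entries

def explain_iat(entries: list[str]) -> str:
    """Ask GPT to describe each imported API and related ATT&CK techniques."""
    prompt = (
        "For each imported Windows API below, briefly explain what it does "
        "and name any related MITRE ATT&CK techniques:\n" + "\n".join(entries)
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(explain_iat(extract_iat("sample.exe")))  # hypothetical sample path
```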

