Mohammad Ghafari (@ghafarii)'s Twitter Profile
Mohammad Ghafari

@ghafarii

TU Clausthal

ID: 614075300

Joined: 21-06-2012 06:27:28

283 Tweets

152 Followers

174 Following

You can use NASRA to define your #static_program_analysis task in natural language (1) and generate the corresponding #CodeQL query (2). For more details, check the preprint of our new IEEE SANER tool paper: arxiv.org/abs/2301.04862 CC GitHub Security Lab

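The panels "(1)" and "(2)" refer to an attached image that isn't reproduced here. As a rough sketch of the idea only, natural-language task in, CodeQL query out, here is a minimal pipeline with a stubbed model call; the prompt template and function names are illustrative assumptions, not NASRA's actual implementation.

```python
# Minimal sketch of a NASRA-style pipeline: a natural-language analysis task
# goes in, a CodeQL query comes out. The prompt template and the stubbed
# ask_llm() call are illustrative assumptions, not NASRA's internals.

PROMPT_TEMPLATE = """You are a static-analysis assistant.
Translate the following task into a CodeQL query for Java.

Task: {task}

Return only the query."""

def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., an OpenAI or local-model API);
    # a canned answer keeps the sketch runnable end to end.
    return (
        "import java\n\n"
        "from MethodAccess call\n"
        "where call.getMethod().hasName(\"exec\")\n"
        "select call, \"Possible command execution sink.\""
    )

def generate_codeql_query(task: str) -> str:
    return ask_llm(PROMPT_TEMPLATE.format(task=task))

if __name__ == "__main__":
    print(generate_codeql_query("Find all calls to methods named exec"))
```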

OT security is alarming: we reported 53 #CVEs, some stemming from minor errors yet having major impact. Read more in our IEEE/ACM SafeThings 2023 paper "Insecure by Design in the Backbone of Critical Infrastructure". Preprint: arxiv.org/abs/2303.12340
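As a generic illustration of "insecure by design", and not a reproduction of any of the reported CVEs, the sketch below shows the kind of minor omission that has major impact in OT systems: a command handler that changes process state without any authentication or authorization check.

```python
# Generic "insecure by design" illustration (not one of the reported CVEs):
# the handler trusts every request and performs a privileged write with no
# authentication or authorization whatsoever.

REGISTERS = {"pump_setpoint": 50}

def handle_request(command: str, register: str, value: int = 0) -> str:
    # Flaw: no check of who sent the request before changing process state.
    if command == "WRITE":
        REGISTERS[register] = value
        return f"OK {register}={value}"
    if command == "READ":
        return f"{register}={REGISTERS.get(register)}"
    return "ERR unknown command"

# Any peer that can reach the device can rewrite a setpoint:
print(handle_request("WRITE", "pump_setpoint", 999))  # accepted unconditionally
```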

What if hints from security analysis tools guided LLMs in writing secure code? We've developed #LLMSecGuard, an open-source framework that aims to facilitate secure code generation. SECUTE 2024 (co-located with EASE). Preprint: arxiv.org/abs/2405.01103
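LLMSecGuard itself is open source; the sketch below only conveys the general shape of such an analyzer-in-the-loop cycle. The toy pattern-based analyzer and the stubbed model call are my assumptions, not the framework's API.

```python
# Sketch of an analyzer-in-the-loop cycle in the spirit of LLMSecGuard:
# generate code, run a security analyzer, feed findings back into the prompt.
# The toy analyzer and the stubbed ask_llm() are assumptions, not the real API.
import re

def toy_analyzer(code: str) -> list[str]:
    # Stand-in for a real engine such as CodeQL or Semgrep.
    findings = []
    if re.search(r"\beval\(", code):
        findings.append("Avoid eval(): it enables arbitrary code execution.")
    if "shell=True" in code:
        findings.append("Avoid shell=True in subprocess calls.")
    return findings

_round = {"n": 0}

def ask_llm(prompt: str) -> str:
    # Placeholder for a real model: the first answer is insecure on purpose,
    # the revised answer is not, so the loop is exercised once.
    _round["n"] += 1
    if _round["n"] == 1:
        return "result = eval(user_input)"
    return "import ast\nresult = ast.literal_eval(user_input)"

def secure_generate(task: str, max_rounds: int = 3) -> str:
    code = ask_llm(task)
    for _ in range(max_rounds):
        findings = toy_analyzer(code)
        if not findings:
            break
        code = ask_llm(f"{task}\nFix these issues:\n" + "\n".join(findings))
    return code

print(secure_generate("Parse a Python literal from user_input"))
```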

I must pay a fee of over €1,100 for the publication of our article in an #Elsevier journal. The cost would have been even higher if #DFG had funded the project. This is a #Germany-wide DEAL contract, which I don't understand!

Today at EASE 2025, Mark Harman gave a nice keynote introducing their work on "Assured LLM-based SE" at Meta. If you liked the idea, we've also developed a similar framework, "LLMSecGuard", to enhance code security. dl.acm.org/doi/10.1145/36…

Can ChatGPT replace your static cryptography-misuse detector? We investigated this in recent work accepted at the industry track of ESEM 2024. Preprint: arxiv.org/abs/2409.06561
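For context on what such detectors flag, here are two classic misuses, a hardcoded key and ECB mode, next to a safer authenticated-encryption variant. This is my own generic example (using the `cryptography` package), not code from the paper.

```python
# Two classic misuses that crypto-misuse detectors flag, next to a safer
# variant. Generic illustration only (requires the `cryptography` package);
# ECB input must be a multiple of the 16-byte block size here.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

HARDCODED_KEY = b"0123456789abcdef"  # misuse 1: secret key embedded in source

def encrypt_insecure(plaintext: bytes) -> bytes:
    # Misuse 2: ECB mode, identical plaintext blocks give identical ciphertext.
    enc = Cipher(algorithms.AES(HARDCODED_KEY), modes.ECB()).encryptor()
    return enc.update(plaintext) + enc.finalize()

def encrypt_secure(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    # Fresh random key, fresh nonce, authenticated encryption (AES-GCM).
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    return key, nonce, AESGCM(key).encrypt(nonce, plaintext, None)

print(encrypt_insecure(b"block one same!!" * 2).hex())  # repeated pattern
print(encrypt_secure(b"any length works")[2].hex())
```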

Deep learning models can be poisoned. We've developed an open-source solution to protect code models against poisoning attacks. Check out the preprint of our recent Journal of Systems and Software article: arxiv.org/abs/2502.13459
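The tweet doesn't describe the defence itself, so purely as background, the sketch below shows one well-known family of defences, spectral-signature filtering, which is not necessarily the article's method: samples whose representations project strongly onto the top singular direction of the centred representation matrix are treated as suspicious and dropped.

```python
# Generic spectral-signatures-style filter for poisoned training samples
# (a well-known defence, not necessarily the article's method).
import numpy as np

def filter_suspicious(reps: np.ndarray, drop_fraction: float = 0.05) -> np.ndarray:
    """reps: (n_samples, dim) model representations. Returns kept indices."""
    centred = reps - reps.mean(axis=0)
    # Top right-singular vector of the centred representation matrix.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    scores = np.abs(centred @ vt[0])            # outlier score per sample
    n_drop = int(len(reps) * drop_fraction)
    keep = np.argsort(scores)[: len(reps) - n_drop]
    return np.sort(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(95, 32))
    poisoned = rng.normal(size=(5, 32)) + 4.0   # shifted cluster as a stand-in
    reps = np.vstack([clean, poisoned])
    kept = filter_suspicious(reps)
    print("dropped:", sorted(set(range(100)) - set(kept)))  # ideally 95..99
```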

We benchmarked GPT models for secure code generation and distilled our lessons learned into an open-source tool. Details are available in the preprint of our FORGE 2025 paper: arxiv.org/abs/2502.06039
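As a toy version of such a benchmark harness (the prompt variants, the pattern list, and the stubbed model call are all my assumptions, not the paper's tool), the sketch below prompts a model with two variants and tallies insecure patterns in each answer; it also makes the prompt-sensitivity point in the next tweet concrete.

```python
# Toy benchmark harness in the spirit of the paper: query a model with
# different prompt variants, scan each answer for insecure patterns, and
# tally the results. A real harness would use a proper scanner such as
# Bandit or CodeQL instead of this regex check.
import re

PROMPTS = {
    "plain": "Write a Python function that runs a user-supplied shell command.",
    "security-aware": "Write a Python function that runs a user-supplied shell "
                      "command. Avoid injection vulnerabilities.",
}

INSECURE_PATTERNS = {
    "shell=True": r"shell\s*=\s*True",
    "eval": r"\beval\(",
    "os.system": r"\bos\.system\(",
}

def ask_model(prompt: str) -> str:
    # Placeholder for a real API call; canned answers keep the sketch runnable.
    if "Avoid injection" in prompt:
        return "import subprocess, shlex\ndef run(c): return subprocess.run(shlex.split(c))"
    return "import os\ndef run(c): return os.system(c)"

def score(code: str) -> list[str]:
    return [name for name, pat in INSECURE_PATTERNS.items() if re.search(pat, code)]

for variant, prompt in PROMPTS.items():
    findings = score(ask_model(prompt))
    print(f"{variant}: {len(findings)} insecure pattern(s) {findings}")
```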

On the first day of ICSE week, Marc and Fabio presented our FORGE work on benchmarking GPT models for secure code generation. In short, the choice of prompts has a significant impact on the results.

Our recent article is available in the Computers & Security journal. I strongly recommend it if you're interested in #Metaverse technology, the #security and #privacy risks it raises, and the open gaps that warrant investigation. sciencedirect.com/science/articl…