Published In

LMPL 2025: Proceedings of the 1st ACM SIGPLAN International Workshop on Language Models and Programming Languages, Co-Located with ICFP/SPLASH 2025

Document Type

Article

Publication Date

10-9-2025

Abstract

Large language models (LLMs) can potentially help with verification using proof assistants by automating proofs. However, it is unclear how effective LLMs are at this task. In this paper, we perform a case study based on two mature Rocq projects: the hs-to-coq tool and Verdi. We evaluate the effectiveness of LLMs in generating proofs through both quantitative and qualitative analysis. Our study finds that: (1) external dependencies and context in the same source file can significantly help proof generation; (2) LLMs perform well on small proofs but can also generate large ones; (3) LLMs perform differently on different verification projects; and (4) LLMs can generate concise and clever proofs and apply classical techniques to new definitions, but they can also make odd mistakes.
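
As a concrete illustration (not drawn from the paper's benchmark), the lemma below shows the kind of small, self-contained Rocq proof obligation that an LLM prover would be given: the statement is supplied, and the model must produce the proof script between Proof and Qed. The lemma name app_nil_r' and the proof are hypothetical, standard-library-style examples.

    (* Hypothetical example: appending the empty list on the right is a no-op. *)
    Lemma app_nil_r' : forall (A : Type) (l : list A), l ++ nil = l.
    Proof.
      induction l; simpl.
      - reflexivity.              (* base case: nil ++ nil = nil *)
      - rewrite IHl. reflexivity. (* inductive case: use the hypothesis IHl *)
    Qed.

A proof this short exercises only classical techniques (induction, simplification, rewriting); the study's harder targets come from real project code in hs-to-coq and Verdi, where proofs may depend on many external definitions and lemmas.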

Rights

Copyright (c) 2025 The Authors

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

10.1145/3759425.3763391

Persistent Identifier

https://archives.pdx.edu/ds/psu/44315
