AI tools

From Legoktm

I originally wrote this on my work's internal wiki to share how I use AI/LLM tools, but I think it's broadly applicable and am publishing it publicly.

Windsurf plugin (formerly Codeium)

I have Windsurf installed as a VS Code plugin. It provides pretty smart autocomplete, mostly saving typing time. The better your code is (documentation, type hints, etc.), the better results you get; as a result, I find the Rust support is better than the Python support.

I primarily like it when I'm writing tests, because I can write the function name and a comment explaining what I want it to do, and about half the time it can write the entire test case for me, inferring from other test cases how to set it up and what to modify. I'm not sure what the knowledge cutoff is and I've seen it hallucinate a few times, but in general it can infer from the file context. Hallucinations are also pretty easy to spot, because the compiler/linter will flag them pretty quickly.
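To make that concrete, here's a hypothetical, self-contained Rust sketch (not real project code): I type the #[test] line, the test's name, and the comment, and the body underneath is the sort of thing the autocomplete fills in by copying the shape of neighbouring tests.

// Hypothetical example: normalize_hostname and both tests are made up for
// illustration. I write the test names and the comments; the assertions
// are what the autocomplete typically suggests.
fn normalize_hostname(raw: &str) -> Result<String, String> {
    let trimmed = raw.trim().to_lowercase();
    if trimmed.is_empty() {
        return Err("hostname must not be empty".to_string());
    }
    Ok(trimmed)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_normalize_hostname_rejects_empty_input() {
        // Whitespace-only input should be treated as empty and rejected
        let err = normalize_hostname("   ").unwrap_err();
        assert!(err.contains("empty"));
    }

    #[test]
    fn test_normalize_hostname_lowercases_and_trims() {
        // " App.Example.ORG " should come back trimmed and lowercased
        assert_eq!(
            normalize_hostname(" App.Example.ORG ").unwrap(),
            "app.example.org"
        );
    }
}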

There is also functionality where you select a block of code, send it a prompt, and it'll refactor it for you. I tend to only use this on smaller code blocks where I want a distinct thing done and know exactly what it is, but it involves a lot of syntax refactoring/typing that is easier and faster for the LLM to do.
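As a made-up example of the kind of change I mean (none of this is real project code): I know exactly what I want, say "turn this tuple into a struct with named fields", but doing it by hand is mostly syntax churn across every call site.

// Before: a bare tuple that callers have to unpack by position.
fn describe_before(entry: (String, u64, bool)) -> String {
    format!("{} ({} bytes, signed: {})", entry.0, entry.1, entry.2)
}

// After: the same data as a struct with named fields, with the function
// and its call sites mechanically updated to match.
struct LogEntry {
    name: String,
    size: u64,
    signed: bool,
}

fn describe_after(entry: &LogEntry) -> String {
    format!("{} ({} bytes, signed: {})", entry.name, entry.size, entry.signed)
}

fn main() {
    println!("{}", describe_before(("vmlinuz".to_string(), 13_631_488, true)));
    let entry = LogEntry {
        name: "vmlinuz".to_string(),
        size: 13_631_488,
        signed: true,
    };
    println!("{}", describe_after(&entry));
}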

I'm currently using the free plan, and opted out of "telemetry", so they've disabled the interactive chat functionality. Their organization-level plan apparently provides better privacy protections, but it's been a while since I looked into it.

As far as privacy goes, it kind of sucks that I have to send all the code I'm working on to a remote service, but all our code is public anyways, so it's not a huge problem; it just gives me the ick. For security fixes that are not publicly disclosed, I work on them in a separate folder (e.g. private-securedrop-client) where I have the Windsurf extension disabled.

Claude

I primarily use Claude's web chatbot interface for two purposes: interactive debugging/Q&A sessions and full script writing.

Here's an example of an interactive debugging session I had with Claude: https://gist.github.com/legoktm/f88d49e6411af5e72d4bd99d9bdcef55. It took place while I was working on a bug in the focal->noble migration script: https://github.com/freedomofpress/securedrop/pull/7406#issuecomment-2596712579. I didn't use any of the code samples it gave me; rather, what it told me about .process_group(0), combined with what I already knew about systemd's KillMode=process, confirmed that I was on the right track.
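For context, here's a minimal sketch of what that call does, assuming the .process_group(0) in question is Rust's std::os::unix::process::CommandExt::process_group (this is not the actual migration-script code):

// Minimal sketch, not the real migration script. Passing 0 makes the
// spawned child the leader of a brand-new process group, so a signal
// sent to the parent's process group no longer reaches it.
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> std::io::Result<()> {
    let child = Command::new("sleep")
        .arg("60")
        .process_group(0) // 0 = give the child its own process group
        .spawn()?;
    println!("spawned pid {} in its own process group", child.id());
    Ok(())
}

systemd's KillMode=process is the service-side counterpart: when a unit stops, systemd only kills the unit's main process rather than everything in its cgroup.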

Here's an example of full script writing: https://gist.github.com/legoktm/17ecb35c00b1b94c7863ea13630193b0, where I needed a script written from scratch to post the kernel test farm results to GitHub. In this case it needed a lot of manual fixing, because GitHub's antivirus/antispam protections would block our log files unless you uploaded them in a very specific way.
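The actual script is in the gist above; just to illustrate the shape of the core operation, here's a hypothetical minimal sketch of posting results as an issue/PR comment via GitHub's REST API. The repo, issue number, token variable, and comment body are placeholders, and it assumes reqwest (blocking + json features) and serde_json as dependencies.

// Hypothetical sketch only; the real script is the one linked above.
// Assumes Cargo.toml has:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder values; a real script would fill these in from the test run.
    let token = env::var("GITHUB_TOKEN")?;
    let url = "https://api.github.com/repos/OWNER/REPO/issues/123/comments";

    let resp = reqwest::blocking::Client::new()
        .post(url)
        .header("Authorization", format!("Bearer {token}"))
        .header("Accept", "application/vnd.github+json")
        .header("User-Agent", "kernel-test-results-poster")
        .json(&serde_json::json!({ "body": "Kernel test farm results: ..." }))
        .send()?;

    println!("GitHub API responded with {}", resp.status());
    Ok(())
}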

I'm still using the free plan, which is starting to get more annoying because they downgrade me to the less powerful Haiku model during peak traffic, which happens to be during my working hours :( I'm pretty ready to switch to a paid tier but haven't gotten around to it yet. Also I once tried to pay Anthropic to get API access and they said I wasn't a real person or whatever and rejected my credit card...

See also

I've been heavily influenced by and recommend Simon Willison's "Here’s how I use LLMs to help me write code".