Over the course of the year, it’s become increasingly clear that writing code is one of the things LLMs are most capable of. If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English. It’s still astonishing to me how effective they are though.

One of the great weaknesses of LLMs is their tendency to hallucinate: to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code. If an LLM hallucinates a method that doesn’t exist, the code should be useless.

Except... you can run generated code to see if it’s correct. And with patterns like ChatGPT Code Interpreter the LLM can execute the code itself, process the error message, then rewrite it and keep trying until it works! (There’s a rough sketch of that loop below.)

So hallucination is a much lesser problem for code generation than for anything else. If only we had the equivalent of Code Interpreter for fact-checking natural language!

How should we feel about this as software engineers? On the one hand, this feels like a threat: who needs a programmer if ChatGPT can write code for you? On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns; we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can.
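To make that generate-run-retry loop concrete, here’s a minimal sketch in Python. This is an illustration, not Code Interpreter’s actual implementation: `generate_code` is a stand-in for whichever model API you call, and the attempt limit and prompt wording are arbitrary choices.

```python
import subprocess
import sys
import tempfile

MAX_ATTEMPTS = 5


def generate_code(prompt: str) -> str:
    """Placeholder: call whichever LLM API you use and return Python source."""
    raise NotImplementedError


def write_and_run(code: str) -> subprocess.CompletedProcess:
    """Write the generated code to a temp file and run it in a subprocess.

    In real use this should happen in a sandbox; LLM-generated code is
    untrusted code.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )


def run_until_it_works(task: str) -> str:
    """Generate code, execute it, feed any error back to the model, retry."""
    prompt = task
    for _ in range(MAX_ATTEMPTS):
        code = generate_code(prompt)
        try:
            result = write_and_run(code)
        except subprocess.TimeoutExpired:
            prompt = f"{task}\n\nYour previous attempt timed out. Try again."
            continue
        if result.returncode == 0:
            return code  # it ran without errors; still worth a human review
        # Hand the traceback back to the model for the next attempt
        prompt = (
            f"{task}\n\nYour previous attempt failed with this error:\n"
            f"{result.stderr}\nFix the code and return the full program."
        )
    raise RuntimeError(f"No working code after {MAX_ATTEMPTS} attempts")
```

The important property is the feedback channel: the error message goes straight back into the next prompt, which is why a hallucinated method tends to get caught and corrected rather than silently shipped.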