Prompt engineering made sense in the early days of diffusion models, when you knew the training set and could weight certain styles more heavily. But I’m finding it less and less important with LLMs. Instead, dumping the system prompt and knowing what tools the model has available is way more useful.
@buherator @singe Absolutely, and that kind of messing around is short-lived given how quickly some of this is changing. Whereas knowing to instruct the model to use Python for number problems, rather than letting an LLM hallucinate an answer, is pretty important.
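A minimal sketch of the Python-over-mental-math point (the specific numbers here are mine, purely illustrative): instead of asking the model to multiply in its head, have it emit and execute code, which returns an exact answer.

```python
# Illustrative sketch: delegate arithmetic to Python rather than
# trusting the model's token-by-token "mental math" -- exact
# multiplication of large numbers is where LLMs most often
# hallucinate digits.
a, b = 28657, 46368  # arbitrary example inputs, not from the thread
result = a * b       # computed deterministically, no hallucination
print(result)        # prints 1328767776
```

The same idea generalizes to any numeric question: the prompt-level skill that survives model churn is routing the computation to a tool, not wordsmithing the question.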