Conversation

Prompt engineering made sense in the early days of diffusion models, when you knew the training set and could weight certain styles more heavily. But I’m finding it less and less important with LLMs. Instead, dumping the system prompt and knowing what tools the model has available is far more useful.

@singe I also wonder, given the lack of information about the target system and its non-deterministic nature, at what point "engineering" turns into "messing around based on gut feeling".

@buherator @singe Absolutely, and that messing around is short-lived given how quickly some of this is changing. Whereas knowing to instruct it to use Python for number problems, rather than letting an LLM hallucinate an answer, is pretty important.
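A minimal sketch of the idea above: a system prompt that tells the model to delegate arithmetic to a Python tool, plus a small safe evaluator that tool could run. The prompt wording and the `python_eval` name are hypothetical, not any particular API.

```python
import ast
import operator

# Hypothetical system prompt: tell the model to hand arithmetic to a
# tool instead of computing (and possibly hallucinating) the answer.
SYSTEM_PROMPT = (
    "You are a helpful assistant. For any arithmetic question, do not "
    "compute the answer yourself: call the python_eval tool with the "
    "expression and report its result."
)

# Operators the evaluator will accept.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def python_eval(expression: str) -> float:
    """Evaluate a pure arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval"))
```

The point is that `python_eval("12345 * 6789")` returns the exact product, while an LLM asked the same question from memory may not.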
