Is “prompt engineering” even a thing? I vacillate between taking it rather seriously and chuckling at it as a stale insider joke. But the term has stuck, and so have discussions of its merit (for example: this HN thread).
I tried to come up with some kind of useful definition. This is the best I managed:
As a Prompt Engineer:
You have a lot of mileage with LLMs and AI systems in general (people who are exceptionally good at this report spending several hours a day working with AIs).
You have already mastered a large number of useful tasks that you can consistently and reliably complete using AI.
You continuously invent and discover novel ways to use AI to accomplish useful tasks.
You can use LLMs and other forms of AI programmatically, combining LLM calls as part of a larger and more complex process (ideally by writing code, though some people do this well using no-code tools or even just careful manual execution); a minimal sketch of such a chain follows this list.
You can methodically examine and evaluate AI tasks, for example by developing evals, running them, and analysing the results programmatically (see the toy harness after this list).
You keep up to date and consistently adapt to new developments: new capabilities, models, libraries, and so on.
You can come up with new ideas, or translate existing requirements, for tasks and processes that AI can accomplish better or more efficiently.
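To make the "programmatic" bullet concrete, here is a minimal sketch of chaining two LLM calls into one pipeline. Everything here is a placeholder I made up for illustration: `call_llm` is not a real library's API, and the summarize-then-translate task is just an example of feeding one call's output into the next.

```python
# Minimal sketch of composing LLM calls into a larger process.
# `call_llm` is a hypothetical stand-in: wire it to whatever provider
# SDK you actually use (OpenAI, Anthropic, a local model, ...).

def call_llm(prompt: str) -> str:
    """Stand-in for a single completion call to your model of choice."""
    raise NotImplementedError("connect this to your LLM provider")


def summarize_then_translate(document: str, language: str = "French") -> str:
    # First call: condense the raw input.
    summary = call_llm(
        f"Summarize the following document in three bullet points:\n\n{document}"
    )
    # Second call: the first call's output becomes the next prompt's input.
    return call_llm(
        f"Translate these bullet points into {language}:\n\n{summary}"
    )
```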
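And in the same spirit, a toy harness for the "evals" bullet: run a fixed task over a handful of labelled cases and compute a pass rate you can track over time. The cases and the crude substring check are illustrative assumptions; a real eval would use a proper scoring method and a much larger suite.

```python
# Toy eval harness: run labelled cases through the model and score the
# outputs programmatically. Reuses the hypothetical `call_llm` above.

CASES = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "the capital of France", "expected": "Paris"},
]


def run_eval() -> float:
    passed = 0
    for case in CASES:
        answer = call_llm(
            f"Answer with a single word or number: {case['input']}"
        )
        # Crude check: does the expected answer appear in the output?
        if case["expected"].lower() in answer.lower():
            passed += 1
    return passed / len(CASES)  # pass rate across the suite
```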
That’s a start, but we really need a better definition. Any suggestions?