| 27 | == One-shot versus dialog applications |
| 28 | Evidently, the example shown is of one-shot type. A user converts data of type A into type B, and then perhaps does so again with a new set of data, and again. But each operation starts with the same set of training data and it is not expected that the system learns from subsequent user input. |
| 29 | |
| 30 | But why not? We could implement a one-shot problem as a dialog, so that the application can learn and improve its reasoning. But this would require some sort of feedback mechanism, either by the user or by another AI instance that validates the previous result and returns feedback if it needs to be improved. Without a feedback mechanism, dialog-type reasoning should rather be avoided, because the subsequent user data will change the AI's behaviour, but not necessarily to the better. |
| 31 | Tokens, the unit that is charged for when using the API, are quite cheap (fractions of a Cent for a typical transform), so it's not worth risking data quality to reduce training data upload volume. But with consistent and specific feedback, this can be a way to increase quality. |
| 32 | |
| 33 | This approach, however, reveals a weakness of the GPT API: it does not have a sophisticated session management with precise control on how context are stored. There is a way to pick up a dialog context (which is required for dialog applications like chatbots), but there is no reliable way to influence the lifetime of a context, or even, to be informed that a context has expired. We haven't tested this very well yet, but a mechanism to cope with this could be to train the AI to return a certain string (like "OUTPUT") at the beginning of each response. If "OUTPUT" is missing, then this could show the AI has forgotten the training and silently set up a new context. |
| 34 | |
| 35 | If openAI granted me a wish, I'd say, I'd like to have an API method to create a context, and either I can give it a lifetime of two hours, or one month, or infinity. Another method would kill a context. The method to create a chat response can either be called without a context (then it returns one response and forgets the context after wards), or with a context, then it can access all that happened before. Nice to have: clone a context to train one with complex data and then create a clone for each request (and then kill the clone request). |
| 36 | |
| 37 | I am aware that this would cause cost in form of resources used by saved contexts, but openAI could charge for it and let the account owner decide what to keep and for how long. |
| 38 | |
| 39 | This would improve both dialog and one-shot applications: |
| 40 | * For one-shot applications, it would enable uploading very large training data once and forever reuse it without uploading it again. The clone function would keep the training data constant over time. |
| 41 | * Dialog applications could be saved, so that a website user who returns the next day could chat with the bot and continue where the dialog ended. Also, it would allow to keep specialized contexts with large training data in the back for a chatbot to ask when rather specific questions occur. |
| 42 | |
| 43 | For the time being, both is somehow less than it could be (even though I understand the conceptual reasons for the limitations. |
| 44 | |
| 56 | A conventional program will never by itself reveal how it works. To learn about how it works, one would have to reverse engineer it. Nor has a conventional program any form of (even simulated) understanding of its own functionality - it just executes it step by step. |
| 57 | |
| 58 | An AI based application was trained with rules, sample data and expected responses. The AI remembers these data, and could reveal it if the user asked. But the training data is the developer's intellectual property, just like any ordinary software source code would be. Therefore, the AI should be instructed to keep the training data secret. Also, it should be instructed never to accept commands after the initial training and treat everything thereafter as input data. It's a good idea to implement additional mechanisms in the encapsulating software to prevent such abuse, because the AI / rule based mechanism is surely not 100% reliable. |
| 59 | |
| 60 | == Clarity |
| 61 | While the term //NLP// and the capabilities to understand confusing text as in the screenshot above might lead to the impression that Prompt Engineering is just writing down what the machine should do in prose style, and while this actually works to some degree, one key to good results is: clarity. Training data should make a clear difference between rules, input data, output data, ratings, corrections and so on, as needed in the application. The expected output should be explained and shown by example. The rules should also define what the system should not do. |
| 62 | |
| 63 | After designing an initial set of rules, the rest is testing. If the AI produces an undesired result, the Prompt Engineer should test new rules that avoid the misbehaviour. |
| 64 | |
| 65 | == Summary |
| 66 | Prompt Engineering and the skill to programmatically use AI to perform certain steps in a more complex process that contains other AI steps, steps done by humans and others performed by conventional software, are just emerging and are still lacking a clear definition. Temptation is probably high to write "Prompt Engineer" on ones business card after having played with ChatGPT for half an afternoon. |
| 67 | |
| 68 | But while ChatGPT, due to its stunningly simple user interface, is accessible to young students who let AI write their homework, or practically to everyone, getting stable and high quality results from various sources without constant intervention is another level. Sometimes, minimal changes to the training data can make a rather big difference. Contradictions between rules and examples are a common mistake when editing and testing the training data, and they tend to bring the AI on thin ice. In some cases, the output can completely fail, because the AI is not prepared to something in the data, and it needs some testing to find out what it is and how to avoid it. |
| 69 | |
| 70 | I hope, this article gave you an impression that Prompt Engineering is more complex than typing something like "Hey, I got some text here, can you write it more clearly?" (even though this alone can yield surprisingly good results). Still, it's neither alchemy nor rocket science. It follows certain rules and best practices and can be learned. |