But why not? We could implement a one-shot problem as a dialog, so that the application can learn and improve its reasoning. But this would require some sort of feedback mechanism, either from the user or from another AI instance that validates the previous result and returns feedback if it needs to be improved. Without a feedback mechanism, dialog-type reasoning is better avoided, because the subsequent user data will change the AI's behaviour, but not necessarily for the better. The model could start producing wrong output and keep reinforcing it, because it is never corrected.
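A minimal sketch of such a feedback loop, assuming the official `openai` Python package (1.x) and an `OPENAI_API_KEY` in the environment; the model name, the prompts, and the "OK" sentinel are illustrative assumptions, not part of any real application:

```python
# One AI instance produces the result, a second instance validates it and
# returns feedback; the feedback is appended to the dialog for the next try.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name; substitute whatever you use


def ask(messages):
    """Send one chat request and return the text of the first choice."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


def transform_with_feedback(task, max_rounds=3):
    """Run the transform as a dialog, validated by a second AI instance."""
    dialog = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        result = ask(dialog)
        # The validator instance checks the previous result.
        verdict = ask([{
            "role": "user",
            "content": "Check this result for the task below. Reply 'OK' if it "
                       "is correct, otherwise explain what needs to be fixed.\n\n"
                       f"Task: {task}\n\nResult: {result}",
        }])
        if verdict.strip().upper().startswith("OK"):
            return result
        # Feed the criticism back into the dialog so the next attempt improves.
        dialog += [
            {"role": "assistant", "content": result},
            {"role": "user", "content": f"Please revise. Feedback: {verdict}"},
        ]
    return result  # give up after max_rounds and return the last attempt
```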
Tokens, the units that are billed when using the API, are quite cheap (fractions of a cent for a typical transform), so it's not worth risking data quality just to reduce the upload volume of training data. But with consistent and specific feedback, a dialog can be a way to increase quality.
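For a rough sense of the cost, a sketch using the `tiktoken` package; the price per token is a placeholder assumption, not a quoted rate:

```python
import tiktoken

PRICE_PER_1K_INPUT_TOKENS = 0.00015  # assumed example rate in USD

def estimate_cost(text: str, encoding_name: str = "cl100k_base") -> float:
    """Count the tokens in `text` and multiply by the assumed rate."""
    enc = tiktoken.get_encoding(encoding_name)
    n_tokens = len(enc.encode(text))
    return n_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

print(estimate_cost("The training data you would upload with every request..."))
```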
If OpenAI granted me a wish, I'd ask for an API method to create a context, with a lifetime I can choose: two hours, one month, or infinity. Another method would kill a context. The method that creates a chat response could be called either without a context (then it returns one response and forgets the temporary context afterwards) or with a context, in which case it can access everything that happened before. Nice to have: clone a context, so I can train one with complex data and then create a clone for each request (and kill the clone once the request is done).
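None of this exists in the real OpenAI API; as a purely hypothetical sketch of what such an interface might look like, here is a local mock that only stores message history (a real implementation would replace the stand-in reply with an actual chat completion call that passes the stored history as the message list):

```python
import copy
import uuid


class ContextStore:
    """Hypothetical server-side contexts with a lifetime, kill, and clone."""

    def __init__(self):
        self._contexts = {}  # context_id -> {"ttl": ..., "messages": [...]}

    def create_context(self, lifetime_seconds=None):
        """Create a context; lifetime_seconds=None means 'keep forever'."""
        context_id = str(uuid.uuid4())
        self._contexts[context_id] = {"ttl": lifetime_seconds, "messages": []}
        return context_id

    def kill_context(self, context_id):
        self._contexts.pop(context_id, None)

    def clone_context(self, context_id):
        """Copy a trained context so each request can use a disposable clone."""
        clone_id = str(uuid.uuid4())
        self._contexts[clone_id] = copy.deepcopy(self._contexts[context_id])
        return clone_id

    def chat(self, prompt, context_id=None):
        """Without a context the exchange is forgotten; with one it accumulates."""
        history = self._contexts[context_id]["messages"] if context_id else []
        history.append({"role": "user", "content": prompt})
        reply = f"(model reply given {len(history)} messages)"  # stand-in
        history.append({"role": "assistant", "content": reply})
        return reply
```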