AI middleware: from chat to integration
Lost in inference?
Since November 2022 (not that long ago, is it?) you have chatted with ChatGPT, read articles on AI with prophecies ranging from "we won't have to work anymore from next year on" to "we're doomed", perhaps rendered a few funny pictures with Stable Diffusion or Midjourney, and are now waiting for your local bakery or car mechanic to claim that their services are based on AI. Does that roughly describe your situation?
To get started with LLM functionality in a real business context, a few issues must be solved first:
- Accessing an LLM over its API and injecting a few-shot training is normally a job for a software developer, regardless of whether it is a cloud-based service such as those from OpenAI or Aleph Alpha, or a model that runs locally (see the sketch after this list).
- Creating the instructions that transform the input data as needed requires, above all, knowledge of the input and the desired output data. That is the profile of a power user or technical writer.
- Few-shot trainings are written in the specific language of the AI model and therefore actually require a combination of the developer and power user profiles.
- API keys for LLM services should not be widely distributed among users, to avoid abuse.
- Few-shot training cycles are cumbersome: writing cryptic JSON code, copying it elsewhere, running the software, and trying again.
- A service might work well, but what happens to the data uploaded to it, and which data must not be uploaded at all?
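To make the first point more concrete, here is a minimal sketch (not part of this project's code) of what "accessing an LLM over its API and injecting a few-shot training" looks like in practice. It uses Python with the `requests` library against the public OpenAI chat completions endpoint; the model name and the example data are assumptions chosen for illustration, and other providers such as Aleph Alpha or locally hosted models expose different APIs.

```python
# Minimal sketch: calling a chat-style LLM API with a few-shot prompt.
# Endpoint, model name and example data are illustrative assumptions;
# other providers or local models use different APIs and formats.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # keep the key out of the source code

# The "few-shot training" is simply a list of example exchanges
# that is sent along with every request.
messages = [
    {"role": "system",
     "content": "Convert the address given by the user into JSON "
                "with the keys street, zip and city."},
    {"role": "user", "content": "Musterstr. 1, 12345 Berlin"},        # example input
    {"role": "assistant",
     "content": '{"street": "Musterstr. 1", "zip": "12345", "city": "Berlin"}'},  # example output
    {"role": "user", "content": "Hauptstrasse 7, 80331 Muenchen"},    # the actual request
]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "gpt-3.5-turbo", "messages": messages, "temperature": 0},
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Even this small example shows the issues listed above: the request is hand-written JSON, the few-shot examples live inside the developer's code, and the API key has to be stored and protected somewhere.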