== What can I use it for?
To start with, you can run the web service on your local PC and use the client as a single-user LLM frontend. You can:
* Configure the AI models you want to use - if you have an account, you can use OpenAI and !AlephAlpha, and there are some open models with limited API access.
* Create few-shot trainings for your typical data-transform applications. You can then convert data using a predefined transform whenever you need it (as shown in the screenshots above).
* Compose multiple small functions into more complex data transforms. For example, you could first extract some metadata, including the language. Then, you could use a transform like the one shown above to extract the instructions, with examples in the language determined in the first step. Finally, you could convert the list of instructions to a DITA task topic. Calling the functions from your own software is as easy as sending an HTTP POST request in your favorite environment.
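Such a POST request can be sketched as below. The endpoint path and the JSON field names are assumptions for illustration, not the actual //dj//**AI** API - check the API documentation of your installation.

```python
import json
import urllib.request

# Hypothetical endpoint -- adjust host, port, and path to your installation.
DJAI_URL = "http://localhost:8080/api/transform"

def build_request(transform_name, input_text):
    """Build an HTTP POST request invoking a predefined transform.

    The payload layout ("transform", "input") is illustrative only.
    """
    payload = json.dumps({
        "transform": transform_name,
        "input": input_text,
    }).encode("utf-8")
    return urllib.request.Request(
        DJAI_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("html-list-to-dita-task", "<ol><li>Open the lid.</li></ol>")
# urllib.request.urlopen(req) would send it; here we only build the request.
```

The same request can of course be sent from any environment that can issue HTTP POST requests, e.g. `curl` or a CI script.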

Once you start writing software with our //dj//**AI** integration platform, you will probably want to expose the service to your colleagues on the network and run it on a server. Users on the network who have credentials can then use the configured AI models, but only with the predefined transforms. Only admin-level users may create, modify, or delete training data.

In the process described above, you could create a training for each language with examples in that language. Alternatively, your software could compose the training for the specific request and run it with the data - admin accounts can do that, and it allows combining structured data sources such as ontology databases, termbases, or data in a CCMS with AI reasoning.

In the example described above, the system could generate training data by searching for a similar DITA task in the CCMS, converting it to HTML, and using the resulting ordered list together with its mapping back to DITA as a training example. The larger !AlephAlpha models in particular are very good at producing high-quality results from only one or two good examples.
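A dynamically composed training of this kind might be assembled as in the sketch below. The JSON layout is an assumption made for illustration; the real //dj//**AI** training syntax may differ.

```python
import json

def compose_training(example_pairs, new_input):
    """Assemble a few-shot training request from retrieved examples.

    example_pairs: list of (source_html, target_dita) tuples, e.g. fetched
    from the CCMS and converted beforehand. Field names are illustrative.
    """
    return json.dumps({
        "examples": [
            {"input": src, "output": tgt} for src, tgt in example_pairs
        ],
        "input": new_input,
    })

# One retrieved example is often enough for the larger Luminous models.
training = compose_training(
    [("<ol><li>Press start.</li></ol>",
      "<task><taskbody><steps><step><cmd>Press start.</cmd>"
      "</step></steps></taskbody></task>")],
    "<ol><li>Open the lid.</li></ol>",
)
```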

== What happens with my data?
Many people equate LLMs with OpenAI and ChatGPT. While even GPT-3.5 yields very good results in transforms such as the ones explained above, there is the question of whether your data is used for training (you can opt out), but also where the servers are located and which laws apply. From an EU legal perspective, handing personal data to OpenAI is a no-go, and entrusting intellectual property to it may cause headaches.

On the other hand, the model is large and its reasoning is very good. Our experiments with open models, including Raven 14B and RedPajama 7B, showed that these models are very restricted compared to GPT-3.5 because they are much smaller. There are larger open models, like Bloom, but they require significant (and thus expensive) hardware.

We found that the German company !AlephAlpha and their AI model //Luminous// provide a solution for these issues. They offer servers based in Germany, and thus under EU law, and running on premises is also possible. The reasoning of at least the extended version of the model is very good, and our tests showed no notable drawbacks compared with OpenAI. It seems to be very good at learning patterns from sparse examples - and it needs to be, because its context length of 2k tokens is quite restricted: half of the 4k of basic GPT-3.5, while OpenAI offers models with up to 32k.
Still, some meaningful trainings worked with Luminous right away, and it can be used for commercial applications with sensitive data without restriction.

For the time being, //dj//**AI** supports GPT and !AlephAlpha, but support for other models is easily added by writing a plugin class that fulfills a simple interface. Apart from sending HTTP requests to the model, the main task is translating the abstract training syntax to the model's native syntax.
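Such a plugin could look roughly like the sketch below. The interface is not published here, so the class and method names are assumptions; the point is the division of labor between prompt translation and the HTTP call.

```python
from abc import ABC, abstractmethod

class ModelPlugin(ABC):
    """Illustrative plugin interface -- names and signatures are assumptions,
    not the actual djAI plugin API."""

    @abstractmethod
    def to_native_prompt(self, examples, new_input):
        """Translate the abstract training syntax to the model's prompt format."""

    @abstractmethod
    def complete(self, prompt):
        """Send the prompt to the model's HTTP API and return the completion."""

class PlainPairsPlugin(ModelPlugin):
    """Formats few-shot examples as plain input/output pairs, a common
    pattern for completion-style models."""

    def to_native_prompt(self, examples, new_input):
        parts = [f"Input: {src}\nOutput: {tgt}" for src, tgt in examples]
        parts.append(f"Input: {new_input}\nOutput:")
        return "\n\n".join(parts)

    def complete(self, prompt):
        raise NotImplementedError("would POST to the model's API endpoint")

plugin = PlainPairsPlugin()
prompt = plugin.to_native_prompt([("<ol>...</ol>", "<task>...</task>")],
                                 "<ol><li>Open the lid.</li></ol>")
```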