= djAI installation and configuration

== Preparation

//dj//**AI** is easy to install and use. The software consists of two components:

* A web service providing an API with various functions. The web service runs on Windows and Linux machines with .net 7 Core.
* A client that connects to the web service. The client runs on Windows with .net 7.

For single-user use of the **FREE** personal / test / development edition, the web service is typically started on the same machine as the client. However, it is technically possible to run the service on a separate server machine even for the **FREE** edition, and you are allowed to do so, as long as you obey the other usage limitations of the **FREE** license. In multi-user applications (for which you must buy a license in any case, even if they are non-commercial), running the service on a server machine is the common case, and you can choose between Windows and Linux servers running .net 7 Core.

* Download the software to a temporary folder from here: [https://my.hidrive.com/lnk/u8nIg6fD#file djAI service and client download]
* Unzip the zip package.
* Move the folder {{{djAI}}} to the server machine.
* Move the folder {{{djAIClient}}} to the client machine.

== djAI web service installation

=== Option 1: Installation of the web service on the same machine as the Windows client

* Install .net 7 Core.
* Move the folder {{{djAI}}} to //Documents// or another suitable folder where you have write permission.
* The service runs on port 5000 by default. If you want to change the port, edit the file {{{djAI\run_djAI.cmd}}} and change the port accordingly. Do not change the IP address in local installations:
{{{
dotnet djAI.dll --urls http://127.0.0.1:5000
}}}

=== Option 2: Installation of the web service on a server

* Install .net 7 Core.
* Move the folder {{{djAI}}} to a suitable folder.
* Edit the file {{{run_djAI.cmd}}} in the {{{djAI}}} folder and change the loopback IP {{{127.0.0.1}}} to the IP address you want the service to listen on. You can also change the port, if required:
{{{
dotnet djAI.dll --urls http://127.0.0.1:5000
}}}
* Configure the service to start on system reboot:
* **Linux:** Run {{{crontab -e}}} and add the following line:
{{{
@reboot /path/to/djAI/run_djAI.cmd
}}}
* **Windows:** Use //Scheduled tasks// to configure automatic start, for example via the command sketched below.
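If you prefer the command line to the //Scheduled tasks// GUI, the start task can also be registered with the Windows {{{schtasks}}} tool. This is only a sketch; the task name {{{djAI}}} and the path {{{C:\djAI\run_djAI.cmd}}} are examples that you must adapt to your installation:

{{{
rem Example only: register a task named "djAI" that runs the start script at system boot.
rem Adjust the path to where you placed the djAI folder; run from an elevated command prompt.
schtasks /Create /TN "djAI" /SC ONSTART /RU SYSTEM /TR "C:\djAI\run_djAI.cmd"
}}}

You can check the registered task afterwards with {{{schtasks /Query /TN "djAI"}}}.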
=== Configuration of the API secrets

Commercial AI models require user validation with credentials or an API secret connected to a user account, so that the model's operator can charge for using the model. //dj//**AI** hides these API secrets inside the web service, so neither users nor briefing authors know them. If you have a large group of users, this reduces the risk that some users take the secrets home and use the company's account for their own purposes. Instead, //dj//**AI** asks the users for its own API secrets, which can only be used in conjunction with the predefined briefings, and those briefings are normally of little use outside the applications they were designed for. In addition, since users only hold keys to an internal service, it is easier to restrict access to that service, for example to a VPN, and to track usage per API secret.

There are two different levels of API secrets: //user// and //briefing author// secrets. The license models differ in how many secrets of each type they allow.

User secrets may only use the functions necessary to work with the predefined briefings:

* List the briefing names.
* Use inference with one of the briefings.

Briefing authors, in addition, may:

* Create, modify or delete briefings.
* Obtain a list of available models.
* Use inference with a briefing other than the predefined ones, by passing the briefing to be used together with the input data. This is necessary for briefings created dynamically from terminology, ontology or other databases.

The **FREE** edition only allows one briefing author secret and no user secrets. That means you have exactly one user context with unlimited permissions. Apart from the fact that you are not licensed to run the **FREE** edition in a multi-user context, this alone would make multi-user use quite unsafe. The editions with commercial licenses have a defined maximum number of user and briefing author secrets. If you configure more secrets than allowed, the server will terminate with an error message.

API secrets are configured in the file {{{appsettings.json}}} in the {{{djAI}}} program folder. This is the file content as you download it:

{{{
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Secrets": {
    "User": [],
    "BriefingAuthor": [
      "7654321"
    ]
  },
  "License": ""
}
}}}

The secrets are configured in the fields {{{Secrets:User}}} and {{{Secrets:BriefingAuthor}}}. There is one briefing author secret preconfigured, "7654321". Please change this secret to a safe value that is hard to guess. To add more values to one of the secret types (for which you need a commercial license), add more API secrets in double quotes, separated by commas, as in this example:

{{{
{
  ...
  "Secrets": {
    "User": [
      "first_new_secret",
      "second_new_secret"
    ],
    "BriefingAuthor": [
      "7654321",
      "another_briefing_author_secret"
    ]
  },
  ...
}
}}}

The field {{{License}}} contains an empty string. If you buy a commercial license, you will obtain a license key from us, which is a long hexadecimal string. Paste the license key into this field to apply the license. The server will then adjust its API secret limits to the parameters of the license you bought.
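Putting it all together, a finished {{{appsettings.json}}} for a commercial edition combines your own secrets with the license key. The following is only a sketch: all values are placeholders, so use your own hard-to-guess secrets and the actual hexadecimal key you received from us:

{{{
{
  ...
  "Secrets": {
    "User": [
      "long_random_user_secret_1",
      "long_random_user_secret_2"
    ],
    "BriefingAuthor": [
      "long_random_briefing_author_secret"
    ]
  },
  "License": "0123456789abcdef..."
}
}}}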
=== Configuration of available AI models

//dj//**AI** is not, and does not contain, a Large Language Model (LLM) by itself. Instead, it's a flexible interface to standardize and simplify AI briefing and use. The actual AI functionality is provided by external services. Currently, //dj//**AI** supports these models:

* openAI: GPT3.5 turbo. We are still on the waitlist for GPT4 and will integrate it as soon as we can access it.
* AlephAlpha: base, extended and supreme levels.

Model support is done by simple plugins that must implement an interface with very few methods. We'll implement more models as needed (integration of RedPajama running on a local graphics card is on the way). We'll provide the interface definition with some documentation, so anyone can write their own plugins.

In //dj//**AI**, AI models are configured in the file {{{Models.json}}} in the main program folder. The default file looks like this:

{{{
{
  "Models": {
    "GPT35": {
      "Description": "GPT3.5 - 4k tokens",
      "Assembly": "djAIModels",
      "Type": "AiModels.Gpt35",
      "ApiKey": "",
      "Url": "https://api.openai.com/v1/chat/completions",
      "ChatSummaryTransform": "TBD",
      "ChatMaxMessagesChars": 1000,
      "ChatMaxPrefixChars": 1000,
      "Custom": {
        "Temperature": 0.9
      }
    },
    "AlephAlphaCompletion": {
      "Description": "AlephAlpha Completion Base",
      "Assembly": "djAIModels",
      "Type": "AiModels.AlephAlphaCompletion",
      "ApiKey": "",
      "Url": "https://api.aleph-alpha.com/complete",
      "ChatSummaryTransform": "AlephAlpha - Summarize chat",
      "ChatMaxMessagesChars": 1000,
      "ChatMaxPrefixChars": 1000,
      "Custom": {
        "Temperature": 0.9,
        "ModelVariant": "luminous-base"
      }
    },
    "AlephAlphaCompletionExtended": {
      "Description": "AlephAlpha Completion Extended",
      "Assembly": "djAIModels",
      "Type": "AiModels.AlephAlphaCompletion",
      "ApiKey": "",
      "Url": "https://api.aleph-alpha.com/complete",
      "ChatSummaryTransform": "AlephAlpha - Summarize chat",
      "ChatMaxMessagesChars": 1000,
      "ChatMaxPrefixChars": 1000,
      "Custom": {
        "Temperature": 0.9,
        "ModelVariant": "luminous-extended"
      }
    },
    "AlephAlphaCompletionSupreme": {
      "Description": "AlephAlpha Completion Supreme",
      "Assembly": "djAIModels",
      "Type": "AiModels.AlephAlphaCompletion",
      "ApiKey": "",
      "Url": "https://api.aleph-alpha.com/complete",
      "ChatSummaryTransform": "AlephAlpha - Summarize chat",
      "ChatMaxMessagesChars": 1000,
      "ChatMaxPrefixChars": 1000,
      "Custom": {
        "Temperature": 0.9,
        "ModelVariant": "luminous-supreme"
      }
    }
  }
}
}}}
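Each model entry contains an {{{ApiKey}}} field. A minimal sketch, assuming this field holds the API key of your account with the corresponding external provider (the values below are placeholders, not real keys, and the {{{...}}} lines stand for the unchanged fields shown above):

{{{
{
  "Models": {
    "GPT35": {
      ...
      "ApiKey": "sk-your-openai-api-key",
      ...
    },
    "AlephAlphaCompletion": {
      ...
      "ApiKey": "your-aleph-alpha-api-token",
      ...
    },
    ...
  }
}
}}}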