Changes between Version 4 and Version 5 of Public/WhitePaperAiBriefing


Timestamp: Jun 2, 2023, 12:38:02 PM
Author: Boris Horner

It's a small fraction of the data normally used for training, but if the operation is not too complex in its details, it works quite well and fits conveniently into the 4k token limit of many models. It works thanks to the much better text understanding and reasoning capabilities of today's models: a few examples are sufficient to learn the relevant patterns.
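
To make this concrete, here is a minimal sketch of what such a briefing could look like for a chat-style model such as ChatGPT 3.5, using the OpenAI chat-message structure; the operation, prompt and example texts are purely illustrative:
{{{#!python
# Hypothetical briefing for an operation such as "structure confusing text":
# a short system prompt plus a handful of input/output example pairs -- far
# less data than a training run, and small enough to fit into a 4k token context.
briefing = [
    {"role": "system",
     "content": "You restructure confusing text into short, clearly ordered sentences."},
    {"role": "user",
     "content": "The pump, which before it is started must be primed, otherwise fails."},
    {"role": "assistant",
     "content": "Prime the pump before starting it. If it is not primed, it fails."},
    # ... a few more example pairs; the actual input text is appended at call time
]
}}}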

To make this easily usable in all types of applications, we've written a web-based API called **djAI** to provide such a system with briefing capabilities.

[[Image(Public/ImageContainer:djAI.jpg, align=center, width=50%, margin-bottom=20, link=)]]

**djAI** can hold plugins for different models and organize briefing data for them, both for chat and one-shot applications. From the calling application, only the input text and the name of an operation (like "structure confusing text") are passed, and **djAI** returns the response. The calling application does not need to know which AI model and briefing data are behind it, nor can users even see the briefing.
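
As an illustration, a call from a Python application might look roughly like the following sketch; the endpoint URL, parameter names and response field are assumptions, not the actual **djAI** interface:
{{{#!python
import requests

# Hypothetical djAI endpoint -- URL, payload fields and the response field
# are placeholders for illustration only.
DJAI_URL = "https://example.com/djai/api/run"

def run_operation(operation, text):
    """Send an input text and an operation name to djAI and return the model's answer."""
    response = requests.post(
        DJAI_URL,
        json={"operation": operation, "input": text},
        headers={"Authorization": "Bearer LOCAL_SECRET"},  # local secret, not the model provider's key
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["output"]

print(run_operation("structure confusing text",
                    "The pump, which before it is started must be primed, otherwise fails."))
}}}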

Since the calls are so simple to integrate (anything able to send an HTTP request can use it), it's also easy to compose single LLM actions into more complex operations. We have a demo based on this technology that summarizes the above text and then creates a valid DITA map out of it.
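
A sketch of such a composition, reusing the hypothetical run_operation helper from the previous example; the operation names are assumptions, the demo's real ones may differ:
{{{#!python
# Compose two single LLM actions into a more complex operation:
# first summarize the source text, then turn the summary into a DITA map.
with open("whitepaper.txt", encoding="utf-8") as f:
    source_text = f.read()

summary = run_operation("summarize text", source_text)
dita_map = run_operation("create DITA map", summary)
print(dita_map)
}}}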

== Some things we're still working on
The software is a great help already, but there are still some gaps to fill:
* Chat is not yet implemented. It basically works like single-step operations, but it must maintain the history in a user context and re-inject it into the AI (see the sketch after this list).
* Currently, briefing data must be provided in the model's native format, for example, a certain JSON structure for ChatGPT 3.5. We will provide an automatic conversion from DITA (with certain conventions) into the model's native format. DITA can then be edited with an XML editor or a specific briefing data editor.
* Optional integration with [wiki:Public/StartPageCinnamon Cinnamon] or other CMS / DMS systems would enable running the briefing development from there, including versioning and lifecycles on the briefings.
* A permission and logging system to restrict use of certain functions and keep track of API cost.
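
For the chat case, a rough sketch of what keeping and re-injecting the history per user could look like; none of this is implemented yet, and briefing_for and call_model are placeholders for the briefing lookup and the plugin-specific model call:
{{{#!python
# Hypothetical per-user chat context: the briefing is sent first, then the
# accumulated history, then the new user message.
chat_histories = {}  # user id -> list of {"role": ..., "content": ...} messages

def chat(user_id, operation, text):
    history = chat_histories.setdefault(user_id, [])
    messages = briefing_for(operation) + history + [{"role": "user", "content": text}]
    answer = call_model(messages)  # placeholder for the plugin-specific model call
    history.append({"role": "user", "content": text})
    history.append({"role": "assistant", "content": answer})
    return answer
}}}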

== Conclusion
The approach has many advantages:
* It hides briefings from users who just want a result.
* It hides software details from training designers.
* It hides the AI models' API secrets from the users; they use local secrets instead.
* It standardizes briefings; users just pass data.
* It is flexible: writing a briefing takes a small fraction of the time and cost of training.
* It can use various models, each in the area where it performs best.

I also believe that this approach will become more and more important and remove the need for training in most cases:
* Today's leading LLMs have impressive logical and linguistic capabilities (including software code) and are "smart enough" to learn new tasks from a small briefing.
* Upcoming models will have better reasoning and make more out of the same briefing data.
* Upcoming models allow larger token lengths and thus more briefing data (for example, GPT 3.5 was limited to 4k tokens, whereas GPT 4 has an option of 32k tokens, or eight times the text length).
* Software solutions like **djAI** allow breaking down tasks into small steps, making them more accessible to the briefing approach.

What do you think? [wiki:Public/GenContact Please let me know your thoughts].