Information and exchanges on the Scol technology
You can change the interface language once registered
It's now time for the 1.96 Beta version.
This version will be updated as new features become available.
With 1.96 you can make a full conversational 3D agent using 3D characters generated from the web and an AI model
https://www.openspace3d.com/downloads/OpenSpace3d_BETA
Changes:
#Import models
- Import complete character models from https://avaturn.me/ or any other source, with or without separated meshes.
- Import and merge character animations from files included in the same character folder, from https://www.mixamo.com or other sources
- add an option to remove root bone translations if needed.
- add cinematic groups in the editor, so when a character with several meshes shares the same animation, you can play the full animation directly through the cinematic animation
#AI
- Use the Responses API to communicate with models
- Improve tool calls
- add several actions and events to manage an agent dynamically
#Speech recognition
- manage the input gain automatically
- allow several instances of grammar recognition, so you can use one as a wake-up word
#Speech TTS
- allow custom events in the text, like [happy 0.5] or [dance], to change the character state or trigger animations, for example
- allow several instances
- make visemes match the timing better.
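To illustrate the inline event tags mentioned above: a minimal sketch of how tags like [happy 0.5] or [dance] could be split out of a TTS string before synthesis. This is not the plugIT's actual parser; the tag format is inferred from the two examples in the changelog.

```python
import re

# Matches tags like "[happy 0.5]" (name + optional numeric weight) or "[dance]".
TAG = re.compile(r"\[([a-zA-Z]+)(?:\s+([0-9.]+))?\]")

def split_tts_events(text):
    """Return (spoken_text, events) where events is a list of (name, weight)."""
    events = [(m.group(1), float(m.group(2)) if m.group(2) else None)
              for m in TAG.finditer(text)]
    spoken = TAG.sub("", text)                  # remove the tags from the spoken text
    spoken = re.sub(r"\s{2,}", " ", spoken).strip()  # collapse leftover double spaces
    return spoken, events

spoken, events = split_tts_events("Hello [happy 0.5] there [dance]!")
print(spoken)   # the text to actually speak
print(events)   # [('happy', 0.5), ('dance', None)]
```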
#PlugITs:
- add a blendshape plugIT that manages facial animations from standard character shapes (random blinking, facial animations, emotion states)
#VM
- optimize the loops to limit CPU usage, especially on Android
- Refactor SO3engine methods for model loading
- add basic regex functions: regexReplace "fun [S S S] S", regexSearch "fun [S S] [S r1]", regexMatch "fun [S S] I", regexSplit "fun [S S] [S r1]"
- optimize curl implementation
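For readers unfamiliar with the new regex functions, here are Python equivalents of what the Scol signatures above suggest. The argument order and exact matching semantics are assumptions; check the Scol API reference for the real behavior.

```python
import re

def regexReplace(s, pattern, repl):
    """Replace all matches of pattern in s (assumed order: subject, pattern, replacement)."""
    return re.sub(pattern, repl, s)

def regexSearch(s, pattern):
    """Return the list of matched substrings ("[S r1]" suggests a string list)."""
    return [m.group(0) for m in re.finditer(pattern, s)]

def regexMatch(s, pattern):
    """Return 1 if the pattern matches anywhere, else 0 ("I" suggests an integer)."""
    return 1 if re.search(pattern, s) else 0

def regexSplit(s, pattern):
    """Split s on the pattern into a list of strings."""
    return re.split(pattern, s)

print(regexReplace("beta 1.96", r"\d+\.\d+", "X"))  # beta X
print(regexSearch("v1.96 and v2.0", r"v\d+\.\d+"))  # ['v1.96', 'v2.0']
```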
Let's download & try the latest version!
Hi...
For AI, can we use the free Gemini API key for RAG? What about the embedding model?
Try looking here: https://ai.google.dev/gemini-api/docs/pricing#standard
Gemini 3.1 Flash-Lite
Usage limits (rate limits):
- Requests (RPM): 15 requests per minute
- Daily quota (RPD): 1,500 requests per day
- Tokens (TPM): 1,000,000 tokens per minute
- Input capacity: maximum 1,048,576 tokens (can process long documents or code)
Last edited by shahbiz8 (Yesterday 03:42:21)
Hello, the chatGPT plugIT now only uses the OpenAI Responses API. Most AI models provide such an API. https://developers.openai.com/api/refer … s/overview
I don't know about Google; I use ollama or other local servers for local AI models.
The embedding model is used for RAG on the initial text file content provided in the plugIT editor.
The model's memory is saved by the model itself during the conversation.
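For context, the OpenAI Responses API mentioned in this post can be called with plain HTTP. This is a minimal sketch using only the Python standard library, following OpenAI's published reference; it is not the plugIT's actual code, and the model name is just an example.

```python
import json
import os
import urllib.request

def build_responses_request(api_key, model, prompt):
    """Build a POST request for the OpenAI Responses API endpoint."""
    payload = {"model": model, "input": prompt}
    return urllib.request.Request(
        "https://api.openai.com/v1/responses",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )

# Only send the request if a key is configured.
key = os.environ.get("OPENAI_API_KEY")
if key:
    with urllib.request.urlopen(build_responses_request(key, "gpt-4o-mini", "Hello")) as r:
        print(json.load(r))
```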
Better to include Gemini AI as well. Gemini is on the rise.
Give it a try using https://ai.google.dev/gemini-api/docs/openai
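The linked page describes Gemini's OpenAI-compatible endpoint, so in principle the same request shape works by swapping the base URL. A minimal sketch with the Python standard library; the base URL and model name are taken from Google's docs at the time of writing, so verify them against the current page.

```python
import json
import os
import urllib.request

# Gemini's OpenAI-compatible base URL (from the linked documentation).
GEMINI_OPENAI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai"

def build_chat_request(api_key, model, user_text):
    """Build an OpenAI-style chat-completions request aimed at the Gemini endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return urllib.request.Request(
        GEMINI_OPENAI_BASE + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,  # the Gemini key is passed as a Bearer token here
        },
    )

# Only send the request if a key is configured.
key = os.environ.get("GEMINI_API_KEY")
if key:
    with urllib.request.urlopen(build_chat_request(key, "gemini-2.0-flash", "Hello")) as r:
        print(json.load(r)["choices"][0]["message"]["content"])
```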
Cool. Hope it works perfectly.
I've tried, but I get no response.