Information and exchanges on the Scol technology
Hello,
Yes, you can play 3D side-by-side video in VR.
You can also apply a 360° video to a 3D sphere and view it in VR.
Using the video plugIT, apply the video to the sphere's material. You can start from the 360 templates (right-click in the Groups tab and choose a template).
There are no events on the video timeline, so if you want to add on-screen elements, you could use a timer.
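Since the timeline exposes no events, the timer approach boils down to polling the elapsed playback time and firing each cue once. A minimal Python sketch of the idea (the cue timestamps and labels are hypothetical; in OpenSpace3D you would wire a timer plugIT to your interface links instead):

```python
# Minimal sketch of timer-driven cues for a video with no timeline events.
# The cue timestamps and labels below are made-up examples.

cues = [(5.0, "show_caption"), (12.0, "show_hotspot")]

def due_cues(elapsed, fired):
    """Return cue labels whose timestamp has passed and that were not fired yet."""
    out = []
    for t, label in cues:
        if elapsed >= t and label not in fired:
            fired.add(label)
            out.append(label)
    return out

fired = set()
print(due_cues(6.0, fired))   # ['show_caption']
print(due_cues(13.0, fired))  # ['show_hotspot']; the first cue does not repeat
```

The `fired` set is what makes each cue one-shot, mirroring how a timer tick would trigger a link only once.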
It's now time for the 1.96 Beta version.
This version will be updated as new features become available.
With 1.96 you can build a fully conversational 3D agent using 3D characters generated on the web and an AI model.
https://www.openspace3d.com/downloads/OpenSpace3d_BETA
Changes :
#Import models
- Import complete character models from https://avaturn.me/ or any other source, with or without separated meshes.
- Import and merge character animations from files included in the same character folder, from https://www.mixamo.com or another source.
- Add an option to remove root bone translations if needed.
- Add cinematic groups in the editor, so when a character with several meshes has the same animation, you can use the cinematic animation directly to play the full animation.
#AI
- Use the Responses API to communicate with models
- Improve tool calls
- Add several actions and events to manage an agent dynamically
#Speech reco
- Manage input gain automatically
- Allow several instances for grammar recognition, so you can use it as a wake-up word
#Speech TTS
- Allow custom events in text, like [happy 0.5] [dance], to change the character's state or animations, for example
- Allow several instances
- Make visemes match the timing better.
#PlugITs:
- Add a blendshape plugIT that manages facial animations from standard character shapes (random blinking, facial animations, emotion states)
#VM
- Optimize the loops to limit CPU burn, especially on Android
- Refactor SO3Engine methods on model load
- Add basic regex functions: regexReplace "fun [S S S] S", regexSearch "fun [S S] [S r1]", regexMatch "fun [S S] I", regexSplit "fun [S S] [S r1]"
- Optimize the curl implementation
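The inline TTS events listed above ([happy 0.5], [dance]) can be thought of as tags stripped from the text before synthesis, with each tag emitted as an event. A hedged Python sketch of such a parser; the tag syntax is inferred from the two examples in the changelog, so the actual Scol implementation may differ:

```python
import re

# Tags look like [name] or [name value], e.g. [happy 0.5] or [dance].
TAG = re.compile(r"\[(\w+)(?:\s+([0-9.]+))?\]")

def split_text_and_events(text):
    """Return the plain text to synthesize and the list of (tag, value) events."""
    events = []
    def grab(match):
        name, value = match.group(1), match.group(2)
        events.append((name, float(value) if value else None))
        return ""  # remove the tag from the spoken text
    plain = TAG.sub(grab, text)
    return " ".join(plain.split()), events

text, events = split_text_and_events("Hello [happy 0.5] nice to meet you [dance]")
print(text)    # Hello nice to meet you
print(events)  # [('happy', 0.5), ('dance', None)]
```

In a real pipeline each event would also carry the character offset where the tag sat, so the state change can be synchronized with the speech timing.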
Hello,
could you share an example in pkos export ?
OK, thanks. Strange ^^ it should also work.
OK, thank you. Strange also, because the standalone version works the same as the installed version. Maybe the issue is in your user folder name.
Hello what voice / language do you use ?
On Windows, go to the sound settings and check the chosen device for audio output/input.
Test the audio output with a simple sound plugIT.
Check that your OpenSpace3D parent folder (if you use the portable version) doesn't have any special characters or a too-long path.
Yes, very old integrated graphics cards like the Intel GMA support DirectX better.
But this is history ^^
OpenGL is more widely supported. I just tried to debug this issue again and got no info or code exception; maybe it's an Ogre3D limitation with DirectX 11 and multiple viewports. So I'd recommend exporting in OpenGL, since all graphics cards now have very strong support for it.
Why do you need DirectX?
This is a known issue with DirectX; it should work correctly using OpenGL.
I can't reproduce the issue.
Are you using the installed version of OS3D or the portable version?
Is rendering set to OpenGL or DirectX?
Are your graphics drivers up to date?
What is your graphics card?
Can you make a pkos export so I can check that the issue is not in your models?
Hello, can you give a scene example showing your issue ?
On which platform are you exporting ?
Hello,
1.95 is finally out!
It's mostly focused on AI and ergonomics.
Try making your own 3D AI agent combined with speech recognition / TTS: give the AI model tools to adapt the 3D model's animations, change the scene environment, move an object...
You can easily test local AI models using a local server like ollama (https://ollama.com/download/windows) and start using AI models for free in OpenSpace3D.
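Before wiring a local model into OpenSpace3D, it can help to check that the ollama server answers on its HTTP API. A minimal Python sketch; it assumes ollama is running on its default port 11434 and that a model such as qwen2.5:0.5b has already been pulled, and only builds the request body rather than sending it:

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

def build_request(model, prompt):
    """Build the JSON body that ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_request("qwen2.5:0.5b", "Say hello in one word.")
print(body)

# To actually send it (requires a running ollama server):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

With `"stream": False` the server returns one JSON object instead of a stream of chunks, which is easier for a quick smoke test.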
1.95.0 - 12/18/2025
PlugITs:
- Speech recognition: allow downloading language models
- Chat GPT: add memory, a default knowledge text file, and default tools (web search, web fetch, get time/date, memory on demand). Refactor the whole plugIT code.
- Custom interface: add a chat component, add font settings for text input.
- Update the chat GUI to use the new chat component as well
Editor:
- V3DUI: add the chat component, fix margins, correct priority order
- Tools: add curl functions to read data and headers correctly, add functions to clean up HTML
- Keep the Android export password during the session
- Add a menu on imported groups to allow reloading/refreshing from the original xos file
- Add an option, used with the "Alt" key, to stick an object's bounding box to the nearest object when moving in a direction.
- Add a scene rendering parameter to set a default IBL DDS image to illuminate PBS materials
- New school classroom elements, rooms and objects in the models library.
Core / Scol:
- VM: add the _ExtractzipArchive, _ListzipArchiveSubDirs and _ListzipArchiveFiles functions
- SO3Engine: add a function to set the default IBL texture used by PBS materials, correct the physics hinge joint, refactor shadows and make them a bit better.
- Android: add a 2D API function to get the soft keyboard height and visibility, support Android API 28 and 16 KB paging
- Curl API: make sure we can cancel a request properly.
- Update the OpenXR SDK, correct the aim position on hand/controller switch.
I will investigate this for future releases. From what I saw, each platform (Meta Quest or Pico) uses a specific SDK for this, outside the common OpenXR setup.
Maybe it will be added in the next OpenXR SDK.
Hello,
This is the first beta of version 1.95.
This version will be updated as new features become available. You'll be able to test it soon and share your feedback with us.
https://www.openspace3d.com/downloads/OpenSpace3d_BETA
Beta 3 is ready!
Changes :
- The custom interface now allows setting the font of an input control
- A new "chat" control with an SMS-bubble style is available in the custom interface plugIT
- The chat GUI plugIT now uses this chat control; more options allow using it directly with an AI agent, for example.
- The chat control can be customized using the theme editor.
- A "snap" to object is now available when moving an object slowly in the 3D view using the "Alt" key: if the axis meets an object within a small distance, the object is stuck to the surface, matching the bounding boxes of the objects.
- A default IBL cube map (DDS format) can be set in the scene rendering settings to apply to all PBS materials.
- Add functions to get the Android soft keyboard state and height
- Apply the Android soft keyboard height as an offset for UI fields
- Update the minimum Android version to API 28 (Android 9) and the NDK to 27.3
- Manage support for the 16 KB page size required by Android: https://developer.android.com/guide/pra … izes?hl=en
- Upgrade the ffmpeg version from 4.2 to 7.1
- New assets library for a school classroom, along with an /asset/sounds library
- Rewrite the LiSPSM shadow shaders
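The "snap" behavior described in the list above can be reduced to a bounding-box test per axis: if the moving box's leading face comes within a small distance of another box's facing side, translate it so the faces touch. A simplified one-axis Python sketch; the threshold value is arbitrary and the real editor works on the objects' full 3D bounding boxes:

```python
SNAP_DISTANCE = 0.05  # arbitrary snap threshold, in scene units

def snap_1d(moving_min, moving_max, target_min, moving_dir):
    """Snap a moving interval [moving_min, moving_max] against a target face.

    When moving in the +axis direction, if our max face is within
    SNAP_DISTANCE of the target's min face, stick them together.
    Returns the offset to apply to the moving object (0.0 if no snap).
    """
    if moving_dir > 0:
        gap = target_min - moving_max
        if 0 <= gap <= SNAP_DISTANCE:
            return gap
    return 0.0

# A box spanning [0, 1] moving toward a box whose face is at 1.03: it snaps.
print(snap_1d(0.0, 1.0, 1.03, +1))  # offset close to 0.03
# A gap of 0.5 is larger than the threshold: no snap.
print(snap_1d(0.0, 1.0, 1.5, +1))   # 0.0
```

The negative-direction case is symmetric (compare the moving min face against the target's max face).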
Hello, unfortunately there is no way to use marker detection on the Meta Quest. The camera stream is not accessible because of Meta's restrictions. It seems that the latest PicoXR SDK allows detecting and tracking markers on the Pico 4, but I haven't dug into this yet.
Hello, yes, a few months ago you could still access old Scol sites like planetis3d on my server using Scol Voyager x32. But since the server change, it's not possible anymore.
On demand, I could try to set up an online service when I have some time in September.
Hello,
The new Openspace3D release is here!
https://www.openspace3d.com/en/openspace3d-1-94/
The most significant changes are:
PlugITs:
- The OpenXR plugIT manages touch interfaces better, with hand gestures or controllers
- The FPS controller plugIT allows attaching the camera to a parent object
- The Chat GPT plugIT now allows the use of tool functions, so the AI agent can trigger OS3D links or retrieve information from the app, along with the ability to use embedding models with a text document.
- The custom interface adds more controls and advanced events, plus automatic 2D-to-VR conversion
- The random output plugIT now never sends the same result twice
- Add model downloads in the speech recognition plugIT editor
Editor:
- Add features to the theme editor: padding, multiple fonts, and sounds on elements
- Add fit, fill and cover bitmap modes for theme and interface elements.
- Update V3DUI for better UI and VR management; add element padding, enable interface sounds and input cursors
- Add an accessibility template for VR controlling a wheelchair
- Links editor: add tabs to show only links from the source or destination module
- Add export scripts for Apple OSX to build and sign the package on a Mac
- The Windows export now computes an .ico file from the given icon picture
- Update the default theme
- Updated documentation
Core / Scol:
- Android: update to API 35
- Linux / Android: correct UTF-8 conversion
- OSX export for x86_64 and arm64
- Use the Ogre3D GL3+ renderer and SDL2 on OSX
- Add a USM.ini option "developer_mode yes" to enable VM call stack info on error without needing the special RELEASE_DEVELOPER build
- Correct some bugs
- Update some dependencies
A simple custom chat interface template can be made, so users can use the template as a starting point.
To train a Malay model with Vosk, you can check https://github.com/alphacep/vosk-api/tr … r/training
For the release coming next week, or at the end of this week, I will add some small missing features to the custom interface to allow creating a simple chat interface with it (adding "add text" alongside "set text content").
I will also add the possibility to add languages to speech recognition from https://alphacephei.com/vosk/models, maybe the same way it works for adding languages to the Speech plugIT.
Yes, every user input is sent to the embedding model, and the result is used to search for the corresponding context in the database. Then the user input and the context are sent to the AI agent.
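The retrieval step can be sketched as a cosine-similarity search over precomputed chunk embeddings. A toy Python version; the vectors here are made up for illustration, whereas in practice they would come from an embedding model such as nomic-embed-text:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_context(query_vec, database):
    """Return the knowledge chunk whose embedding is closest to the query."""
    return max(database, key=lambda item: cosine(query_vec, item[1]))[0]

# Toy database: (chunk text, fake 3-d embedding).
db = [
    ("OpenSpace3D supports OpenXR headsets.", [0.9, 0.1, 0.0]),
    ("The cafeteria opens at noon.", [0.0, 0.2, 0.9]),
]
query = [0.8, 0.2, 0.1]  # pretend embedding of "what headsets are supported?"
context = best_context(query, db)
print(context)  # the OpenXR chunk is the nearest
# The prompt sent to the agent then combines `context` with the user input.
```

Real systems store many chunks and often return the top-k matches, but the principle is the same: nearest embedding wins.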
You're right I forgot to add them in the beta ^^
https://www.openspace3d.com/rsc/samples/generic_ai.zip
Unzip it in assets\templates.
Tested with ollama (https://ollama.com/library) using nomic-embed-text and qwen2.5:0.5b; it's really good for giving a context to the AI agent and making it stay in that context.
Hey, OK, so it's available in Beta 3 ^^
In the plugIT parameters, choose the embedding model and the text file that contains the knowledge data.
The weights database is generated in openspace3d/tmp/ai/.
The database is erased when you change the text file in the plugIT editor and will be regenerated on app startup.
This way you can distribute the app with the already-computed database.
Then ask the AI agent about something in the file.
Upload / document on OpenAI uses the Assistants API, not the Chat Completions API, which is the one used by the plugIT and more widely supported by other AI tools.
https://platform.openai.com/docs/api-reference/uploads
What are your needs exactly for this ?
Hello,
I did indeed reply to you by email. What exactly is the problem?
If CloudCompare can export to PLY, it should work in OS3D.
Is there an error message during the import?