We are struggling to understand the different coordinate systems used for the Kinect plugIT's head position and left-hand position, and we have not got a clue. A simple test scene is built as:
kinect device inst --2-- kinect user inst --1-- setvector inst --1-- debug console inst
The connection between the kinect user inst and the setvector inst is: scene kinect user head -> scene set vector inst setvector.
The connection between the setvector inst and the debug console inst is: set vector inst.out -> debug console inst print.
If we switch the head and the left hand, the two coordinate systems do not match. Can we get a tip on which coordinate systems are used in these two cases?
BTW, the head and torso positions do match.
Many thanks for any tips.
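For reference, this is how I understand the two spaces that OpenNI 1.x itself exposes; a minimal sketch (setup of the context and generators is omitted, and whether the plugIT uses these exact calls is my assumption):

```cpp
// Minimal OpenNI 1.x sketch: read two joints and print the two coordinate
// spaces the API exposes. Assumes user and depth generators are already
// initialized and skeleton tracking is running (setup omitted).
#include <cstdio>
#include <XnCppWrapper.h>

void printJointSpaces(xn::UserGenerator& userGen,
                      xn::DepthGenerator& depthGen,
                      XnUserID user)
{
    XnSkeletonJointPosition head, leftHand;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_HEAD, head);
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_LEFT_HAND, leftHand);

    // Real-world space: millimeters, origin at the sensor, Z pointing away from it.
    printf("head      (real-world): %.1f %.1f %.1f\n",
           head.position.X, head.position.Y, head.position.Z);
    printf("left hand (real-world): %.1f %.1f %.1f\n",
           leftHand.position.X, leftHand.position.Y, leftHand.position.Z);

    // Projective space: pixel X/Y in the depth image, Z still in millimeters.
    XnPoint3D proj;
    depthGen.ConvertRealWorldToProjective(1, &head.position, &proj);
    printf("head      (projective): %.1f %.1f %.1f\n", proj.X, proj.Y, proj.Z);
}
```

If the head comes out in one space and the left hand in the other, that would explain the mismatch we see.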
Yaozr
I'd like to use a 2D component to give feedback about the user's movement in front of the Kinect: for example, when the user waves his hand, a fixed-size 2D circle waves on the GUI accordingly. So far I have only found the Flash way of doing 2D interface things, but is there an even simpler way to add a 2D GUI component in front of the 3D space, if I wish to avoid using a planar object mesh to mimic a 2D component? Thanks for tips.
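One idea I am considering (not verified: whether Openspace3d exposes the Ogre overlay system to plugITs is my assumption, and the names "FeedbackOverlay", "HandCursor" and "CircleMaterial" are illustrative) is a plain Ogre 1.x screen overlay instead of a planar mesh:

```cpp
// Sketch: a fixed-size 2D panel drawn on top of the 3D scene with
// Ogre 1.x overlays. The material would carry the circle texture.
#include <OgreOverlay.h>
#include <OgreOverlayManager.h>
#include <OgreOverlayContainer.h>

Ogre::Overlay* createHandCursor()
{
    Ogre::OverlayManager& mgr = Ogre::OverlayManager::getSingleton();
    Ogre::Overlay* overlay = mgr.create("FeedbackOverlay");

    Ogre::OverlayContainer* panel = static_cast<Ogre::OverlayContainer*>(
        mgr.createOverlayElement("Panel", "HandCursor"));
    panel->setMetricsMode(Ogre::GMM_PIXELS);  // position/size in pixels
    panel->setDimensions(64, 64);             // fixed-size circle
    panel->setMaterialName("CircleMaterial"); // material with the circle texture

    overlay->add2D(panel);
    overlay->show();
    return overlay;
}
```

Moving the circle with the hand would then just be panel->setPosition(x, y) using the hand's projected pixel coordinates.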
BR/Yaozr
Not knowing the answer... but I am trying to do a similar thing, so just take this into consideration.
Your setup seems problematic to me. Do you mean you want to hold a paper marker so that the AR marker plugin poses the andy-robot onto the marker, and you then move your body to make the andy-robot follow your actions? How would you solve the alignment of your body movement with the paper marker? Or do you mean you print a big paper marker, stand on it, and move your body, so that the expected effect is the andy-robot moving according to your actions on top of the marker? In that case, why do you need the paper marker at all, since NITE has already calculated your physical position accurately, just as the armarker plugin does to estimate the paper marker's position?
I think "kinect AR" means using the body to control the robot (the augmented part) so that it matches your video (the reality part). I have met another issue so far (see my post).
Hope this helps, and maybe with the Openspace3d folks' help we could get a meaningful KinectAR example.
Yaozr
Confirmed with kinectdemos/skeleton.xos and kinectdemos/object_control.xos when I switch on RGB mode to check the background input in detail, but I have only tested with one Kinect...
In non-mirror mode, if the original video ranges from (0,0) at the upper-left corner to (640,480) at the bottom-right corner, the plugin takes only from (0,0) to around (400,450), indeed a vertical strip, and stretches it to fit the required size, in both the 320x240 and 640x480 cases.
In mirror mode, it takes from (240,0) to (640,450), again a vertical strip, and stretches it to fit the required size, in both the QVGA and VGA cases.
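To make the stretching concrete, here is the back-of-envelope mapping implied by the numbers above (illustrative only; the offsets and strip size are my measurements, not documented values):

```cpp
// Map a pixel (u, v) of the displayed 640x480 image back to the source
// strip that the plugin actually samples, per my measurements above.
struct Point { float x, y; };

Point displayedToSource(float u, float v, bool mirror)
{
    // Non-mirror: a 400x450 strip starting at (0,0);
    // mirror: the same strip size, but starting at x = 240.
    const float stripX = mirror ? 240.0f : 0.0f;
    Point p = { stripX + u * (400.0f / 640.0f),  // horizontal scale 0.625
                v * (450.0f / 480.0f) };         // vertical scale ~0.94
    return p;
}
```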
Am I the only one who has this issue?
Yaozr
thank you arkeon!
yaozr
I found the Kinect plugin installation package inconvenient for developers. Before installing Openspace3d I had already set up all the OpenNI/NITE/sensor stuff; after installing Openspace3d I realized the Kinect plugin was not there and I had to install the Kinect addon package. Unfortunately it asked me to uninstall OpenNI/NITE/sensor; that's fine. After uninstalling and freshly installing the Kinect addon, everything worked perfectly until I needed to develop: I then realized it had only installed the redistributable version and had removed all my include directories. To develop again I had to uninstall OpenNI/NITE once more and reinstall the dev versions of the two... you see what kind of issue this is :-)
It would be much easier if the Kinect plugin were shipped with the Openspace3d package; my 5 cents.
Yaozr
Many thanks Arkeon for your prompt response! Still trying to digest your hint...
Do you mean that the Kinect device plugIT already gets all the bone positions, and that this should be enough to drive the Kinect user plugIT directly without an extra camera projection? The effect would then be like the original NITE skeleton sample, and we could pose Sinbad according to the bones' 2D positions; in that case Sinbad is rescaled at each frame to fit the screen image.
Or do you mean that after the Kinect device plugIT we can add one step that transforms the joint positions in 3D camera space so that the projection is a front-view projection (or an orthogonal projection?), and we get a realistic 3D Sinbad body at the same size as the human subject? In that case Sinbad keeps the same physical size as the human and is not rescaled each frame.
BTW, I made my first attempt at a clean build of the OpenniSCOL vs2008 project, and it failed right at the start with an error in ScolPlugin.cpp at "cbmachine ww;" saying that cbmachine was not defined. I was confused by the project settings: sdk/include/SCOL/scolplugin.h seems never to be included, so the error message seems to make sense; the project only includes the other ScolPlugin.h file in the project directory. As I did not find documentation about the SDK path, I let the system environment variable "SCOL_SDK" point to "SVN\scol_sdk\include\SCOL"; I hope that is correct? Tips are greatly appreciated.
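In case it helps anyone else hitting the same error, the workaround I would try (an assumption on my side, since I found no docs; cbmachine apparently lives in the SDK's scolplugin.h):

```cpp
// Assumed fix (untested): make the SDK header that declares 'cbmachine'
// visible to the project, instead of only the local ScolPlugin.h.
//
// vs2008: Project Properties -> C/C++ -> General
//   -> Additional Include Directories: $(SCOL_SDK)
//
// then at the top of ScolPlugin.cpp:
#include <scolplugin.h> // from %SCOL_SDK%, i.e. the SDK's include\SCOL directory
```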
BR/Yaozr
Many thanks, great job!
This is the Kinect AR actually wanted, in my view.
In other words, I'm interested in using the NITE output as the "armarker" instead of the paper marker. I found two places with relevant plugins: one is svn/plugins/BitmapToolkit and the other is svn/plugins/OpenNISCOL. My understanding is that I have to create a plugin of my own, say an ArKinectMarker, which takes the NITE skeleton as the "marker" and can attach an object accordingly, as "AR marker Sinbad" does in the example. This behavior is not exactly the same as the "object follow" example used in the Kinect skeleton demo, since I also want the rendered object to match the input 2D video from the Kinect RGB camera, instead of just imitating the subject's body motion in 3D space; my primary goal is not gaming but AR.
It seems to me that all the elements are there and I somehow need to package them as a new plugin in the editor, but I failed to find documentation on how to customize or add a new plugin to the existing system.
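To make the idea concrete, here is a rough sketch of what I imagine the core of such an ArKinectMarker would do (my own naming; plain OpenNI 1.x skeleton calls, not the actual Scol plugin interface):

```cpp
// Sketch: treat a NITE joint as if it were a detected paper marker,
// i.e. output a camera-space pose (rotation + translation) that an
// object can be attached to. Assumes skeleton tracking is running.
#include <XnCppWrapper.h>

// Fills 'pose' as a row-major 4x4 matrix: rotation in the upper-left 3x3,
// real-world translation (millimeters) in the last column.
bool jointAsMarkerPose(xn::UserGenerator& userGen, XnUserID user,
                       XnSkeletonJoint joint, float pose[16])
{
    XnSkeletonJointPosition    pos;
    XnSkeletonJointOrientation rot;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, joint, pos);
    userGen.GetSkeletonCap().GetSkeletonJointOrientation(user, joint, rot);
    if (pos.fConfidence < 0.5f || rot.fConfidence < 0.5f)
        return false; // joint not reliably tracked this frame

    const XnFloat* m = rot.orientation.elements; // 3x3 rotation, row-major
    const float p[16] = { m[0], m[1], m[2], pos.position.X,
                          m[3], m[4], m[5], pos.position.Y,
                          m[6], m[7], m[8], pos.position.Z,
                          0.0f, 0.0f, 0.0f, 1.0f };
    for (int i = 0; i < 16; ++i)
        pose[i] = p[i];
    return true;
}
```

Rendering an object with that camera-space pose over the Kinect RGB background should make it line up with the video, which is what the armarker plugin does for paper markers.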
Any light shed on this would be greatly appreciated!
Yaozr
found it :-)
Just replace the AR capture with the Kinect device inst, use it as the background with the RGB video mode, and also select "detect AR mark".
A newbie here... Could anyone please give a bit of instruction on how to enable this?
Many thanks in advance!
Yaozr