Re: [OpenNI-dev] Digest for openni-dev@googlegroups.com - 6 Messages in 6 Topics


George Toledo

Dec 5, 2012, 4:25:57 PM
to openn...@googlegroups.com
Is creating a production node that inputs video texture from some arbitrary external source supported?



On Dec 5, 2012, at 12:48 PM, openn...@googlegroups.com wrote:

Group: http://groups.google.com/group/openni-dev/topics

    Lorne Covington <mediado...@gmail.com> Dec 05 01:45PM -0500  

    This looks like a very interesting camera for near-distance work (15cm -
    1m):
     
    http://software.intel.com/sites/default/files/article/325946/creativelabs-camera-productbrief-final.pdf
     
    It's a Time-Of-Flight (TOF) depth camera, with a 1280x720 visual camera
    and dual mics.
     
    Although it uses different tech, it should provide a point-cloud and
    depth image much like a Primesense, but with NO shadow in the depth image.
     
    Anybody have one yet? Mine's been shipped, and there's a plugin to use
    the Intel SDK for it with vvvv, which is how I will initially test it.
     
    Anyone know if these cameras will be integrated into NITE and OpenNI? It
    would be nice, as using this camera WITH a Primesense camera would give
    excellent depth coverage.
     
    Ciao!
     
    - Lorne
     
    --
    http://noirflux.com
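Lorne's idea of pairing this camera with a PrimeSense unit for better coverage can be sketched concretely. Assuming both depth maps have been registered to the same viewpoint (a real setup would need extrinsic calibration, which is glossed over here) and use 0 for "no reading" as PrimeSense-style maps do, a merged map can simply take the nearest valid reading per pixel, so a shadow in one map is filled from the other. A minimal self-contained sketch, not OpenNI API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Merge two depth maps registered to the same viewpoint.
// Convention (PrimeSense-style): 0 means "no reading" (shadow/out of range).
// For each pixel, prefer any valid reading; if both are valid, keep the
// nearest (smallest) depth.
std::vector<uint16_t> mergeDepth(const std::vector<uint16_t>& a,
                                 const std::vector<uint16_t>& b)
{
    assert(a.size() == b.size());
    std::vector<uint16_t> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] == 0)      out[i] = b[i];               // shadow in a: use b
        else if (b[i] == 0) out[i] = a[i];               // shadow in b: use a
        else                out[i] = std::min(a[i], b[i]); // keep nearest
    }
    return out;
}
```

This also shows why a TOF camera with no shadow helps: it leaves fewer zero pixels for the other map to fill.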

     

    Amal Raj <raj....@gmail.com> Dec 05 10:17AM -0800  

    Hi all,
    I am a newbie. I got an Xtion Pro sensor recently. Will OpenNI human
    detection and tracking work when we place the sensor close to the ceiling,
    looking down like a CCTV camera? If not, what are some good algorithms
    that I could implement to get it working?
     
    And how does human detection work in normal scenarios? Any good
    algorithm that I can learn from?
     
    Thanks
    Amal.
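On the ceiling-mounted question: NITE's user tracking is generally reported to assume a roughly frontal or side view, so for top-down setups a common alternative is depth background subtraction plus blob counting: anything significantly closer to the sensor than the empty-room background (head and shoulders, seen from above) is foreground. A minimal self-contained sketch of that idea (a hypothetical helper, not the NITE algorithm):

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Count person-sized blobs in an overhead depth frame by comparing it to a
// background map of the empty room. A pixel is foreground when it is more
// than `threshold` depth units closer to the sensor than the background.
// 0 is treated as "no reading" in both maps.
int countOverheadBlobs(const std::vector<uint16_t>& background,
                       const std::vector<uint16_t>& frame,
                       int w, int h, int threshold, int minPixels)
{
    std::vector<char> fg(w * h, 0), seen(w * h, 0);
    for (int i = 0; i < w * h; ++i)
        fg[i] = (frame[i] != 0 && background[i] != 0 &&
                 background[i] - frame[i] > threshold); // closer = person
    int blobs = 0;
    for (int i = 0; i < w * h; ++i) {
        if (!fg[i] || seen[i]) continue;
        // BFS over 4-connected foreground pixels to measure the blob.
        int size = 0;
        std::queue<int> q;
        q.push(i);
        seen[i] = 1;
        while (!q.empty()) {
            int p = q.front(); q.pop(); ++size;
            int x = p % w, y = p / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h)
                    continue;
                int np = ny[k] * w + nx[k];
                if (fg[np] && !seen[np]) { seen[np] = 1; q.push(np); }
            }
        }
        if (size >= minPixels) ++blobs; // ignore small noise blobs
    }
    return blobs;
}
```

This only counts and localizes people; it does not give a skeleton, which is usually not recoverable from a straight-down view anyway.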

     

    Zafar Ansari <zafar...@gmail.com> Dec 05 04:26AM -0800  

    Thanks Vahag, but this dataset has very few samples of ASL; it is a more
    general-purpose dataset. I am now thinking of making a dataset of my own. :)
     
    On Tuesday, 4 December 2012 11:18:47 UTC+5:30, Vahag wrote:

     

    generalh <gene...@noos.fr> Dec 05 04:25AM -0800  

    Hello,
    I'm trying to use the Kinect, and I get an "Illegal instruction" error as
    soon as I try to run:
     
    DAVWS2:x64-Debug generalh$ ./Sample-NiUserTracker
    Illegal instruction
    DAVWS2:x64-Debug generalh$ ./NiSkeletonBenchmark
    Reading config from: '../../Config/SamplesConfig.xml'
    Illegal instruction
     
    But it works like a charm for
     
    DAVWS2:x64-Debug generalh$ ./NiViewer
     
    DAVWS2:x64-Debug generalh$ ./Sample-NiSimpleRead
    Reading config from: '../../Config/SamplesConfig.xml'
    Frame 1 Middle point is: 556. FPS: 0.000000
    Frame 2 Middle point is: 561. FPS: 37.294646
    Frame 3 Middle point is: 560. FPS: 35.223670
     
    So my system is OSX 10.6.8 32bit
    libusb-1.0.9
    OpenNI-Bin-Dev-MacOSX-v1.5.4.0
    NITE-Bin-Dev-MacOSX-v1.5.2.21
    Sensor-Bin-MacOSX-v5.1.2.1
     
    In fact I'm trying to use the external jit.openni from Diablodale in Max 6,
    but our debugging pointed to a NITE problem.
    Other Max users have this working like a charm, but for me...
     
    So if anyone could tell me how to track down this bug...
     
    Here is a crash log from trying to use jit.openni, in case it helps:
    Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
    Exception Codes: 0x0000000000000001, 0x0000000000000000
    Crashed Thread: 0 Dispatch queue: com.apple.main-thread
     
    Thread 0 Crashed: Dispatch queue: com.apple.main-thread
    0 ...nVHandGenerator_1_5_2.dylib 0x1deef021
    NACommonData::Downscale(unsigned short const*, unsigned short*, int, int,
    int) + 241
    1 ...nVHandGenerator_1_5_2.dylib 0x1deeef1d
    NACommonData::Downscale(xn::DepthMetaData const&, xn::DepthMetaData&, int)
    + 173
    2 ...nVHandGenerator_1_5_2.dylib 0x1def088a
    NACommonData::Initialize(xn::DepthGenerator const&, Resolution,
    xn::ImageGenerator const*, xn::IRGenerator const*, int) + 1162
    3 ...nVHandGenerator_1_5_2.dylib 0x1dea60ac
    NHAFocusBackgroundModel::Initialize(xn::DepthGenerator const&, std::string
    const&, unsigned int, unsigned int, unsigned int) + 268
    4 ...nVHandGenerator_1_5_2.dylib 0x1deb1cb7
    NHAGestureRecognizerManager::Initialize(xn::DepthGenerator const&,
    xn::DepthMetaData const&, std::string const&, unsigned int) + 903
    5 ...nVHandGenerator_1_5_2.dylib 0x1de80e0a
    XnVGestureGenerator::XnVGestureGenerator(xn::Context&, char const*, char
    const*, xn::DepthGenerator) + 1578
    6 ...nVHandGenerator_1_5_2.dylib 0x1de7f8c4
    XnVExportedGestureGenerator::Create(xn::Context&, char const*, char const*,
    xn::NodeInfoList*, char const*, xn::ModuleProductionNode**) + 516
    7 ...nVHandGenerator_1_5_2.dylib 0x1de8d7ff
    XnVExportedGestureGeneratorCreate(XnContext*, char const*, char const*,
    XnNodeInfoList*, char const*, void**) + 159
    8 libOpenNI.dylib 0x1d6ed547
    XnModuleLoader::CreateRootNode(XnContext*, XnNodeInfo*, XnModuleInstance**)
    + 183
    9 libOpenNI.dylib 0x1d6f8bee
    xnCreateProductionTreeImpl(XnContext*, XnNodeInfo*, XnInternalNodeData**) +
    606
    10 libOpenNI.dylib 0x1d6f8ad8
    xnCreateProductionTreeImpl(XnContext*, XnNodeInfo*, XnInternalNodeData**) +
    328
    11 libOpenNI.dylib 0x1d6f897c xnCreateProductionTree + 44
    12 libOpenNI.dylib 0x1d710cbb
    xnConfigureCreateNodes(XnContext*, TiXmlElement const*, XnNodeInfoList*,
    XnEnumerationErrors*) + 779
    13 libOpenNI.dylib 0x1d7114d4
    XnXmlScriptNode::Run(xn::NodeInfoList&, xn::EnumerationErrors&) + 84
    14 libOpenNI.dylib 0x1d6e0c9c __ModuleScriptRun(void*,
    XnNodeInfoList*, XnEnumerationErrors*) + 108
    15 libOpenNI.dylib 0x1d6f5989 xnScriptNodeRun + 217
    16 libOpenNI.dylib 0x1d6f5662
    xnContextRunXmlScriptFromFileEx + 194
    17 com.hidale.jit-openni 0x1c8f5f37 jit_openni_init_from_xml + 179
    18 com.cycling74.Max 0x000b8aa5 object_method + 963
    19 com.cycling74.MaxAPI 0x0291933b object_method + 139
    20 com.cycling74.JitterAPI 0x1bf48474 jit_object_method + 118
     
     
    Thank you
    Cheers
    generalh

     

    RSAbg <benedik...@researchstudio.at> Dec 04 11:33PM -0800  

    Hi
     
    The problem you mentioned, that people had to move back and forth to
    calibrate, is exactly the key to your problem.
    To recognize people in a scene, OpenNI needs some initial movement
    from the subjects.
    As soon as point clouds are classified as persons, calibration can
    start.
     
    Since you only started the recording AFTER the people were recognized,
    they (I assume) don't move in the recording, so they are not recognized...
    If you had started recording from the beginning, this would have worked
    from the recorded files the same way as from the live video feed.
     
    I'm sorry, I have no idea how to help with your problem other than that
    you need to recapture those videos...
     
    Benedikt
     
    On Friday, November 30, 2012 5:28:14 PM UTC+1, David Menard wrote:

     

    ilyes issaoui <ilyes....@gmail.com> Dec 04 06:03PM -0800  

    Hello every body,
     
    I'm new to Kinect. I tested your code with VS2010, but I get this error:
    LINK : fatal error LNK1123: failure during conversion to COFF:
    file invalid or corrupt
     
    I installed OpenNI 1.5.4 and linked it with Visual Studio.
    I have Windows 7 32-bit.
    Can you help me please?
    Thank you :)
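LNK1123 on VS2010 is a widely reported toolchain issue rather than an OpenNI one: installing .NET Framework 4.5 replaces cvtres.exe with a version the RTM linker cannot use. The commonly cited workarounds are installing Visual Studio 2010 SP1, or disabling incremental linking for the affected configuration, e.g. via this standard MSBuild property in the .vcxproj (or Project Properties > Linker > General > Enable Incremental Linking = No):

```xml
<!-- Workaround sketch for LNK1123 on VS2010 RTM + .NET 4.5:
     disable incremental linking so cvtres.exe is not invoked.
     Installing VS2010 SP1 is the more permanent fix. -->
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
  <LinkIncremental>false</LinkIncremental>
</PropertyGroup>
```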

     


--
You received this message because you are subscribed to the Google Groups "OpenNI" group.
To post to this group, send email to openn...@googlegroups.com.
To unsubscribe from this group, send email to openni-dev+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/openni-dev?hl=en.