If you load up a mesh in MeshLab, you'll notice that a face pointing directly at the camera is fully lit, and as you rotate the model the lighting changes with whichever face currently faces the camera. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Off the top of my head, I think it somehow assigns colors per face in the shader. If the angle between the face normal and the view direction is zero, the face is fully lit (in its own color); otherwise it is lit in proportion to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBOs. I can even assign per-vertex colors. However, I don't know how to achieve a similar effect, since as far as I know shader attributes are specified per vertex, not per face. A quick search revealed questions like this, but I got confused when the answers talked about duplicate vertices.
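The lighting rule described above boils down to a clamped dot product. Here is a rough sketch in Python (the vectors are made-up illustrative values, and real shaders would do this on the GPU):

```python
# Flat-shading intensity: a face is lit in proportion to the dot
# product between its unit normal and the unit direction to the camera.
def face_intensity(normal, to_camera):
    dot = sum(n * c for n, c in zip(normal, to_camera))
    # Clamp at zero so faces pointing away from the camera go dark
    # instead of going negative.
    return max(0.0, dot)

# A face looking straight at the camera is fully lit...
print(face_intensity((0, 0, 1), (0, 0, 1)))   # 1.0
# ...and a face perpendicular to the view direction receives nothing.
print(face_intensity((1, 0, 0), (0, 0, 1)))   # 0.0
```

In GLSL the same idea is typically `max(0.0, dot(normal, viewDir))` in the fragment shader.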
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces that don't share the same normal. For example, a cube would have 24 vertices instead of 8, because the "duplicates" would have different normals.
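A sketch of that duplication for a cube, in Python (function and variable names are mine; a real loader would interleave this data into a VBO):

```python
def flat_cube():
    """Build flat-shaded cube data: each of the 6 faces gets its own
    4 vertices, so shared corner positions are duplicated with
    different normals. Returns (positions, normals), 24 entries each."""
    positions, normals = [], []
    for axis in range(3):          # x, y, z
        for sign in (-1, 1):       # two opposite faces per axis
            normal = [0, 0, 0]
            normal[axis] = sign
            a, b = [i for i in range(3) if i != axis]
            for u, v in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                p = [0, 0, 0]
                p[axis] = sign     # the face sits on this side of the cube
                p[a], p[b] = u, v  # sweep the other two axes for corners
                positions.append(tuple(p))
                normals.append(tuple(normal))
    return positions, normals

pos, nrm = flat_cube()
print(len(pos))        # 24 vertices instead of 8...
print(len(set(pos)))   # ...but only 8 distinct corner positions
```

Each corner position appears three times, once per adjacent face, each copy carrying that face's normal.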
What's weird is that the faces appeared coplanar before. I guess TCV must try to triangulate them by itself, similar to exporting a Collada or DXF file (into a TrimBim, I suppose?), and since in my case the geometry was complex with a lot of thin elements, it gave up.
@TheCGO, am I warm? Since TCV is originally a Trimble Connect tool, does it convert our SKP file into a TrimBim and triangulate on the fly?
Attached is a chunk of my file, if you want to have a look at the problematic faces some day.
Here is a quick example; you can find other indices with it. Add this script to your face prefab. Also, add a ParticleSystem to visualize the mesh points. This is a dirty hack, and there is no guarantee that face indices will not change in future versions of ARCore.
AR Foundation setup:
Add an AR Face Manager.
Create a new AR Default Face, add your script to it, and assign the same object as the Face and the debug particle.
Create a prefab of the Face and assign it in the Face Manager.
Run on the device.
Everything I have found in the forum is over a year old so hopefully starting a new topic will bring some new interest.
I have the HolidayCoro singing light bulbs and snowman, and I'm looking for someone who has a nice visualizer prop file that will help me sequence these characters in the LOR Sequence Editor. I am more than willing to pay for a good prop file that includes all 8 channels for the snowman and another prop file with all 8 channels for the singing light bulbs. The prop file should be a complete image of each character with the proper channels assigned and lights drawn as in the HolidayCoro instructions. Even something close that I can modify would be good at this point.
I would give you the files if I had them, but those are props I don't have files for. You can lay out props like that in a couple of different ways: make a grid in Excel, overlay the picture, number the different channels, then use that file for import. I use xLights and I've used this method successfully several times. Pixel Editor is LOR's version of xLights, so I think it should act the same, but I don't have experience with Pixel Editor. With that in mind, xLights has a way of mapping a prop using a camera, so you could use the xLights tool to build your prop and then import it. It also has a tool to overlay your picture on a grid to lay out your channels, which I think works better than Excel. You could also email HolidayCoro and see if they can provide you with the files.
No need to pay; there are people here, or used to be, who share them. I don't have the bulbs (props), but you don't really need more than one of the HC prop files if you are talking about the mini LEDs/incans. The tree, bulbs, and pumpkins all have 8 channels. I have every HC incan prop and only 3 prop files: pumpkins (not HC, but they are custom), trees, and Rudolph. I just use what I have.
Thank you, everyone.
I have contacted HolidayCoro and they have told me that they are working on LOR prop files for all of their singing characters, no projected completion date so far.
Checked LandoLights.com a few times; nothing there that I can find is close enough to use for my characters. I have downloaded a few, but they are not what I am looking for. I have found some singing tree props that may work for the light bulbs with a little modification.
I still have to check out the Object Creator; that may be the best answer. Thanks again for the great replies!
Good luck with HC. Love them to death, but they put watermarks on their clip art, making it impossible to use with Object Creator. Ask me how I know. I gave up on my "singing Santa" and used my reindeer head instead. Same channels.
Heard back from Dave at HC, they are trying to figure out if they should create a Visualizer File, Visualizer Prop File or a Pixel Editor Prop File. All I really need is a good Visualizer File with an image of the snowman and the fixtures drawn on for each channel. The visualizer file can be used in the pixel editor or the sequence editor, works for me. Would there be a benefit to creating the prop files rather than just the visualizer file? I still have so much to learn .....
I have the pair that I recently completed for mine, and I used JR's order. I'll post a link for the file in the "Now Requesting Requests for Singing Faces" thread shortly. I don't have access to the file at the moment.
Now I only need to get the singing snowman and everything will be set for this year's display. Even a good image of the singing snowman that I could use as a background for a visualizer file would be great; I could draw the fixtures on myself and create my own prop. I am more than willing to compensate someone for providing a file that I can use and that saves me the time of trying to create one myself.
We don't do anything for Halloween, so we don't need the monsters, but thanks for the offer. I'm sure other folks could use them. I don't know how anyone can do both a Halloween display and a Christmas display; the Christmas display is already more than I can handle every year.
For my CCLAB Arduino final, we have to make something that has both input and output, and also inject some personality into it. So I made this music visualizer that shows a smiley face as long as the music is not too loud, and an annoyed face whenever the surrounding music is too loud.
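The input-to-face mapping above is just a threshold check. A minimal sketch in Python (the threshold value and function names are made up for illustration; on the Arduino the reading would come from the analog microphone pin):

```python
# Map a sound level to a face: smiley while the music stays under a
# threshold, annoyed when it gets too loud.
LOUDNESS_THRESHOLD = 600   # hypothetical cutoff on a 0..1023 ADC reading

def pick_face(level):
    """Return which emoticon to draw for a given sound level."""
    return "annoyed" if level > LOUDNESS_THRESHOLD else "smiley"

print(pick_face(300))   # smiley
print(pick_face(900))   # annoyed
```

In the actual sketch the same comparison would sit in `loop()`, with the chosen face drawn to the DigiPixel or the LED matrix.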
The first experiment was conducted by following this tutorial and modifying it with the DigiPixel library to show both the smiley and annoyed faces. The sound visualizer is displayed on the LED matrix, and the emoticon is displayed on the DigiPixel according to the beat.
Next, I did the opposite: displaying the sound visualizer on the DigiPixel and the emoticon on the Adafruit 1.2" 8x8 LED Matrix. I followed this Tiny Arduino Music Visualizer tutorial and got stuck translating how to map the variables and functions to work on the DigiPixel. That was before I realized that the DigiPixel library ships with example sketches, including one called DigiPiccolo! That made my life so much easier; I modified the code from there.
Using muscle-potential sensors, an electro-stimulator, and software, MANABE attempted to discover whether the facial expression of one person can be copied onto another and, if so, how the process and the results can be presented as documented images and as a performance. Many experiments were performed in the production process; for example, the face was treated as an input/output device to generate music from facial expressions, or vice versa.
I want to create a schedule that picks up the Net Surface Area of the outside face of a wall. The problem I'm having is that the area calculation works for a single-skin wall, but as soon as the composite has two or more skins, the calculation counts each skin as a separate wall. For example, a 2m x 3m wall with 4 skins shows as 24sqm instead of 6sqm. I've attached a demonstration. Has anyone else had this issue? Or have I selected the wrong thing somewhere?
I realised earlier this arvo it was because the schedule had been duplicated from a component. As soon as I set it up in the element one I was sweet! I should've spotted that before . Thanks for following up!
So both artists used technology to let their faces display emotion. It sounds a little backwards: the one thing computers cannot feel is emotion. It is probably one of the biggest distinctions between man and computer. Both artists seem to explore human emotion and how it is displayed on the face. The data characteristic of each emotion is digitized and reproduced. N. Katherine Hayles states about this informatization of the self (the virtual body):