Most of the functions in assigner were designed to be as fast as possible. A computer with 16 GB of RAM is recommended; more CPU cores and memory mean faster computation. If you decide to keep intermediate files during assignment analysis, you will need a large external drive (disk space is cheap). A solid-state drive and Thunderbolt cables will provide fast input/output.
Inductors are similar: the footprints are often generic, but the 3D model does not always match. For example, they may have a different height. (I think this is currently not doable, because there is no 3D model assigner.)
Where should the information about which part number matches which footprint be stored?
Maybe this information can be stored in a database.
I found this: Part-DB.
Then CvPcb could access this database and automatically assign the footprint (and the matching 3D model).
Maybe only the path to the footprint is stored in the database, not the footprint itself.
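To make the idea concrete, here is a minimal sketch of such a lookup table using SQLite. The schema, column names, and the example part number are all hypothetical; the point is that the database stores only the footprint path (library nickname plus footprint name) and the 3D model path, which a tool like CvPcb could then resolve against its own libraries.

```python
import sqlite3

# Hypothetical schema: store only *paths*, not the footprints themselves.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE part_footprint (
        part_number TEXT PRIMARY KEY,
        footprint   TEXT NOT NULL,  -- e.g. "Inductor_SMD:L_12x12mm_H8mm"
        model_3d    TEXT            -- path to the matching 3D model
    )
""")
conn.execute(
    "INSERT INTO part_footprint VALUES (?, ?, ?)",
    ("SRR1260-100M",                       # hypothetical inductor part number
     "Inductor_SMD:L_12x12mm_H8mm",
     "${KICAD_3D_DIR}/L_12x12mm_H8mm.step"),
)

def lookup(part_number):
    """Return (footprint, 3D model path) for a part number, or None."""
    return conn.execute(
        "SELECT footprint, model_3d FROM part_footprint WHERE part_number = ?",
        (part_number,),
    ).fetchone()
```

A footprint assigner could then call `lookup()` for each part number in the schematic and fall back to manual selection when it returns `None`.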
Diodes and inductors come in many hundreds of different sizes, and it does not make much sense to turn all of these into fully specified schematic symbols. That would not make anything easier: it would just move the selection of a footprint from CvPcb to selecting the right fully specified part when placing each diode on your schematic.
I think the Digikey library has Digikey part numbers in it, but I have only experimented with it very briefly. It has lots of duplicates of schematic symbols already in KiCad, and the graphics of the KiCad symbols look much better designed.
About the parts database.
This is a recurring subject on this forum; it comes up a few times each year, and there have been multiple attempts to implement it.
I am not interested in this part, so will leave further comments to others.
At first I assigned a single part number.
But over time I kept getting the question: "we can get that part in 12 weeks; will you wait, or will you specify what to use instead?" So now I minimize the number of such questions by assigning a list of part numbers (as long a list as I can find).
That is not done in KiCad but in a LibreOffice spreadsheet.
Implement the look system using LookProcessor plugins, where each could implement its own assignments based on the attribute and shading-engine relationship data. This would mean that the current VRayProxy, aiStandIn and basic node support would turn into a VRayProxyProcessor, an ArnoldStandinProcessor and a NodesProcessor, which could later be extended with a USDProcessor, etc.
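A rough sketch of what such a plugin interface could look like, under the assumption of a simple family-based dispatch. The class names, the `families` attribute, and the `assign` signature are all hypothetical illustrations, not the existing pipeline API.

```python
from abc import ABC, abstractmethod

class LookProcessor(ABC):
    """Base class for per-node-type look assignment (hypothetical API)."""

    # Node families this processor knows how to handle
    families = []

    @abstractmethod
    def assign(self, node, attributes, shading_engines):
        """Apply attribute values and shader relationships to `node`."""

class VRayProxyProcessor(LookProcessor):
    families = ["VRayProxy"]

    def assign(self, node, attributes, shading_engines):
        # A real implementation would set V-Ray proxy overrides here;
        # returning a tag just makes the sketch observable.
        return "vray:{}".format(node)

def processor_for(family, processors):
    """Pick the first registered processor that supports `family`."""
    for proc in processors:
        if family in proc.families:
            return proc
    return None
```

With this shape, adding USD support later would just mean registering one more subclass, without touching the dispatch code.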
Support cross-project shader assignments (e.g. on models loaded through Library Loader), as per #2481. I wonder, however, how well this will hold up in production with paths from other projects, and whether site sync / project dirmapping will understand them. This will grow complex quickly, with many edge cases. Would it be worth it?
I would love to know which of these might be high on your priority list of feature requests too - or which of these you think are meh and would be redundant. Definitely point out other ideas or propose solutions or different approaches which might make even some of these points obsolete via a much better solution.
Something to look out for here would be cyclic dependencies, but that is something currently missing in validation for, for example, pointcaches as well. I.e. you load in pointcacheA, then (unintentionally) publish to pointcacheA, so updating pointcacheA now pulls in itself.
How do you imagine this working from a user perspective?
We currently have some geometry swapping, for example for aiStandIn: a proxy in the viewport that is swapped at render time. Is it that kind of thing? Render-time look swapping?
This same pipeline worked in Houdini for export and import as well (it could have worked with any other Arnold plugin, but we only used those two). Artists were able to do all lookdev and author Arnold shading networks from Houdini if they preferred, making vanilla Houdini shader assignments as they normally would; the export/publish pipeline would translate the assignments into operators and put them together in a single .ass file that was interoperable with the one created in Maya, closing the bridge between lookdev and lighting in Maya and Houdini.
Batch assignment is something I have already seen in one pipeline, and I like it: with it you can automate the loading of assets and apply shaders at the same time, probably something like what @fabiaserra mentioned as an assembly.
We are going to try to move this along with an initial plan and explanations of the concepts. The tasks we need to cover should focus on moving away from using cbId for matching looks with meshes, in a gradual transition so current productions can use both the old and the new way of assigning looks.
We would replace the existing Assign menu item in the Look Assigner with two separate menu items, Assign by id and Assign by path, so the user can choose which method to try depending on the production.
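The two methods differ only in which key is used to pair scene meshes with look members. A minimal sketch of both matchers, assuming meshes are described by simple dicts with `cbId` and `path` keys (hypothetical shapes, not the actual pipeline data model):

```python
def match_by_id(meshes, member_ids):
    """Match scene meshes to look members via the cbId attribute (legacy way)."""
    by_id = {m["cbId"]: m for m in meshes if m.get("cbId")}
    return {i: by_id[i] for i in member_ids if i in by_id}

def match_by_path(meshes, member_paths):
    """Match scene meshes to look members via their hierarchy path (new way)."""
    by_path = {m["path"]: m for m in meshes}
    return {p: by_path[p] for p in member_paths if p in by_path}

# Hypothetical scene: the second mesh lost its cbId (e.g. after duplication),
# so it can only be matched by path.
meshes = [
    {"path": "|asset|geo|body", "cbId": "abc123:0001"},
    {"path": "|asset|geo|eyes", "cbId": None},
]
```

Exposing both as separate menu items means a production can fall back to path matching exactly in the cases where ids are missing or stale.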
Asset relationship alternative
Currently, finding the looks of interest for a mesh is done by looking up the cbId, which contains asset id information. As an alternative, we would like to propose using input links to find looks. These input links are already starting to be relied upon in AYON, so they are a good fit for finding matching looks as well.
Since links can become quite complex, we have mapped out what they can look like on the Miro board (Miro Online Whiteboard for Visual Collaboration) under Asset Relationship Exploration > Link based matching.
The links will get established when publishing anything and will show a dependency between versions.
When publishing looks specifically, we would also traverse the input graph to the root (often a model, or several models) and establish a direct link from the model(s) to the look. This helps to find branches of looks to present to the user.
When it comes to finding the looks to present to the user, we would traverse through the input links. A breakdown of the link traversal can be found on the Miro board under Asset Relationship Exploration > Link Traversal.
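The traversal itself can be a plain breadth-first walk over the link graph. A sketch under simplifying assumptions: `links` is a dict from a version to the versions it is linked to (a flattened stand-in for the AYON link records), and `look_versions` is the set of versions whose product type is "look".

```python
from collections import deque

def find_looks(start, links, look_versions):
    """Collect look versions reachable from `start` by walking link edges.

    Breadth-first, with a visited set so shared upstream versions
    (and any accidental cycles) are only processed once.
    """
    found = []
    seen = {start}
    queue = deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt in seen:
                continue
            seen.add(nxt)
            if nxt in look_versions:
                found.append(nxt)
            queue.append(nxt)
    return found
```

The direct model-to-look links established at publish time (previous paragraph) are what make this walk short: the UI can usually find candidate looks one or two hops away from the selected model.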
This is a tricky one: if you have a mesh in the scene that is not containerized (for instance after duplication), then we could easily assign a look, but not really find the compatible looks in the first place. On the other hand, you could always find the correct look manually and use the assigner just to apply it to the selected models.
Not much episodic and feature work on your side, I assume. This becomes unviable as the shot count grows and you start pushing new products into an asset even though they belong to a shot. Not a problem with 50 shots (arguably), but a major problem with 1000.
If we were unable to actually use the links across the pipeline, and to do so efficiently, we might as well not have them. Links are one of the most requested features across the board, and we will use them as much as is reasonable. By the way, using links for discovery also allows you to manually assign looks to models and caches without republishing, rather than relying only on what is stored in the mesh metadata itself. For example, if you make a tree with procedural shading and publish that look, you could then publish 20 more tree models and just mark the first look as compatible with all of them.
So my Xbox 360 controller detects LT and RT for gas, and the sticks for steering, but it doesn't detect any of the other buttons in the Keys/Buttons section. When I press a button to assign it, nothing happens.
I recently purchased a Korg PolySix and wanted to add MIDI to it. There are several options available on the web, but I wanted to design my own as a learning exercise. I decided to make it plug in as a replacement for the key assigner MCU, so that it could also be used to repair a synth with a bad MCU chip. I felt that all I really needed was Note On/Off, but later decided to add MIDI arpeggiator clocking as well. After I got it working, a friend mentioned that the Poly61 used the same key assigner CPU, so I decided to try my design in that synth as well. I wanted to use through-hole components to make it easier for people to build, which meant that the board would be larger than the 40-pin DIP key assigner and would sit over other components on the Korg PCB. The physical layout of my PCB was geared toward the PolySix, so it was a little tough to get it installed, but I did, and it worked fine in the Poly61 too. So I made a new PCB just for the Poly61. Specifically, it can fit on a KLM-509 PC board and replace the key assigner CPU, which is either 8049C 384 or 8049C 217 (also used in the PolySix). Here is how it looks installed in a Poly61 (the MIDI cable is not attached, but you can see the two-pin connector for it).