Zemax Led


Tony Phan
Aug 4, 2024, 10:10:55 PM
to thepirecho
You did, however, mention that you have a rather large number of configurations, which makes for a large merit function, which in turn makes the resulting file behave slowly. Though I cannot see your file, I am thinking that the main drag on your system is really the large size of the merit function.

For example, even with a very simple file (our sample Cooke Triplet), if I make 25 identical 'dummy' configurations and build a merit function that takes all of these configurations into account with high sampling, no axial symmetry assumed, etc. (essentially settings that create as many operands as possible), I can observe similar behavior: even Comment cells take a few extra moments to update compared to a file with an empty merit function. We can also see that the file size is comparatively much larger.


So, my immediate recommendation is to either reduce the merit function size or save as much of it as you can to a .MF file from the Merit Function Editor itself. That way, you could temporarily remove the merit function and make modifications to your model as needed. I also saw that you're trying to modify the merit function itself, so if making edits in the Merit Function Editor is taking an extremely long time, I wonder if you might find some benefit in modifying the .MF file directly (it can be opened and edited in any text editor, but you won't see things like the headers for each column). This is a bit of an extreme step, but with a merit function this large, I think it's all I can recommend at the moment.
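
If you do try editing the saved file, a little scripting can make it easier to audit. Below is a minimal sketch, not anything official: it assumes each operand sits on its own line in the .MF file with the mnemonic (PMVA, ABVA, PROD, ...) as the first whitespace-separated token, so please check a freshly saved file from your own OpticStudio version before trusting that layout. The file name is just a placeholder.

```python
# Minimal sketch: summarize a saved merit function by operand type.
# Assumption: one operand per line, mnemonic as the first token; inspect your
# own saved .MF file to confirm the layout before relying on this.
from collections import Counter
from pathlib import Path

def summarize_mf(path: str) -> Counter:
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        tokens = line.split()
        if tokens:                      # skip blank lines
            counts[tokens[0]] += 1      # first token assumed to be the operand mnemonic
    return counts

if __name__ == "__main__":
    summary = summarize_mf("my_merit_function.MF")   # placeholder file name
    for operand, n in summary.most_common():
        print(f"{operand}: {n}")
```

A quick count like this makes it easier to see which operand types dominate a many-configuration merit function before deciding what to trim.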


I read six values with PMVA, convert them to absolute values (ABVA), multiply some by a constant factor (PROD), add these to the others (SUMM), and then limit the result to a constant value (ABLT). That is about 25 lines in the MF per configuration.


OpticStudio uses multiple threads in optimization at the variable level. So, if you have n optimization variables, it needs n+1 evaluations of the merit function for every optimization cycle, and it is the evaluation of the MF itself that is threaded, not the individual operands. (There are some exceptions, like IMAE and NSTR, but let's leave that.) The MF itself is a single long list of commands, and they are executed sequentially.
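
As a rough sketch of what "threaded at the variable level" means, here is a simplified model in Python; it is not OpticStudio's actual code, just an illustration of one baseline evaluation plus one perturbed evaluation per variable, with those n+1 evaluations being the part that can run in parallel:

```python
# Simplified model of the n+1 merit function evaluations per cycle.
# merit() is a placeholder standing in for one full, sequential pass through
# the whole operand list in the Merit Function Editor.
from concurrent.futures import ThreadPoolExecutor

def merit(x):
    return sum((xi - 1.0) ** 2 for xi in x)

def value_and_gradient(x, h=1e-6):
    points = [list(x)]                      # baseline evaluation
    for i in range(len(x)):                 # plus one perturbed copy per variable
        p = list(x)
        p[i] += h
        points.append(p)
    # The n+1 evaluations are independent of each other, so they can be farmed
    # out to threads. (In this pure-Python toy the GIL limits the real speedup;
    # the point is the structure, not the timing.)
    with ThreadPoolExecutor() as pool:
        f = list(pool.map(merit, points))
    grad = [(f[i + 1] - f[0]) / h for i in range(len(x))]
    return f[0], grad

value, grad = value_and_gradient([0.0] * 176)   # 176 variables -> 177 evaluations
print(value, max(abs(g) for g in grad))
```

Inside each of those evaluations, though, the operand list still runs top to bottom on one thread.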


For a sequence like "read six values with PMVA, convert them to absolute values (ABVA), multiply some by a constant factor (PROD), add these to the others (SUMM), and then limit the result to a constant value (ABLT)", there is no way to thread that, as every line depends on the previous line and so must be performed in order.
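
Written out as plain arithmetic, that chain looks something like the sketch below (the numbers and constants are made up; the point is that each step consumes the result of the previous one, so the lines cannot be spread across threads):

```python
values = [1.2, -0.7, 3.4, -2.1, 0.05, -0.9]   # six values read via PMVA (made-up numbers)
scale = 2.5                                    # constant factor for the PROD step (made up)
limit = 4.0                                    # constant for the final limit step (made up)

abs_values = [abs(v) for v in values]          # ABVA: absolute values of the PMVA reads
scaled = [scale * v for v in abs_values[:3]]   # PROD: multiply some of them by the constant
total = sum(scaled) + sum(abs_values[3:])      # SUMM: add those to the remaining values
result = min(total, limit)                     # clamp the total (stands in for the limit step)

print(result)
```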


But, I am concerned about a merit function that is so long that it bogs down the UI...I can't see it being effective in optimization except on a geological timescale. If you can, I'd suggest you email your file to sup...@zemax.com so one of the engineers there can see exactly what's going on. There may be better ways to do what you want, but it's really hard to say without seeing what the file is doing.


Thanks for sharing your system with us. I've been looking over it and I'm not sure there's anything more we can do to address the speed issue beyond what Mark said. Your merit function is very, very long. One of the longest I've seen. And if it needs to be that size, so be it. But with the number of fields, configurations, and the complexity of the system itself, it just takes time to calculate the data.


Mark's idea should put a stop to the problem. If there's no merit function open, then there's nothing to update. You probably don't need to see the results of any particular line while you're making changes, so this would be a good way to make OpticStudio wait until you actually need it. Is this feasible for you?


If the calculation of the merit function is parallel for every variable (and I have quite a lot), I do not understand why the optimization spends so much time on a single core. The example below is with a reduced merit function. Every cycle, full power is used for about 10 seconds, and then it runs on a single core for about 30-40 seconds. With the original MF I shared, these gaps stretch to about 5 minutes.


Actually, I don't think that is slow at all: in fact it's very fast. You are trying to find the minimum of a function over a 176-dimensional space, where each evaluation needs 101,117 separate calculations. The multi-threaded part is the merit function evaluations (all 177 of them), and the single-threaded part of the cycle is the search in 176 dimensions for the lowest value. You're getting one cycle a minute, which I think is seriously good.
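
To put a rough number on what one cycle involves, just multiplying the figures above:

```python
variables = 176
evaluations_per_cycle = variables + 1           # baseline + one per variable = 177
calcs_per_evaluation = 101_117                  # separate calculations per evaluation
print(f"{evaluations_per_cycle * calcs_per_evaluation:,} calculations per cycle")  # ~17.9 million
```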


I totally get what you're thinking, and you're not being dumb. But not everything can be parallelized. We parallelize what can be parallelized, and the rest has to be done in series. In local optimization we parallelize the n+1 evaluations of the merit function (n is the number of variables, 176 in your case). This gives us the value of the merit function at its current location in solution space and its gradient in 176 dimensions. Then we have to find the minimum value in that space...


But the bottom line is, nothing is wrong with your system, and there is nothing in particular to improve on Zemax's end. You're just asking it to do a very big calculation, and that takes time. I hate to pull the 'when I was your age...' stuff, but I can remember when optimizing doublets, triplets, etc. could take hours, and zoom lens designs took days to weeks.


The best thing I can suggest is to simplify the problem as much as possible, or to take it in a stepwise manner rather than putting everything in at once. Other than that, I think you just have to set the calculation going and leave it to cook.


Zemax is a company that creates and sells optical design software used across several industries, including manufacturing, consumer electronics, aerospace & defense, and more. Their web & customer portal experience had fallen behind over the years, leading Mentor to partner with their team to redesign the zemax.com site and rebuild the customer portal experience by way of a technical architecture rewrite.


The new website & platform experience was designed to give both customers and internal employees a seamless experience when managing licenses, accessing & editing the knowledge base, downloading software, approving purchases, and contributing to the community. This was achieved through a full site redesign alongside consolidation of all existing third-party platforms onto one unified Shopify architecture & experience.


It was especially important to consider the purchase flow when rewriting the technical architecture, as both customers and internal employees need to interface with multiple platforms to make & approve purchases. These flows and technical associations


Once the technical architecture and site IA were considered, it was time to apply our structural recommendations to low-fidelity wireframes & prototypes to ensure the experience was as seamless as intended. Key experiences we focused on included software access & download for customers and license management for internal users.
