Linux Hfss License


Kathryn Garivay, Jul 13, 2024, 12:56:47 AM, to concbacdama

Can someone share a tutorial on how to run ANSYS HFSS on a High Performance Computing (HPC) machine? Please note that the HPC system I am using runs on Linux. I cannot find any graphical user interface, so I have to use the command line. Can someone please share a tutorial on how to run a simple design on the Linux-based HPC system?

Actually, I have checked that, and based on it I was able to start a simulation. The simulation is of a normal dipole antenna, and on a normal PC it takes one to two minutes to complete. When I run the same file on the HPC system, it has taken more than 10 hours and the simulation is still running. It is not giving any error, but it should not take so much time. I don't know what the problem is...

About the second thing: as you said, since you have to use the command line, you will have to transfer the result files to your own computer and view them through the GUI. I believe ANSYS will not allow anyone to view results without checking out a GUI license.

Okay, thank you. Let me go through the ANSYS documentation. I am a new Linux user and not familiar with it. There is a lot of documentation in the ANSYS help, so I normally get confused, as I am not used to it. Can you please share a few links from the ANSYS documentation? It would help me a lot.

Join us for this insightful webinar to discover how HPC and cloud computing unlock the full potential of engineering simulation. Whether you are an engineer, IT manager, researcher, or technology enthusiast, this webinar provides the knowledge and tools to enhance your simulation workflows and drive innovation in your industry.

It is fascinating to compare experimental and simulation results for a tensile test specimen. The ability to conduct tests virtually rather than physically not only increases efficiency but also leads to significant cost savings. In this talk, we will address some of the challenges typically encountered in uniaxial tensile simulations.

The ANSYS HFSS software product is aimed at low- and high-frequency electromagnetic wave propagation simulations. The software was originally acquired from the company Ansoft and has been part of the ANSYS software portfolio since 2008. ANSYS HFSS is a commercial finite element method solver for electromagnetic structures from ANSYS Inc. The acronym HFSS stands for high-frequency structure simulator. Today, ANSYS HFSS is a 3D electromagnetic (EM) simulation software for designing and simulating high-frequency electronic products such as antennas, antenna arrays, RF or microwave components, high-speed interconnects, filters, connectors, IC packages and printed circuit boards. Engineers worldwide use ANSYS HFSS to design high-frequency, high-speed electronics found in communications systems, radar systems, advanced driver assistance systems (ADAS), satellites, Internet-of-Things (IoT) products and other high-speed RF and digital devices.

Further information about ANSYS HFSS, licensing of the ANSYS software and related terms of software usage at LRZ, the ANSYS mailing list, access to the ANSYS software documentation and LRZ user support can be found on the main ANSYS documentation page.

With ANSYS Release 2019.R3, an attempt was made to make the ANSYS Electronics software available on LRZ HPC systems, i.e. the Linux Clusters CMUC2/3.
Unfortunately, the ANSYS Electronics software relies entirely on the proprietary ANSYS scheduler, the ANSYS Remote Solve Manager (RSM), which is not compatible with the SLURM scheduler used on LRZ HPC systems. With direct help from ANSYS developers, it was possible for ANSYS Release 2019.R3 to get electronics solvers such as Maxwell 2D/3D and HFSS running on LRZ HPC systems. However, this required support from the ANSYS developer team was no longer provided for later releases of the ANSYS software.
Therefore, ANSYS Electronics software releases later than version 2019.R3 are currently known to run only on local laptops/workstations (mainly under the Windows operating system, or on Linux with a locally provided ANSYS RSM scheduler). For more specific questions on ANSYS Electronics software on LRZ HPC systems, please send your query to LRZ Support.

Once you are logged into one of the LRZ cluster systems, you can check the availability (i.e. installation) of the ANSYS HFSS software by checking for a corresponding ANSYS Electronics Desktop module:
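A minimal check could look like the following; the exact module name used on the LRZ systems is an assumption here, and `module avail` shows what is actually installed:

```shell
# List available ANSYS Electronics Desktop modules (name pattern is an assumption)
module avail ansys

# Load a specific module and verify the ansysedt binary is on the PATH
module load ansys_em    # hypothetical module name; check the "module avail" output
which ansysedt
```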

ANSYS Electronics Desktop, with all its integrated simulation approaches, can be used in interactive GUI mode on the login nodes solely for the purpose of pre- and/or postprocessing (Linux: SSH option "-Y" for X11 forwarding; Windows: using PuTTY and Xming for X11 forwarding). This interactive usage is mainly intended for quick simulation setup changes that require GUI access. Since ANSYS Electronics Desktop loads the mesh into the login node's memory, this approach is only applicable to comparably small cases. It is NOT permitted to run computationally intensive ANSYS HFSS simulation runs or postprocessing sessions with large memory consumption on the login nodes. Alternatively, ANSYS Electronics Desktop can be run on the Remote Visualization Systems, e.g. for memory-intensive postprocessing and OpenGL acceleration of the GUI.
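From a Linux desktop, a typical interactive session might look like this; the hostname and module name below are placeholders, not the actual LRZ names:

```shell
# Connect with X11 forwarding enabled (replace the host with the actual login node)
ssh -Y username@login.example-cluster.de

# On the login node: load the module and start the GUI
# for light pre-/postprocessing only
module load ansys_em    # hypothetical module name
ansysedt &
```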

It is not permitted to run computationally intensive ANSYS HFSS simulations on the front-end login nodes or the Remote Visualization Systems, in order not to disturb other LRZ users. However, ANSYS HFSS simulations can be run on the LRZ Linux Clusters, or even on SuperMUC-NG, in batch mode. This is accomplished by packaging the intended ANSYS HFSS simulation run in a corresponding SLURM script.
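A sketch of such a SLURM script might look as follows. The partition name, module name, core counts and project filename are placeholders; the ansysedt options follow the general ANSYS Electronics Desktop batch conventions and should be checked against the LRZ documentation:

```shell
#!/bin/bash
#SBATCH -J hfss_dipole          # job name
#SBATCH -o hfss_%j.out          # stdout/stderr file
#SBATCH -D ./                   # working directory
#SBATCH --partition=mpp3_batch  # placeholder partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28    # adjust to the node type
#SBATCH --time=08:00:00         # wall clock limit, incl. buffer for file output

module load ansys_em            # hypothetical module name

# Non-graphical, distributed batch solve of the project file
ansysedt -ng -batchsolve -distributed dipole_antenna.aedt
```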

All parallel ANSYS HFSS simulations on the LRZ Linux Clusters and SuperMUC-NG are submitted as non-interactive batch jobs to the appropriate scheduling system (SLURM) into the different predefined parallel execution queues. Further information about the batch queuing systems and the queue definitions, capabilities and limitations can be found on the documentation pages of the corresponding HPC system (Linux Cluster, SuperMUC-NG). By default, ANSYS supports only commercial schedulers (e.g. LSF, PBS Pro, SGE, ANSYS RSM) for the parallel simulation of ANSYS EM products. More recently, however, ANSYS EM provides a first beta-state implementation of scheduler support for SLURM (called ANSYS EM Tight Integration for SLURM). Based on this still somewhat experimental SLURM scheduler support, the following parallel execution capability is provided for the LRZ cluster systems.

The configuration of the parallel cluster partition (the list of node names and the corresponding number of cores) is provided to the ansysedt command by the batch queuing system (SLURM) through specific environment variables.
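Inside a SLURM job, this information is available through standard SLURM environment variables; a small sketch follows. In a real job these variables are set by SLURM, and the variable name NUM_CORES is purely illustrative:

```shell
# Sample values, set by hand only for illustration; SLURM sets these in a real job
export SLURM_JOB_NODELIST="node[01-02]"
export SLURM_NTASKS=56

# Derive the total core count that would be handed on to ansysedt
NUM_CORES=${SLURM_NTASKS:-1}
echo "Running on nodes $SLURM_JOB_NODELIST with $NUM_CORES cores"
```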

Furthermore, we recommend that LRZ cluster users write regular backup files for longer simulation runs; these can be used as the basis for a job restart in case of a machine or job failure. Good practice for a 48-hour ANSYS HFSS simulation (the maximum time limit) would be to write backup files every 6 or 12 hours. When setting the wall clock time limit, please plan enough buffer time for writing the output and result files, which can be a time-consuming task depending on your application.

Also note that although ANSYS 2020.R1 has been set as the default version, the ANSYS EM Tight Integration for the SLURM scheduler is currently only available for ANSYS EM version 2019.R3.

Assuming that the above SLURM script has been saved under the filename "hfss_mpp3_slurm.sh", the SLURM batch job is submitted by issuing the following command on one of the Linux Cluster login nodes:
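The submission itself is the standard SLURM command; monitoring the job state afterwards is optional:

```shell
# Submit the batch job to SLURM
sbatch hfss_mpp3_slurm.sh

# Optionally check the job state in the queue
squeue -u $USER
```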

Warning: do NOT additionally use mpirun, mpiexec or any srun command to start the parallel processes. This is done in the background by an MPI wrapper invoked by the ansysedt startup script. Also, do not try to change the default Intel MPI to any other MPI version to run ANSYS HFSS in parallel. On the LRZ cluster systems, only Intel MPI is supported and known to work properly with ANSYS HFSS.

Hi Arch Users!
I am trying to get an engineering package to work on my Arch install but am having difficulty. I installed openSUSE in VirtualBox and had the same issue: the licensing program does not work. After some searching online, it came down to LSB not being installed. After I installed lsb-core in openSUSE, I was able to get the licensing manager to work.

I would suspect it's not in Arch Linux because when I STFW I see that LSB only comes as an RPM; however, it is an ISO standard, so if you STFW some more, you should be able to reproduce enough of the LSB to satisfy your software.

I do not get an error as such; the issue is that the license manager issues an ID that I need to give to software services at the university, who then issue my license. That ID is blank. It was blank in openSUSE too, but after I installed the lsb package the ID was issued (as suggested in some online forums).

The license manager (FlexLM) does not issue an ID (HostID). No error message is shown. The full package is ANSYS Fluent. Ubuntu users had a similar issue (seen at www.cfd-online.com) and resolved it by installing the lsb-core package.

I installed openSUSE in VirtualBox to see if it worked there. Initially it did not, but after installing lsb and reinstalling the application, the license manager issued an ID, which is what I want.

So hopefully, if there is anyone out there wanting to run ANSYS Fluent and other ANSYS products on Arch Linux: install the ld-lsb package from the AUR. I will ask the maintainer to mention this.

ld-lsb 3-6 from the AUR, together with lib32-glibc from multilib, the above-mentioned lsb-release and init-functions, and a custom rpm script (from the CFD-Online forum [cfd-online.com/Forums/cfx/25236-ansys-workbench-uncertified-linux-distros.html]) solved all my problems, which amounted to getting lmutil to work so I can gather license statistics.
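For reference, checking the HostID and querying license usage with FlexLM's lmutil looks roughly like this; the license file path is a placeholder:

```shell
# Print the FlexLM HostID of this machine
# (blank output is the symptom of the missing-LSB issue described above)
./lmutil lmhostid

# Query license server status and feature usage (license file path is a placeholder)
./lmutil lmstat -a -c /path/to/ansys.lic
```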
