HFSS15: Known Issues for LSF
Configurations in which the desktop or a remote machine has multiple IP addresses are not supported.
HFSS 12.1 can run DSO on two machines or fewer; DSO fails with three or more machines.
Core dump files may be left behind when a job finishes; results are nevertheless computed correctly.
HFSS 12.1 doesn’t work with LSF versions 7.0.2 and 7.0.3
On Windows, HFSS should be installed on every machine in the cluster.
The firewall should be turned off on all machines in the cluster.
UAC should be disabled on Windows Vista.
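Before submitting jobs, the two Windows prerequisites above can be verified locally on each node. The following is a minimal sketch, not an ANSYS tool; it assumes the standard netsh firewall command and the EnableLUA registry value that controls UAC, so adjust it for your environment.

```python
# Hypothetical helper: run locally on each Windows cluster node to confirm
# that the firewall is off and UAC is disabled before submitting HFSS jobs
# through LSF. Illustrative only; not part of the ANSYS installation.
import subprocess
import winreg

def firewall_profiles_enabled():
    """Return True if any Windows Firewall profile reports State ON."""
    out = subprocess.run(
        ["netsh", "advfirewall", "show", "allprofiles", "state"],
        capture_output=True, text=True, check=True,
    ).stdout
    return any("ON" in line for line in out.splitlines() if "State" in line)

def uac_enabled():
    """Return True if UAC (the EnableLUA registry value) is turned on."""
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
    )
    value, _ = winreg.QueryValueEx(key, "EnableLUA")
    return value == 1

if __name__ == "__main__":
    if firewall_profiles_enabled():
        print("WARNING: firewall is enabled on this node")
    if uac_enabled():
        print("WARNING: UAC is enabled on this node")
```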
LSF sometimes kills an HFSS job (for example, when the job is preempted by a higher-priority job). HFSS does not handle this situation gracefully, leaving a .lock file behind in the project directory; the user must delete the lock file manually before continuing with further analysis.
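A minimal sketch of that manual cleanup step: it removes stale .lock files from a project directory after LSF has killed the job. The file pattern and the example path are assumptions; run it only when no HFSS process is still using the project.

```python
# Remove stale .lock files left in an HFSS project directory after LSF
# killed the job. Only run this when the project is no longer in use.
import glob
import os

def remove_stale_locks(project_dir):
    """Delete any *.lock files found in the given project directory."""
    for lock in glob.glob(os.path.join(project_dir, "*.lock")):
        print(f"removing {lock}")
        os.remove(lock)

# Example (hypothetical path):
# remove_stale_locks("/scratch/user/antenna.hfssresults")
```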
When an LSF job is killed, the MainWin services (watchdog, regss, and mwrcpss) can keep running, and later jobs then cannot start on that machine. The fix is to kill these processes before starting a new job.
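A hedged sketch of that workaround for Linux nodes: it force-kills any leftover watchdog, regss, and mwrcpss processes (the names listed above) before a new job is started.

```python
# Kill leftover MainWin service processes on a node before starting a new
# job. Process names come from the issue description above; adjust as needed.
import subprocess

STALE_PROCESSES = ["watchdog", "regss", "mwrcpss"]

def kill_stale_mainwin_processes():
    for name in STALE_PROCESSES:
        # pkill returns non-zero when no matching process exists; ignore that.
        subprocess.run(["pkill", "-9", "-x", name], check=False)

if __name__ == "__main__":
    kill_stale_mainwin_processes()
```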
Statistics monitoring:
HFSS 12.0: For a DSO job, LSF does not report job statistics correctly. The reported statistics correspond only to the machine on which HFSS is launched and do not account for the resources consumed on the other machines. (Fixed in HFSS 12.1.)
HFSS 12.1 (Linux only): Statistics are reported correctly even for DSO jobs, as long as the LSF installation supports the 'blaunch' command.
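Since correct DSO statistics depend on blaunch being available, a quick pre-flight check such as the following sketch can flag installations where it is missing.

```python
# Check whether the LSF 'blaunch' command (required above for correct DSO
# statistics in HFSS 12.1 on Linux) is available on this node's PATH.
import shutil

if shutil.which("blaunch") is None:
    print("blaunch not found on PATH: DSO job statistics may be incomplete")
else:
    print("blaunch available:", shutil.which("blaunch"))
```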
Analysis fails abruptly when the job runs out of resources (CPU, memory, or disk).
The most common cause of job failure is insufficient resources allocated to the job.
This issue is addressed (via graceful failure) in the next release of the ANSYS EM products.
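One way to reduce this failure mode is to request enough resources explicitly at submission time. The sketch below uses standard bsub options (-n and -R "rusage[mem=...]"); the solver command line itself is only a placeholder, not the exact HFSS batch syntax.

```python
# Submit an HFSS batch run through LSF with explicit slot and memory
# requests so the job is less likely to be killed for exceeding its
# allocation. The solver command is a hypothetical placeholder.
import subprocess

def submit_hfss_job(project, cores=8, mem_mb=16000):
    cmd = [
        "bsub",
        "-n", str(cores),                   # number of slots
        "-R", f"rusage[mem={mem_mb}]",      # per-slot memory reservation
        "-o", f"{project}.lsf.out",         # job output file
        "hfss_batch_placeholder", project,  # hypothetical solver command
    ]
    subprocess.run(cmd, check=True)

# submit_hfss_job("antenna.hfss")
```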