HFSS15: Large Scale DSO Deployment/Configuration
LINUX Cluster configuration
Shared drive for projects: The cluster must provide a shared drive that hosts job inputs; the submitted project must be located on this shared drive (e.g. a sub-folder of the user's home directory). The shared drive must be accessible using the same path on every node of the cluster.
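The same-path requirement can be sanity-checked before submitting jobs. The following Python sketch is illustrative only (it is not part of the HFSS/RSM tooling, and the probe-file approach and function name are assumptions): it verifies that a given project path is visible and writable from the node it runs on, and running it on every node with the same path argument confirms the path is uniform across the cluster.

    import os
    import socket
    import sys
    import tempfile

    def check_shared_path(path):
        """Verify that 'path' (the project folder on the shared drive) is
        visible and writable from this node. Run the same command on every
        node of the cluster with the same path argument."""
        host = socket.gethostname()
        if not os.path.isdir(path):
            print("[%s] FAIL: %s is not accessible" % (host, path))
            return False
        try:
            # Write and remove a small probe file to confirm read/write access.
            with tempfile.NamedTemporaryFile(dir=path, prefix=".dso_probe_"):
                pass
        except OSError as err:
            print("[%s] FAIL: cannot write to %s (%s)" % (host, path, err))
            return False
        print("[%s] OK: %s is accessible and writable" % (host, path))
        return True

    if __name__ == "__main__":
        sys.exit(0 if check_shared_path(sys.argv[1]) else 1)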
'Temp directory' configuration
The temp directory is either on local storage or on storage with equivalent speed characteristics, i.e. the I/O rates of the storage should not be affected by network traffic
The temp directory on a host must have sufficient space to hold the results database for the variations that are solved on it. Note:
This storage is freed at the end of the analysis
The amount of required space depends on the number of engines per node and the cumulative number of variations solved on that node
The amount of required space also depends on the project's compression options. For example, if 'Save Fields' of a parametric setup is OFF, the space requirement is reduced by the amount of space taken up by field solution data. A rough space check is sketched below.
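As a rough feasibility check, the free space in the temp directory can be compared against an estimate of the results-database footprint. The Python sketch below is an illustration, not a documented formula: the per-variation size and the function name are assumptions that must be calibrated against actual solved projects (the size is much larger when 'Save Fields' is ON).

    import shutil

    def temp_space_sufficient(temp_dir, engines_per_node,
                              variations_per_engine, gb_per_variation):
        """Compare free space in temp_dir with an estimated footprint of the
        results databases for the variations solved on this node.
        gb_per_variation is an assumed per-variation size in GB; it depends on
        the project's compression options (e.g. whether 'Save Fields' is ON)."""
        free_gb = shutil.disk_usage(temp_dir).free / 1024 ** 3
        needed_gb = engines_per_node * variations_per_engine * gb_per_variation
        print("free: %.1f GB, estimated need: %.1f GB" % (free_gb, needed_gb))
        return free_gb > needed_gb

    # Example: 4 engines per node, ~10 variations each, ~2 GB per variation.
    temp_space_sufficient("/tmp", 4, 10, 2.0)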
Ansoft RSM environment: In supported scheduler environments, no extra configuration is needed. In the Ansoft RSM environment, the following additional steps are needed:
Ansoft RSM must be running on all nodes of the cluster, and the credentials of the RSM service must allow read/write access to the shared drive. Reason: the remote engine processes are launched using the credentials of the RSM service
Registration of 'desktopjob.exe' with the RSM service: the 'desktopjob' program must be registered with Ansoft RSM using 'desktopjob -regserver'. To verify that the registration succeeded, check that the 'desktopjob' entry in the '<RSM-installation-folder>/AnsoftRSMService.cfg' file is valid (see the check sketched below).
Note (LINUX-specific, critical): Edit AnsoftRSMService.cfg and replace 'desktopjob.bin' with 'desktopjob'
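A quick way to spot the two most common registration problems (no entry at all, or a Linux entry that still points at 'desktopjob.bin') is a plain text scan of the configuration file. The Python sketch below makes no assumption about the internal format of AnsoftRSMService.cfg beyond the substrings mentioned above; the function name is illustrative.

    import sys

    def check_desktopjob_entry(cfg_path):
        """Scan AnsoftRSMService.cfg for the 'desktopjob' registration entry
        and flag the Linux-specific 'desktopjob.bin' problem noted above."""
        with open(cfg_path) as cfg:
            text = cfg.read()
        if "desktopjob" not in text:
            print("No 'desktopjob' entry found: run 'desktopjob -regserver' first")
        elif "desktopjob.bin" in text:
            print("Entry points at 'desktopjob.bin': edit the file and replace "
                  "it with 'desktopjob' (Linux-specific fix)")
        else:
            print("'desktopjob' entry looks valid")

    if __name__ == "__main__":
        check_desktopjob_entry(sys.argv[1])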
Major limitation: In the Ansoft RSM environment, Large Scale DSO can only be enabled for one product.
Troubleshooting hints (Ansoft RSM environment only): the "shared drive read/write" requirement is a new constraint introduced with Large Scale DSO. If Regular DSO jobs run but Large Scale DSO jobs fail, one possible cause is that the RSM service does not have privileges to read and write the project folder located on the shared drive.
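One way to confirm this on a Linux node is to compare the permission bits of the project folder against the account the RSM service runs as. The Python sketch below is a simplified POSIX check (owner/group/other bits only; it ignores supplementary groups and ACLs), and the account name passed in is an assumption: use whatever account the RSM service is actually configured with.

    import os
    import pwd
    import stat

    def rsm_user_can_write(path, rsm_user):
        """Check whether rsm_user (the account the Ansoft RSM service runs as)
        has read/write access to the project folder on the shared drive,
        based on POSIX owner/group/other permission bits only."""
        st = os.stat(path)
        pw = pwd.getpwnam(rsm_user)
        mode = st.st_mode
        if st.st_uid == pw.pw_uid:
            ok = bool(mode & stat.S_IRUSR) and bool(mode & stat.S_IWUSR)
        elif st.st_gid == pw.pw_gid:  # primary group only; ACLs not checked
            ok = bool(mode & stat.S_IRGRP) and bool(mode & stat.S_IWGRP)
        else:
            ok = bool(mode & stat.S_IROTH) and bool(mode & stat.S_IWOTH)
        print("%s %s read/write %s" % (rsm_user, "can" if ok else "cannot", path))
        return ok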
Windows Cluster configuration
All the above steps apply, except for steps that are stated as LINUX-specific. Additional instructions:
Ansoft RSM and Ansoft products are either installed locally on each node of the cluster (i.e. local installation) OR installed on a single shared drive available to all nodes of the cluster (i.e. network installation)
Registration of 'desktopjob.exe' with RSM service:
Network installation: desktopjob.exe is registered with the RSM service once, on any node of the cluster
Local installation: Since each node has its own RSM installation, desktopjob.exe must be registered with RSM on each node.
Note (IMPORTANT): The Ansoft RSM service must be started using the credentials of a non-system 'admin' account that has read/write permissions to the project's shared drive. If the RSM service runs as the 'system' user, Large Scale DSO jobs will fail.
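On Windows, the account a service runs under can be checked with 'sc qc <service name>' and reading the SERVICE_START_NAME line; if it reports LocalSystem, the requirement above is violated. The Python sketch below simply wraps that query; the default service name 'AnsoftRSMService' is an assumption and should be replaced with the name shown in services.msc.

    import subprocess

    def rsm_service_account(service_name="AnsoftRSMService"):
        """Report the logon account of the given Windows service via 'sc qc'.
        The default service name is an assumption; use the name shown in
        services.msc. Large Scale DSO jobs fail if this reports LocalSystem."""
        out = subprocess.run(["sc", "qc", service_name],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "SERVICE_START_NAME" in line:
                account = line.split(":", 1)[1].strip()
                print("%s runs as: %s" % (service_name, account))
                return account
        print("Could not determine the service account (check the service name)")
        return None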
Heterogeneous Cluster configuration
Limitation: A heterogeneous cluster (with both Linux and Windows nodes) is currently not supported. This is due to the shared drive requirement.