HFSS15: Interconnects for HFSS Distributed Memory Simulation
To obtain the best possible performance, we recommend a network interconnect that supports communication speeds of 1000 MB/sec or higher. Some high-performance interconnects plug into a PCI (Peripheral Component Interconnect), PCI-X (PCI eXtended), or PCIe (PCI Express) slot on the system.
HFSS-IE 14 supports the following network interconnects:
| Platform | Interconnects |
| --- | --- |
| Win32 | Ethernet/GigE |
| Win64 | Ethernet/GigE (default), Myrinet, InfiniBand |
| Linux | Ethernet/GigE (default), Myrinet, InfiniBand |
Ethernet/GigE is the default interconnect on all platforms. You can choose one of the alternate interconnects by setting the ANSOFT_MPI_INTERCONNECT environment variable to "myri" for Myrinet or "ib" for InfiniBand.
Interconnect variants are supported on Linux. Set the ANSOFT_MPI_INTERCONNECT_VARIANT environment variable to the desired interconnect variant. For example, set "ANSOFT_MPI_INTERCONNECT_VARIANT=silverstorm" to use the silverstorm variant.
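As a sketch of the steps above, selecting InfiniBand with the silverstorm variant on Linux (bash) might look like the following. The variable names and values come from the text above; the actual solver launch command is omitted, since it depends on your HFSS installation:

```shell
# Choose InfiniBand as the MPI interconnect ("myri" would select Myrinet);
# if this variable is unset, the Ethernet/GigE default is used.
export ANSOFT_MPI_INTERCONNECT=ib

# On Linux only, optionally pick an interconnect variant.
export ANSOFT_MPI_INTERCONNECT_VARIANT=silverstorm

# Verify the settings before launching the distributed solve.
echo "interconnect=$ANSOFT_MPI_INTERCONNECT variant=$ANSOFT_MPI_INTERCONNECT_VARIANT"
```

On Windows, the equivalent would be `set ANSOFT_MPI_INTERCONNECT=ib` in the command prompt, or a user/system environment variable set through the System control panel.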