----------------------------------------------------------------------
KNOWN ISSUES
----------------------------------------------------------------------
### CH4
* Build Issues
- CH4 will not build with Solaris compilers.
* Stability with large buffers
  - CH4 may suffer from stability issues when operating on large
    communication buffers (>= 2 GB).
* Dynamic process support
  - Requires the fi_tagged capability in the OFI netmod (see the
    sketch after this list).
  - Not supported in the UCX netmod at this time.
* Test failures
  - CH4 does not currently pass 100% of the test suite.
  - Which tests fail varies with the experimental setup and with the
    network support built in.
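
  For the fi_tagged requirement above, a minimal sketch (not MPICH
  code; plain libfabric, compiled with -lfabric) that checks whether
  any provider on the system reports the FI_TAGGED capability; the
  API version requested is illustrative:

      #include <stdio.h>
      #include <rdma/fabric.h>

      int main(void)
      {
          struct fi_info *hints = fi_allocinfo();
          struct fi_info *info = NULL;

          /* CH4's OFI netmod needs tagged messaging for dynamic
             processes, so ask only for FI_TAGGED. */
          hints->caps = FI_TAGGED;
          if (fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info) == 0
              && info != NULL) {
              printf("tagged-capable provider: %s\n",
                     info->fabric_attr->prov_name);
              fi_freeinfo(info);
          } else {
              printf("no provider with FI_TAGGED found\n");
          }
          fi_freeinfo(hints);
          return 0;
      }
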
### OS X
* C++ bindings - Exception handling in the C++ bindings is broken
  because of a bug in libc++ (see
  https://github.com/android-ndk/ndk/issues/289). You can work
  around this issue by explicitly linking your application to
  libstdc++ instead (e.g., mpicxx -o foo foo.cxx -lstdc++).
* GCC builds - When using Homebrew or MacPorts gcc (not the default
  gcc, which is just a symlink to clang), a bug in gcc
  (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838) causes
  stack-alignment issues that in some cases lead to segmentation
  faults. You can work around this issue by building MPICH with
  clang, or by passing a lower optimization level than the default
  (-O2); see the sketch below. Currently, this bug only seems to
  show up when the --disable-error-checking flag is passed to
  configure.
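
  For example, a minimal configure sketch of the clang workaround
  (other options elided):

      ./configure CC=clang CXX=clang++

  or of lowering the optimization level:

      ./configure --enable-fast=O1
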
* Intel compilers - MPICH fails to build with ICC 18.0.0 on OS X.
### Solaris compilers
* Compilation on Solaris will freeze when MPICH is configured with
  "--enable-strict". This is due to a Solaris compiler bug related
  to a specific order of assignments to long long and short variables.
See the following ticket for more information:
https://trac.mpich.org/projects/mpich/ticket/2105
### PathScale compilers
* Due to bugs in the PathScale compiler suite, some configurations of MPICH
do not build correctly.
  - v5.0.1: When the --disable-shared configure option is passed to
    MPICH, applications will segfault.
  - v5.0.5: Unless you pass the --enable-fast=O0 configure flag to
    MPICH, applications will hang (see the sketch below).
See the following ticket for more information:
https://trac.mpich.org/projects/mpich/ticket/2104
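
  For the v5.0.5 case, a minimal configure sketch (other options
  elided):

      ./configure --enable-fast=O0
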
### Fine-grained thread safety
* ch3:sock does not (and will not) support fine-grained threading.
* MPI-IO APIs are not currently thread-safe when using fine-grained
threading (--enable-thread-cs=per-object).
* ch3:nemesis:tcp fine-grained threading is still experimental and may
have correctness or performance issues. Known correctness issues
include dynamic process support and generalized request support.
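
  As context for the fine-grained (--enable-thread-cs=per-object)
  issues above, a minimal sketch of a program that requests full
  thread support; fine-grained locking is only exercised once the
  library grants MPI_THREAD_MULTIPLE (error handling elided):

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int provided;

          MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
          if (provided < MPI_THREAD_MULTIPLE)
              printf("MPI_THREAD_MULTIPLE unavailable (got level %d)\n",
                     provided);
          /* Note: per the list above, MPI-IO calls are not
             thread-safe in per-object builds even at this level. */
          MPI_Finalize();
          return 0;
      }
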
### Lacking channel-specific features
* ch3 does not presently support communication across heterogeneous
platforms (e.g., a big-endian machine communicating with a
little-endian machine).
* ch3:nemesis:mx does not support dynamic processes at this time.
* Support for the "external32" data representation is incomplete.
  This affects the MPI_Pack_external and MPI_Unpack_external
  routines, as well as the external data representation capabilities
  of ROMIO. In particular: noncontiguous user buffers can consume
  egregious amounts of memory in the MPI library, and any type whose
  width differs between the native representation and the external32
  representation will likely cause corruption (see the sketch after
  this list). The following ticket contains some additional
  information:
  http://trac.mpich.org/projects/mpich/ticket/1754
* ch3 has known problems in some cases when threading and dynamic
processes are used together on communicators of size greater than
one.
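
  A minimal sketch of the external32 width issue mentioned above: on
  an LP64 platform a native long is 8 bytes, while external32 defines
  MPI_LONG as 4 bytes, so MPI_LONG is exactly the kind of type the
  note warns about:

      #include <stdlib.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          long val = 42;               /* 8 bytes native on LP64 */
          MPI_Aint size, position = 0;
          void *buf;

          MPI_Init(&argc, &argv);
          /* Ask how much external32-packed space one MPI_LONG needs
             (4 bytes per the external32 spec), then pack into it. */
          MPI_Pack_external_size("external32", 1, MPI_LONG, &size);
          buf = malloc(size);
          MPI_Pack_external("external32", &val, 1, MPI_LONG,
                            buf, size, &position);
          free(buf);
          MPI_Finalize();
          return 0;
      }
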
### Process Managers
* Hydra has a bug related to stdin handling:
https://trac.mpich.org/projects/mpich/ticket/1782
### Performance issues
* In select cases, SMP-aware collectives do not perform as well as
  non-SMP-aware collectives; e.g., MPI_Reduce with message sizes
  larger than 64 KiB. They can be disabled by setting the
  environment variable MPIR_CVAR_ENABLE_SMP_COLLECTIVES to 0.
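  For example (the launcher invocation and application name here are
  illustrative):

      MPIR_CVAR_ENABLE_SMP_COLLECTIVES=0 mpiexec -n 8 ./app
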
* MPI_Irecv operations that are not explicitly completed before
  MPI_Finalize is called may fail to complete before MPI_Finalize
  returns, and thus never complete. Furthermore, any matching send
  operations may erroneously fail. By "explicitly completed" we mean
  that the request associated with the operation is completed by one
  of the MPI_Test or MPI_Wait routines.
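
  A minimal sketch of explicit completion (run with at least two
  processes; the ranks, tag, and payload here are illustrative):

      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int rank, buf = 0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          if (rank == 0) {
              MPI_Request req;
              MPI_Irecv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
              /* Explicitly complete the request before MPI_Finalize;
                 otherwise it may never complete and the matching send
                 may erroneously fail. */
              MPI_Wait(&req, MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              buf = 42;
              MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          }
          MPI_Finalize();
          return 0;
      }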