Since the previous lines created the local portions of the matrix, we have to trigger the synchronization
between the workers.
```julia
111
-
assemble!(A)
126
+
assemble!(A)
112
127
```
Construct the right hand side. Note that the first entry of the rhs of worker 2
is shared with worker 1.
```julia
b = PVector{Float64}(undef, A.rows)
map_parts(parts, local_view(b, b.rows)) do part, b_local
    if part == 1
        b_local .= [1.0, -1.0, 0.0]
    else
        b_local .= [0.0, 0.0, 0.0]
    end
end
```
Now the sparse matrix and right hand side of the linear system are assembled
globally and we can solve the problem with `cg`. With the `end` in the last line we
close the parallel environment.
```julia
u = IterativeSolvers.cg(A, b)
```
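The solution `u` is distributed among the workers in the same way as `b`. As a quick sanity check, each worker can print the portion of the solution it stores locally, reusing the `map_parts`/`local_view` pattern from above. This is a minimal sketch; it assumes the vector returned by `cg` exposes its row partitioning through a `rows` field, just like `b`.

```julia
# Print the locally stored entries of the solution on each worker
map_parts(parts, local_view(u, u.rows)) do part, u_local
    @show part u_local
end
```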
### Parallel Code

Turning the sequential code into a parallel one amounts to changing the backend to

```julia
backend = MPIBackend()
```
and including and initializing MPI. Now launching the script with MPI makes the run parallel.
```sh
$ mpirun -n 2 julia my-script.jl
```
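For reference, including and initializing MPI by hand amounts to roughly the following minimal sketch; it is only needed when the script does not rely on a helper such as `prun` (used in the full code below) to do this automatically.

```julia
# Load MPI.jl and initialize MPI before any communication happens.
# Not needed when a helper such as prun performs this step for us.
using MPI
MPI.Init()
```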

Hence the full MPI code is given in the next code box. Note that we have used the `prun` function that automatically includes and initializes MPI for us.
```julia
using PartitionedArrays, SparseArrays, IterativeSolvers

np = 2 # number of parts (one per MPI rank)
backend = MPIBackend()

prun(backend, np) do parts
    # Construct the partitioning
    neighbors, row_partitioning, col_partitioning = map_parts(parts) do part