50 changes: 47 additions & 3 deletions cpp/dolfinx/fem/assembler.h
@@ -575,6 +575,29 @@ void set_diagonal(auto set_fn, std::span<const std::int32_t> rows,
}
}

/// @brief Sets values to the diagonal of a matrix for specified rows.
///
/// This function is typically called after assembly. The assembly
/// function zeroes Dirichlet rows and columns. For block matrices, this
/// function should normally be called only on the diagonal blocks, i.e.
/// blocks for which the test and trial spaces are the same.
///
/// @param[in] set_fn The function for setting values to a matrix.
/// @param[in] rows Row blocks, in local indices, for which to add a
/// value to the diagonal.
/// @param[in] diagonal Values to add to the diagonal for the specified
/// rows.
template <dolfinx::scalar T>
void set_diagonal(auto set_fn, std::span<const std::int32_t> rows,
std::span<const T> diagonal)
{
assert(diagonal.size() == rows.size());
for (std::size_t i = 0; i < rows.size(); ++i)
{
set_fn(rows.subspan(i, 1), rows.subspan(i, 1), diagonal.subspan(i, 1));
}
}
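Reading aid, not part of the diff: set_fn above is any callable taking row, column and value spans, so the loop writes diagonal[i] into entry (rows[i], rows[i]). A minimal illustration with a dense stand-in matrix follows; the dense storage and the int return value (typical of DOLFINx setter functors) are assumptions, not code from this PR.

#include <dolfinx/fem/assembler.h>
#include <cstdint>
#include <span>
#include <vector>

void example_set_diagonal()
{
  std::size_t n = 10;
  std::vector<double> A(n * n, 0.0); // dense stand-in for a sparse matrix
  auto set_fn = [&A, n](std::span<const std::int32_t> r,
                        std::span<const std::int32_t> c,
                        std::span<const double> v) -> int
  {
    // Insert the block of values v at the (r, c) positions
    for (std::size_t i = 0; i < r.size(); ++i)
      for (std::size_t j = 0; j < c.size(); ++j)
        A[r[i] * n + c[j]] = v[i * c.size() + j];
    return 0;
  };
  std::vector<std::int32_t> rows{1, 4, 7};
  std::vector<double> diag{0.5, 0.5, 1.0};
  // Each iteration of the overload above touches one (row, row) entry.
  dolfinx::fem::set_diagonal<double>(set_fn, rows, std::span<const double>(diag));
}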

/// @brief Sets a value to the diagonal of the matrix for rows with
/// Dirichlet boundary conditions applied.
///
@@ -587,22 +610,43 @@ void set_diagonal(auto set_fn, std::span<const std::int32_t> rows,
/// @param[in] set_fn The function for setting values to a matrix.
/// @param[in] V The function space for the rows and columns of the
/// matrix. It is used to extract only the Dirichlet boundary conditions
/// that are define on V or subspaces of V.
/// that are defined on V or subspaces of V.
/// @param[in] bcs The Dirichlet boundary conditions.
/// @param[in] diagonal Value to add to the diagonal for rows with a
/// boundary condition applied.
/// @param[in] unassembled Whether the matrix is in unassembled (MATIS) format.
template <dolfinx::scalar T, std::floating_point U>
void set_diagonal(
auto set_fn, const FunctionSpace<U>& V,
const std::vector<std::reference_wrapper<const DirichletBC<T, U>>>& bcs,
T diagonal = 1.0)
T diagonal = 1.0, bool unassembled = false)
Member:

This looks problematic - quite a few types can be cast to T and bool, very easy for a user to make an error.

Probably better to make unassembled an enum. Easier to read too.
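A rough sketch of what the enum-based signature could look like; the enum name and values are hypothetical and not part of this PR or DOLFINx.

// Hypothetical alternative to the bool flag discussed above; names are
// illustrative only.
enum class DiagonalAssemblyKind
{
  assembled,  // e.g. MATAIJ: values can be inserted directly
  unassembled // e.g. MATIS: values must be scaled and accumulated
};

template <dolfinx::scalar T, std::floating_point U>
void set_diagonal(
    auto set_fn, const FunctionSpace<U>& V,
    const std::vector<std::reference_wrapper<const DirichletBC<T, U>>>& bcs,
    T diagonal = 1.0,
    DiagonalAssemblyKind kind = DiagonalAssemblyKind::assembled);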

stefanozampini (Author), Aug 30, 2025:

Having an enum for a single function seems overkill to me.
If there's a way to extract back the Mat from set_fn, we could test if it is a MATIS inside the function, and we won't need the extra optional argument. @garth-wells what do you think?

Member:

In the branch I made, I use an insertion mode with add or insert to distinguish between the two, which resonates well with the scatter_fwd/scatter_reverse modes we have in the Python interface for vectors.

stefanozampini (Author), Sep 1, 2025:

I looked at your branch. I don't think it is correct for MATAIJ. set_diagonal uses (correctly) set_fn = dolfinx::la::petsc::Matrix::set_fn(A, INSERT_VALUES), and the values won't be accumulated into MATAIJ. I think it is best to use an optional boolean that clearly indicates we want to insert into an unassembled (MATIS) matrix. All these problems appear because the interface for set_diagonal only takes set_fn and not the matrix directly.

Author:

Alternatively, use the add_values formalism everywhere in matrix assembly (above all, this is FEM :-)). The optimization you get by not communicating the values to be inserted into the diagonal is minimal, @garth-wells?

Member:

> I looked at your branch. I don't think it is correct for MATAIJ. set_diagonal uses (correctly) set_fn = dolfinx::la::petsc::Matrix::set_fn(A, INSERT_VALUES), and the values won't be accumulated into MATAIJ. I think it is best to use an optional boolean that clearly indicates we want to insert into an unassembled (MATIS) matrix. All these problems appear because the interface for set_diagonal only takes set_fn and not the matrix directly.

Right, but if we modify the input to the set_fn to then use ADD_VALUES, it would resolve the issue, right? I.e.:

        PetscBool flg;
        PetscObjectTypeCompare((PetscObject)A, MATIS, &flg);
        dolfinx::la::VectorInsertMode mode
            = flg ? dolfinx::la::VectorInsertMode::add
                  : dolfinx::la::VectorInsertMode::insert;
        InsertMode petsc_mode = flg ? ADD_VALUES : INSERT_VALUES;
        dolfinx::fem::set_diagonal(
            dolfinx::la::petsc::Matrix::set_fn(A, petsc_mode), V, _bcs,
            diagonal);
            diagonal, mode);
      },

jorgensd (Member), Sep 1, 2025:

Thinking more about it, I kind of agree that we should only have one approach, which preferably is the add mode, as the approach of assuming that two independent input arguments are set correctly by the user is likely to lead to a lot of confusion.

Having only a single approach, accumulating data from all processes as 1/num_shared_procs, seems sensible.

The reason for DOLFINx not interfacing directly with the PETSc matrix is that it allows us to have a single code path for multiple matrix types, like our own built-in matrices, and potentially Trilinos, without having to change the set_diagonal and dolfinx::fem::assemble_* functions.

Author:

I'm fine with any solution you think is reasonable. Note that if you switch back to add values, you won't need to flush assembly anymore in the fem callbacks. And you can still use the optimization of adding only to diagonal entries in AIJ format (since the value on the diagonal before the call is 0.0, so add or insert is the same).

> Right, but if we modify the input to the set_fn to then use ADD_VALUES, it would resolve the issue, right? I.e.:

If you want equivalent behaviour, it should be the opposite way, i.e. petsc_mode = !flg ? ADD_VALUES : INSERT_VALUES. This is because MATIS inserts directly into the local matrix. Matrix-vector products in MATIS are done via R^T A R, where R restricts from global to local, A does the local multiplications, and R^T sums up the local contributions.
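For readers unfamiliar with MATIS, the operator structure mentioned above can be written out; this is the standard description of unassembled (domain-decomposition) formats, not text from this PR:

    y = A x = \sum_{i=1}^{N} R_i^{T} A_i R_i x

where A_i is the locally assembled matrix on process i, R_i restricts a global vector to the local degrees of freedom of process i, and R_i^{T} scatters local contributions back to the global vector, summing where dofs are shared. A global diagonal entry for a dof shared by N_k processes therefore receives the sum of the N_k local values, which is why this PR scales the inserted diagonal value by 1/N_k on each sharing process.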

Member:

Right, I got that the wrong way around.

{
for (auto& bc : bcs)
{
if (V.contains(*bc.get().function_space()))
{
const auto [dofs, range] = bc.get().dof_indices();
set_diagonal(set_fn, dofs.first(range), diagonal);
if (unassembled)
{
// We need to insert on ghost indices too; since MATIS is not
// designed to support the INSERT_VALUES stage, we scale ALL
// the diagonal values we insert into the matrix
// TODO: move scaling factor computation to DirichletBC or IndexMap?
auto adjlist = bc.get().function_space()->dofmap()->index_map->index_to_dest_ranks();
const auto off = adjlist.offsets();
std::vector<std::int32_t> number_of_sharing_ranks(off.size());
for (size_t i = 0; i < off.size() - 1; ++i)
number_of_sharing_ranks[i] = off[i+1] - off[i] + 1;
Comment on lines +635 to +639
stefanozampini (Author), Aug 28, 2025:

Ideally, this setup code should be moved to IndexMap.
I was thinking of implementing it lazily, with a mutable class member, but I see you don't use mutable anywhere in dolfinx. Would it be something you would accept?

jorgensd (Member), Aug 28, 2025:

Hi Stefano,
I know I suggested this function on discourse. However, as I now see the use case, what I think we should do is:

  1. Create an la::Vector on the index map.
  2. Fill the vector with 1 (including ghosts).
  3. Scatter reverse (add).
  4. Scatter forward.

That would give you an array of the number of sharing processes per dof.
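A minimal sketch of steps 1-4 above, based on my reading of the la::Vector API; the method names mutable_array, scatter_rev and scatter_fwd are assumptions and should be adjusted if the actual interface differs.

#include <dolfinx/common/IndexMap.h>
#include <dolfinx/la/Vector.h>
#include <algorithm>
#include <functional>
#include <memory>

// Count how many processes share each dof by scattering ones.
template <typename T>
dolfinx::la::Vector<T>
count_sharing_processes(std::shared_ptr<const dolfinx::common::IndexMap> map,
                        int bs)
{
  dolfinx::la::Vector<T> v(map, bs);
  auto x = v.mutable_array();
  std::fill(x.begin(), x.end(), T(1)); // owned and ghost entries set to 1
  v.scatter_rev(std::plus<T>());       // accumulate ghost contributions on owners
  v.scatter_fwd();                     // send the totals back to the ghosts
  return v;                            // v[i] == number of processes sharing dof i
}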

Member:

This could probably be a vector we compute once inside the DirichletBC class. What do you think @garth-wells?

stefanozampini (Author), Aug 29, 2025:

@jorgensd Can you paste some code here to do it? I will wait for @garth-wells' confirmation and move the setup of the counting vector into the IndexMap or DirichletBC class, your call.

Member:

I'm without a computer until Sunday. @IgorBaratta could you do me a solid and make a simple example for Stefano? :)

Member:

We need a way to divide the insertion of 1 on the diagonal by the number of processes that share the dof, as MATIS cannot use insert and has to add the values. I.e. if a DirichletBC dof is shared between N processes, we assign 1/N on each process.

I suggest we use a vector, stored in the DirichletBC, to keep track of this, as illustrated with the snippet above.

Author:

@jorgensd summarized it properly. If you want to keep the same workflow for assembled and unassembled matrices, the counting vector is the only way to realize it. I think such information should belong to IndexMap instead of DirichletBC, but I'm fine either way.

Member:

@stefanozampini @garth-wells I've made an attempt at this at: https://github.com/FEniCS/dolfinx/tree/dokken/suggested-matis
I'll have a go at installing PETSc main locally and run this, to see if I can get the demo to run :)

Member:

A likely good approach is consistent handling of the LHS and RHS, i.e. use accumulate rather than set for the bc on the RHS. We did this in legacy DOLFIN. I'd need to look closely at the code for how this might work with the latest design.

jorgensd (Member), Sep 1, 2025:

I think it should be quite straightforward with the approach suggested in my branch above, as it lets the user choose the insertion mode of the diagonal (which can trivially be extended to set_bc), as the DirichletBC itself holds the information about the count.

std::vector<T> data(dofs.size());
for (size_t i = 0; i < dofs.size(); ++i)
data[i] = diagonal / T(number_of_sharing_ranks[dofs[i]]);
std::span<const T> data_span = std::span(data);
set_diagonal(set_fn, dofs, data_span);
}
else
{
set_diagonal(set_fn, dofs.first(range), diagonal);
}
}
}
}
28 changes: 10 additions & 18 deletions cpp/dolfinx/fem/petsc.h
@@ -122,12 +122,6 @@ Mat create_matrix_block(
la::SparsityPattern pattern(mesh->comm(), p, maps, bs_dofs);
pattern.finalize();

// FIXME: Add option to pass customised local-to-global map to PETSc
// Mat constructor

// Initialise matrix
Mat A = la::petsc::create_matrix(mesh->comm(), pattern, type);

// Create row and column local-to-global maps (field0, field1, field2,
// etc), i.e. ghosts of field0 appear before owned indices of field1
std::array<std::vector<PetscInt>, 2> _maps;
@@ -161,29 +155,27 @@
}

// Create PETSc local-to-global map/index sets and attach to matrix
ISLocalToGlobalMapping petsc_local_to_global0;
ISLocalToGlobalMappingCreate(MPI_COMM_SELF, 1, _maps[0].size(),
ISLocalToGlobalMapping petsc_local_to_global0, petsc_local_to_global1;
ISLocalToGlobalMappingCreate(mesh->comm(), 1, _maps[0].size(),
_maps[0].data(), PETSC_COPY_VALUES,
&petsc_local_to_global0);
if (V[0] == V[1])
{
MatSetLocalToGlobalMapping(A, petsc_local_to_global0,
petsc_local_to_global0);
ISLocalToGlobalMappingDestroy(&petsc_local_to_global0);
PetscObjectReference((PetscObject)petsc_local_to_global0);
petsc_local_to_global1 = petsc_local_to_global0;
}
else
{

ISLocalToGlobalMapping petsc_local_to_global1;
ISLocalToGlobalMappingCreate(MPI_COMM_SELF, 1, _maps[1].size(),
ISLocalToGlobalMappingCreate(mesh->comm(), 1, _maps[1].size(),
_maps[1].data(), PETSC_COPY_VALUES,
&petsc_local_to_global1);
MatSetLocalToGlobalMapping(A, petsc_local_to_global0,
petsc_local_to_global1);
ISLocalToGlobalMappingDestroy(&petsc_local_to_global0);
ISLocalToGlobalMappingDestroy(&petsc_local_to_global1);
}

// Initialise matrix
Mat A = la::petsc::create_matrix(mesh->comm(), pattern, type, petsc_local_to_global0, petsc_local_to_global1);
ISLocalToGlobalMappingDestroy(&petsc_local_to_global0);
ISLocalToGlobalMappingDestroy(&petsc_local_to_global1);

return A;
}

155 changes: 114 additions & 41 deletions cpp/dolfinx/la/petsc.cpp
@@ -32,7 +32,7 @@ using namespace dolfinx::la;
} while (0)

//-----------------------------------------------------------------------------
void la::petsc::error(int error_code, const std::string& filename,
void la::petsc::error(PetscErrorCode error_code, const std::string& filename,
const std::string& petsc_function)
{
// Fetch PETSc error description
@@ -42,7 +42,7 @@ void la::petsc::error(int error_code, const std::string& filename,
// Log detailed error info
spdlog::info("PETSc error in '{}', '{}'", filename.c_str(),
petsc_function.c_str());
spdlog::info("PETSc error code '{}' '{}'", error_code, desc);
spdlog::info("PETSc error code '{}' '{}'", (int)error_code, desc);
throw std::runtime_error("Failed to successfully call PETSc function '"
+ petsc_function + "'. PETSc error code is: "
+ std ::to_string(error_code) + ", "
@@ -147,6 +147,52 @@ std::vector<IS> la::petsc::create_index_sets(
return is;
}
//-----------------------------------------------------------------------------
std::vector<IS> la::petsc::create_global_index_sets(
const std::vector<
std::pair<std::reference_wrapper<const common::IndexMap>, int>>& maps)
{
std::vector<IS> is;

std::int64_t offset = 0;
std::int64_t merged_local_size = 0;
MPI_Comm comm = MPI_COMM_NULL;

for (auto& map : maps)
{
if (comm == MPI_COMM_NULL)
{
comm = map.first.get().comm();
}
int result;
MPI_Comm_compare(comm, map.first.get().comm(), &result);
if (result != MPI_IDENT && result != MPI_CONGRUENT)
{
throw std::runtime_error("Index maps must be on the same communicator.");
}
int bs = map.second;
std::int32_t size = map.first.get().size_local();
merged_local_size += size * bs;
}
if (comm == MPI_COMM_NULL)
return is;

int ierr = MPI_Exscan(&merged_local_size, &offset, 1, MPI_INT64_T,
MPI_SUM, comm);
dolfinx::MPI::check_error(comm, ierr);

for (auto& map : maps)
{
int bs = map.second;
std::int32_t size = map.first.get().size_local();
IS _is;
ISCreateStride(map.first.get().comm(), bs * size, offset, 1, &_is);
is.push_back(_is);
offset += bs * size;
}

return is;
}
//-----------------------------------------------------------------------------
std::vector<std::vector<PetscScalar>> la::petsc::get_local_vectors(
const Vec x,
const std::vector<
@@ -232,7 +278,9 @@ void la::petsc::scatter_local_vectors(
}
//-----------------------------------------------------------------------------
Mat la::petsc::create_matrix(MPI_Comm comm, const SparsityPattern& sp,
std::optional<std::string> type)
std::optional<std::string> type,
std::optional<ISLocalToGlobalMapping> rlgmap,
std::optional<ISLocalToGlobalMapping> clgmap)
{
PetscErrorCode ierr;
Mat A;
Expand All @@ -245,7 +293,20 @@ Mat la::petsc::create_matrix(MPI_Comm comm, const SparsityPattern& sp,
const std::array bs = {sp.block_size(0), sp.block_size(1)};

if (type)
MatSetType(A, type->c_str());
{
if (type == std::string("mpi"))
{
ierr = MatSetType(A, MATAIJ);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetType");
}
else
{
ierr = MatSetType(A, type->c_str());
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetType");
}
}

// Get global and local dimensions
const std::int64_t M = bs[0] * maps[0]->size_global();
@@ -289,63 +350,75 @@ Mat la::petsc::create_matrix(MPI_Comm comm, const SparsityPattern& sp,
_nnz_offdiag[i] = bs[1] * sp.nnz_off_diag(i / bs[0]);
}

// Allocate space for matrix
ierr = MatXAIJSetPreallocation(A, _bs, _nnz_diag.data(), _nnz_offdiag.data(),
nullptr, nullptr);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatXIJSetPreallocation");

// Set block sizes
ierr = MatSetBlockSizes(A, bs[0], bs[1]);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetBlockSizes");

// Create PETSc local-to-global map/index sets
ISLocalToGlobalMapping local_to_global0;
const std::vector map0 = maps[0]->global_indices();
const std::vector<PetscInt> _map0(map0.begin(), map0.end());
ierr = ISLocalToGlobalMappingCreate(MPI_COMM_SELF, bs[0], _map0.size(),
_map0.data(), PETSC_COPY_VALUES,
&local_to_global0);

if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingCreate");
ISLocalToGlobalMapping local_to_global0, local_to_global1 = NULL;
if (rlgmap)
{
ierr = PetscObjectReference((PetscObject)rlgmap.value());
if (ierr != 0)
petsc::error(ierr, __FILE__, "PetscObjectReference");
local_to_global0 = rlgmap.value();
}
else
{
const std::vector map0 = maps[0]->global_indices();
const std::vector<PetscInt> _map0(map0.begin(), map0.end());
ierr = ISLocalToGlobalMappingCreate(comm, bs[0], _map0.size(),
_map0.data(), PETSC_COPY_VALUES,
&local_to_global0);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingCreate");
}

// Check for common index maps
if (maps[0] == maps[1] and bs[0] == bs[1])
if (maps[0] == maps[1] and bs[0] == bs[1] and !clgmap)
{
ierr = MatSetLocalToGlobalMapping(A, local_to_global0, local_to_global0);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetLocalToGlobalMapping");
}
else
{
ISLocalToGlobalMapping local_to_global1;
const std::vector map1 = maps[1]->global_indices();
const std::vector<PetscInt> _map1(map1.begin(), map1.end());
ierr = ISLocalToGlobalMappingCreate(MPI_COMM_SELF, bs[1], _map1.size(),
_map1.data(), PETSC_COPY_VALUES,
&local_to_global1);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingCreate");
if (clgmap)
{
ierr = PetscObjectReference((PetscObject)clgmap.value());
if (ierr != 0)
petsc::error(ierr, __FILE__, "PetscObjectReference");
local_to_global1 = clgmap.value();
}
else
{
const std::vector map1 = maps[1]->global_indices();
const std::vector<PetscInt> _map1(map1.begin(), map1.end());
ierr = ISLocalToGlobalMappingCreate(comm, bs[1], _map1.size(),
_map1.data(), PETSC_COPY_VALUES,
&local_to_global1);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingCreate");
}
ierr = MatSetLocalToGlobalMapping(A, local_to_global0, local_to_global1);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetLocalToGlobalMapping");
ierr = ISLocalToGlobalMappingDestroy(&local_to_global1);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingDestroy");
}

// Clean up local-to-global 0
ierr = ISLocalToGlobalMappingDestroy(&local_to_global0);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingDestroy");
ierr = ISLocalToGlobalMappingDestroy(&local_to_global1);
if (ierr != 0)
petsc::error(ierr, __FILE__, "ISLocalToGlobalMappingDestroy");

// Note: This should be called after having set the local-to-global
// map for MATIS (this is a dummy call if A is not of type MATIS)
// ierr = MatISSetPreallocation(A, 0, _nnz_diag.data(), 0,
// _nnz_offdiag.data()); if (ierr != 0)
// error(ierr, __FILE__, "MatISSetPreallocation");
// Allocate space for matrix
ierr = MatXAIJSetPreallocation(A, _bs, _nnz_diag.data(), _nnz_offdiag.data(),
nullptr, nullptr);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatXIJSetPreallocation");

// Set block sizes
ierr = MatSetBlockSizes(A, bs[0], bs[1]);
if (ierr != 0)
petsc::error(ierr, __FILE__, "MatSetBlockSizes");

// Set some options on Mat object
ierr = MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);
22 changes: 19 additions & 3 deletions cpp/dolfinx/la/petsc.h
@@ -35,7 +35,7 @@ class SparsityPattern;
namespace petsc
{
/// Print error message for PETSc calls that return an error
void error(int error_code, const std::string& filename,
void error(PetscErrorCode error_code, const std::string& filename,
const std::string& petsc_function);

/// Create PETSc vectors from the local data. The data is copied into
@@ -100,11 +100,25 @@ Vec create_vector_wrap(const la::Vector<V>& x)
/// @note The caller is responsible for destruction of each IS.
///
/// @param[in] maps Vector of IndexMaps and corresponding block sizes
/// @return Vector of PETSc Index Sets, created on` PETSC_COMM_SELF`
/// @return Vector of PETSc Index Sets, created on `PETSC_COMM_SELF`
std::vector<IS> create_index_sets(
const std::vector<
std::pair<std::reference_wrapper<const common::IndexMap>, int>>& maps);

/// @brief Compute PETSc IndexSets (IS) for a stack of index maps.
///
/// This function stacks the owned part of the maps and returns
/// indices in the global space. The maps must have the same communicator.
///
/// @note Collective
/// @note The caller is responsible for destruction of each IS.
///
/// @param[in] maps Vector of IndexMaps and corresponding block sizes
/// @return Vector of PETSc Index Sets, created on the index map communicators
std::vector<IS> create_global_index_sets(
const std::vector<
std::pair<std::reference_wrapper<const common::IndexMap>, int>>& maps);
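For context, a rough usage sketch of the new helper (not part of the diff): the index maps, block sizes and the PCFIELDSPLIT mention are placeholders for illustration; error checking is omitted.

#include <dolfinx/common/IndexMap.h>
#include <dolfinx/la/petsc.h>
#include <functional>
#include <utility>
#include <vector>

// Build global index sets for two stacked fields and release them.
void example(const dolfinx::common::IndexMap& map0,
             const dolfinx::common::IndexMap& map1)
{
  std::vector<std::pair<std::reference_wrapper<const dolfinx::common::IndexMap>, int>>
      maps = {{map0, 1}, {map1, 3}}; // block sizes 1 and 3, arbitrary here
  std::vector<IS> is = dolfinx::la::petsc::create_global_index_sets(maps);
  // ... pass the IS objects to PETSc, e.g. as fields of a PCFIELDSPLIT ...
  for (IS& s : is)
    ISDestroy(&s); // caller is responsible for destruction
}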

/// Copy blocks from Vec into local arrays
std::vector<std::vector<PetscScalar>> get_local_vectors(
const Vec x,
Expand All @@ -120,7 +134,9 @@ void scatter_local_vectors(
/// Create a PETSc Mat. Caller is responsible for destroying the
/// returned object.
Mat create_matrix(MPI_Comm comm, const SparsityPattern& sp,
std::optional<std::string> type = std::nullopt);
std::optional<std::string> type = std::nullopt,
std::optional<ISLocalToGlobalMapping> rlgmap = std::nullopt,
std::optional<ISLocalToGlobalMapping> clgmap = std::nullopt);

/// Create PETSc MatNullSpace. Caller is responsible for destruction
/// returned object.