Merge pull request #1348 from ginkgo-project/fix_typos
Fix typos
greole authored Jul 24, 2023
2 parents 15a7e28 + 9f05f6f commit 24223b4
Showing 92 changed files with 240 additions and 160 deletions.
23 changes: 23 additions & 0 deletions .github/_typos.toml
@@ -0,0 +1,23 @@
[files]
extend-exclude = ["third_party/*", "*.svg"]

[default.extend-words]
dout = "dout"
nd = "nd"
tht = "tht"
automatical = "automatical"
strat = "strat"
entrie = "entrie"
agregate = "agregate" # since that script name is already in ginkgo-data repo

[default.extend-identifiers]
set_complex_subpsace = "set_complex_subpsace" # remove when deprecated function is gone
HSA_HEADER = "HSA_HEADER"
conj_operaton = "conj_operaton" # considered interface break in range.hpp
imag_operaton = "imag_operaton" # considered interface break in range.hpp
real_operaton = "real_operaton" # considered interface break in range.hpp
one_operaton = "one_operaton" # considered interface break in range.hpp
abs_operaton = "abs_operaton" # considered interface break in range.hpp
max_operaton = "max_operaton" # considered interface break in range.hpp
min_operaton = "min_operaton" # considered interface break in range.hpp
squared_norm_operaton = "squared_norm_operaton" # considered interface break in range.hpp
16 changes: 16 additions & 0 deletions .github/workflows/spell_check.yml
@@ -0,0 +1,16 @@
name: Test GitHub Action
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  run:
    name: Spell Check with Typos
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Check for typos
        uses: crate-ci/typos@master
        with:
          config: .github/_typos.toml

14 changes: 7 additions & 7 deletions CHANGELOG.md
@@ -215,7 +215,7 @@ Supported systems and requirements:
+ Add reduce_add for arrays ([#831](https://github.com/ginkgo-project/ginkgo/pull/831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](https://github.com/ginkgo-project/ginkgo/pull/1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](https://github.com/ginkgo-project/ginkgo/pull/1123))
- + Make IDR random initilization deterministic ([#1116](https://github.com/ginkgo-project/ginkgo/pull/1116))
+ + Make IDR random initialization deterministic ([#1116](https://github.com/ginkgo-project/ginkgo/pull/1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](https://github.com/ginkgo-project/ginkgo/pull/1088))
+ Update CUDA archCoresPerSM ([#1175](https://github.com/ginkgo-project/ginkgo/pull/1116))
+ Add kernels for Csr sparsity pattern lookup ([#994](https://github.com/ginkgo-project/ginkgo/pull/994))
@@ -620,7 +620,7 @@ page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).


### Additions
- + Upper and lower triangular solvers ([#327](https://github.com/ginkgo-project/ginkgo/issues/327), [#336](https://github.com/ginkgo-project/ginkgo/issues/336), [#341](https://github.com/ginkgo-project/ginkgo/issues/341), [#342](https://github.com/ginkgo-project/ginkgo/issues/342))
+ + Upper and lower triangular solvers ([#327](https://github.com/ginkgo-project/ginkgo/issues/327), [#336](https://github.com/ginkgo-project/ginkgo/issues/336), [#341](https://github.com/ginkgo-project/ginkgo/issues/341), [#342](https://github.com/ginkgo-project/ginkgo/issues/342))
+ New factorization support in Ginkgo, and addition of the ParILU
algorithm ([#305](https://github.com/ginkgo-project/ginkgo/issues/305), [#315](https://github.com/ginkgo-project/ginkgo/issues/315), [#319](https://github.com/ginkgo-project/ginkgo/issues/319), [#324](https://github.com/ginkgo-project/ginkgo/issues/324))
+ New ILU preconditioner ([#348](https://github.com/ginkgo-project/ginkgo/issues/348), [#353](https://github.com/ginkgo-project/ginkgo/issues/353))
@@ -632,7 +632,7 @@ page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).
+ Allow benchmarking CuSPARSE spmv formats through Ginkgo's benchmarks ([#303](https://github.com/ginkgo-project/ginkgo/issues/303))
+ New benchmark for sparse matrix format conversions ([#312](https://github.com/ginkgo-project/ginkgo/issues/312)[#317](https://github.com/ginkgo-project/ginkgo/issues/317))
+ Add conversions between CSR and Hybrid formats ([#302](https://github.com/ginkgo-project/ginkgo/issues/302), [#310](https://github.com/ginkgo-project/ginkgo/issues/310))
- + Support for sorting rows in the CSR format by column idices ([#322](https://github.com/ginkgo-project/ginkgo/issues/322))
+ + Support for sorting rows in the CSR format by column indices ([#322](https://github.com/ginkgo-project/ginkgo/issues/322))
+ Addition of a CUDA COO SpMM kernel for improved performance ([#345](https://github.com/ginkgo-project/ginkgo/issues/345))
+ Addition of a LinOp to handle perturbations of the form (identity + scalar *
basis * projector) ([#334](https://github.com/ginkgo-project/ginkgo/issues/334))
@@ -847,7 +847,7 @@ Ginkgo 1.0.0 is brought to you by:

**Karlsruhe Institute of Technology**, Germany
**Universitat Jaume I**, Spain
- **University of Tennessee, Knoxville**, US
+ **University of Tennessee, Knoxville**, US

These universities, along with various project grants, supported the development team and provided resources needed for the development of Ginkgo.

@@ -859,7 +859,7 @@ Ginkgo 1.0.0 contains contributions from:
**Goran Flegar**, Universitat Jaume I
**Fritz Göbel**, Karlsruhe Institute of Technology
**Thomas Grützmacher**, Karlsruhe Institute of Technology
- **Pratik Nayak**, Karlsruhe Institue of Technologgy
+ **Pratik Nayak**, Karlsruhe Institute of Technology
**Tobias Ribizel**, Karlsruhe Institute of Technology
**Yuhsiang Tsai**, National Taiwan University

@@ -869,11 +869,11 @@ Supporting materials are provided by the following individuals:
**Frithjof Fleischhammer** - the Ginkgo website

The development team is grateful to the following individuals for discussions and comments:

**Erik Boman**
**Jelena Držaić**
**Mike Heroux**
**Mark Hoemmen**
- **Timo Heister**
+ **Timo Heister**
**Jens Saak**

2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -68,7 +68,7 @@ endif()
set(GINKGO_CUDA_COMPILER_FLAGS "" CACHE STRING
"Set the required NVCC compiler flags, mainly used for warnings. Current default is an empty string")
set(GINKGO_CUDA_ARCHITECTURES "Auto" CACHE STRING
"A list of target NVIDIA GPU achitectures. See README.md for more detail.")
"A list of target NVIDIA GPU architectures. See README.md for more detail.")
option(GINKGO_CUDA_DEFAULT_HOST_COMPILER "Tell Ginkgo to not automatically set the CUDA host compiler" OFF)
# the details of fine/coarse grain memory and unsafe atomic are available https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html#floating-point-fp-atomic-operations-and-coarse-fine-grained-memory-allocations
option(GINKGO_HIP_AMD_UNSAFE_ATOMIC "Compiler uses unsafe floating point atomic (only for AMD GPU and ROCM >= 5). Default is ON because we use hipMalloc, which is always on coarse grain. Must turn off when allocating memory on fine grain" ON)
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -312,7 +312,7 @@ Thus, contributors should be aware of the following rules for blank lines:
However, simply calling function `f` from function `g` does not imply
that `f` and `g` are "related".
2. Statements within structures / classes are separated with 1 blank line.
- There are no blank lines betweeen the first / last statement in the
+ There are no blank lines between the first / last statement in the
structure / class.
1. _exception_: there is no blank line between an access modifier (`private`, `protected`, `public`) and the following statement.
_example_:
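The example that follows rule 2 in CONTRIBUTING.md is truncated in this view. A minimal hypothetical sketch of the blank-line rules above (not the original example from the file) might look like this:

    class example_class {
    public:
        example_class() = default;  // no blank line after the access modifier

        void apply();  // statements are separated by one blank line

    private:
        int data_;  // no blank line before the closing brace of the class
    };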
2 changes: 1 addition & 1 deletion accessor/accessor_helper.hpp
@@ -78,7 +78,7 @@ struct row_major_helper_s {
const std::array<SizeType, (total_dim > 1 ? total_dim - 1 : 0)>& stride,
IndexType first, Indices&&... idxs)
{
- // The ASSERT size check must NOT be indexed with `dim_idx` directy,
+ // The ASSERT size check must NOT be indexed with `dim_idx` directly,
// otherwise, it leads to a linker error. The reason is likely that
// `std::array<size_type, N>::operator[](const size_type &)` uses a
// reference. Since `dim_idx` is constexpr (and not defined in a
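For background on the comment above: under pre-C++17 rules, binding a static constexpr member to a const reference odr-uses it, and without an out-of-line definition the program can fail at link time. A small hypothetical sketch of that effect (simplified, not the Ginkgo code, which the comment itself only suspects is affected through std::array's operator[]):

    #include <cstddef>

    struct config {
        static constexpr std::size_t dim_idx = 1;  // declaration only, no out-of-line definition
    };

    // Takes its argument by const reference, like the operator[] suspected above.
    std::size_t use_by_ref(const std::size_t& i) { return i; }

    std::size_t example()
    {
        // use_by_ref(config::dim_idx);  // odr-uses dim_idx: may fail to link under C++14
        return use_by_ref(std::size_t{config::dim_idx});  // the temporary copy avoids the odr-use
    }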
2 changes: 1 addition & 1 deletion accessor/row_major.hpp
@@ -55,7 +55,7 @@ namespace acc {
* constructor parameters for this class to the range (it will forward it to
* this class).
*
- * @warning For backward compatability reasons, a specialization is provided
+ * @warning For backward compatibility reasons, a specialization is provided
* for dimensionality == 2.
*
* @tparam ValueType type of values this accessor returns
2 changes: 1 addition & 1 deletion accessor/utils.hpp
@@ -243,7 +243,7 @@ to_arithmetic_type(const Ref& ref)
* @internal
* Struct used for testing if an implicit cast is present. The constructor only
* takes an OutType, so any argument of a type that is not implicitly
- * convertable to OutType is incompatible.
+ * convertible to OutType is incompatible.
*/
template <typename OutType>
struct test_for_implicit_cast {
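A hedged sketch of how a detection struct like the one documented above can be used (illustrative names only, not the actual Ginkgo helpers): because the constructor accepts only an OutType, overload resolution succeeds exactly when the argument converts implicitly.

    #include <type_traits>
    #include <utility>

    template <typename OutType>
    struct test_for_implicit_cast {
        constexpr test_for_implicit_cast(OutType) {}
    };

    // Detection trait built on top of the struct (illustrative only).
    template <typename OutType, typename InType, typename = void>
    struct is_implicitly_convertible : std::false_type {};

    template <typename OutType, typename InType>
    struct is_implicitly_convertible<
        OutType, InType,
        std::void_t<decltype(test_for_implicit_cast<OutType>{std::declval<InType>()})>>
        : std::true_type {};

    static_assert(is_implicitly_convertible<double, int>::value, "int converts to double");
    static_assert(!is_implicitly_convertible<int*, double>::value, "double does not convert to int*");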
4 changes: 2 additions & 2 deletions benchmark/tools/mtx_to_binary.cpp
@@ -61,8 +61,8 @@ void process(const char* input, const char* output, bool validate)
}
}
if (validate) {
- std::ifstream ois(output, std::ios_base::in | std::ios_base::binary);
- auto data2 = gko::read_binary_raw<ValueType, gko::int64>(ois);
+ std::ifstream is(output, std::ios_base::in | std::ios_base::binary);
+ auto data2 = gko::read_binary_raw<ValueType, gko::int64>(is);
std::cerr << "Comparing against previously read data\n";
if (data.size != data2.size) {
throw GKO_STREAM_ERROR("Mismatching sizes!");
4 changes: 2 additions & 2 deletions benchmark/utils/formats.hpp
@@ -78,8 +78,8 @@ std::string format_description =
" Irregular Sparse Matrices.\n"
"csr: Compressed Sparse Row storage. Ginkgo implementation with\n"
" automatic strategy.\n"
"csrc: Ginkgo's CSR implementation with automatic stategy.\n"
"csri: Ginkgo's CSR implementation with inbalance strategy.\n"
"csrc: Ginkgo's CSR implementation with automatic strategy.\n"
"csri: Ginkgo's CSR implementation with imbalance strategy.\n"
"csrm: Ginkgo's CSR implementation with merge_path strategy.\n"
"csrs: Ginkgo's CSR implementation with sparselib strategy.\n"
"ell: Ellpack format according to Bell and Garland: Efficient Sparse\n"
2 changes: 1 addition & 1 deletion benchmark/utils/general.hpp
@@ -179,7 +179,7 @@ void initialize_argument_parsing(int* argc, char** argv[], std::string& header,
}

/**
- * Print general benchmark informations using the common available parameters
+ * Print general benchmark information using the common available parameters
*
* @param extra describes benchmark specific extra parameters to output
*/
2 changes: 1 addition & 1 deletion cmake/CTestScript.cmake
@@ -4,7 +4,7 @@
#
# Runs our tests through CTest, with support for Coverage or memory checking.
#
- # This script provides a full CTest run whith result submission to Ginkgo's
+ # This script provides a full CTest run with result submission to Ginkgo's
# CDash dashboard. The supported runs are:
# + With or without coverage, requires the gcov tool.
# + With or without address sanitizers.
2 changes: 1 addition & 1 deletion cmake/Modules/CudaArchitectureSelector.cmake
@@ -119,7 +119,7 @@
# identifiers in this list will be removed from the list specified by the
# ``ARCHITECTURES`` list. A warning will be printed for each removed entry.
# The list also supports aggregates ``All``, ``Auto`` and GPU generation names
- # wich have the same meaning as in the ``ARCHITECTURES'' specification list.
+ # which have the same meaning as in the ``ARCHITECTURES'' specification list.


if(NOT DEFINED CMAKE_CUDA_COMPILER)
2 changes: 1 addition & 1 deletion cmake/hip.cmake
@@ -197,7 +197,7 @@ if (GINKGO_HIP_PLATFORM MATCHES "${HIP_PLATFORM_NVIDIA_REGEX}")
# Remove false positive CUDA warnings when calling one<T>() and zero<T>()
list(APPEND GINKGO_HIP_NVCC_ADDITIONAL_FLAGS --expt-relaxed-constexpr --expt-extended-lambda)

- if (GINKGO_HIP_PLATFROM MATCHES "${HIP_PLATFORM_NVIDIA_REGEX}"
+ if (GINKGO_HIP_PLATFORM MATCHES "${HIP_PLATFORM_NVIDIA_REGEX}"
AND CMAKE_CUDA_COMPILER_VERSION MATCHES "9.2"
AND CMAKE_CUDA_HOST_COMPILER MATCHES ".*clang.*" )
ginkgo_extract_clang_version(${CMAKE_CUDA_HOST_COMPILER} GINKGO_CUDA_HOST_CLANG_VERSION)
2 changes: 1 addition & 1 deletion cmake/information_helpers.cmake
@@ -103,7 +103,7 @@ macro(ginkgo_interface_information)
get_target_property(GINKGO_INTERFACE_LINK_LIBRARIES ginkgo INTERFACE_LINK_LIBRARIES)
ginkgo_interface_libraries_recursively("${GINKGO_INTERFACE_LINK_LIBRARIES}")
# Format and store the interface libraries found
- # remove duplicates on the reversed list to keep the dependecy in the end of list.
+ # remove duplicates on the reversed list to keep the dependency in the end of list.
list(REVERSE GINKGO_INTERFACE_LIBS_FOUND)
list(REMOVE_DUPLICATES GINKGO_INTERFACE_LIBS_FOUND)
list(REVERSE GINKGO_INTERFACE_LIBS_FOUND)
2 changes: 1 addition & 1 deletion common/cuda_hip/base/executor.hpp.inc
@@ -40,7 +40,7 @@ inline int convert_sm_ver_to_cores(int major, int minor)
// Defines for GPU Architecture types (using the SM version to determine
// the # of cores per SM
typedef struct {
- int SM; // 0xMm (hexidecimal notation), M = SM Major version,
+ int SM; // 0xMm (hexadecimal notation), M = SM Major version,
// and m = SM minor version
int Cores;
} sSMtoCores;
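As a quick worked example of the 0xMm encoding described in the comment above (values chosen for illustration, not taken from the lookup table): compute capability 8.6 is stored as 0x86.

    int sm_encoded = (8 << 4) + 6;    // 0x86
    int sm_major = sm_encoded >> 4;   // 8
    int sm_minor = sm_encoded & 0xf;  // 6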
2 changes: 1 addition & 1 deletion common/cuda_hip/components/segment_scan.hpp.inc
@@ -33,7 +33,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/**
* @internal
*
- * Compute a segement scan using add operation (+) of a subwarp. Each segment
+ * Compute a segment scan using add operation (+) of a subwarp. Each segment
* performs suffix sum. Works on the source array and returns whether the thread
* is the first element of its segment with same `ind`.
*/
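A sequential sketch of the segmented suffix sum described above (illustrative only; it ignores the warp-cooperative execution and the returned head flags): every run of equal `ind` values forms one segment, and each element accumulates the sum of its segment suffix.

    #include <vector>

    void segment_suffix_sum(const std::vector<int>& ind, std::vector<double>& val)
    {
        for (int i = static_cast<int>(val.size()) - 2; i >= 0; --i) {
            if (ind[i] == ind[i + 1]) {
                val[i] += val[i + 1];
            }
        }
    }
    // ind = {0, 0, 1, 1, 2}, val = {1, 2, 3, 4, 5}  yields  val = {3, 2, 7, 4, 5}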
2 changes: 1 addition & 1 deletion common/cuda_hip/matrix/csr_kernels.hpp.inc
@@ -606,7 +606,7 @@ __global__ __launch_bounds__(default_block_size) void spgeam(
}
// advance by the number of merged elements
// in theory, we would need to mask by `valid`, but this
- // would only be false somwhere in the last iteration, where
+ // would only be false somewhere in the last iteration, where
// we don't need the value of c_begin afterwards, anyways.
c_begin += popcnt(~prev_equal_mask & lanemask_full);
return true;
4 changes: 2 additions & 2 deletions common/cuda_hip/multigrid/pgm_kernels.hpp.inc
@@ -51,9 +51,9 @@ void sort_row_major(std::shared_ptr<const DefaultExecutor> exec, size_type nnz,
using device_value_type = device_member_type<ValueType>;
auto vals_it = reinterpret_cast<device_value_type*>(vals);
auto it = thrust::make_zip_iterator(thrust::make_tuple(row_idxs, col_idxs));
- // Because reduce_by_key is not determinstic, so we do not need
+ // Because reduce_by_key is not deterministic, so we do not need
// stable_sort_by_key
- // TODO: If we have determinstic reduce_by_key, it should be
+ // TODO: If we have deterministic reduce_by_key, it should be
// stable_sort_by_key
thrust::sort_by_key(thrust_policy(exec), it, it + nnz, vals_it);
}
6 changes: 3 additions & 3 deletions common/unified/multigrid/pgm_kernels.cpp
@@ -135,7 +135,7 @@ void map_row(std::shared_ptr<const DefaultExecutor> exec,
exec,
[] GKO_KERNEL(auto tidx, auto fine_row_ptrs, auto agg, auto row_idxs) {
const auto coarse_row = agg[tidx];
- // TODO: when it is neccessary, it can use warp per row to improve.
+ // TODO: when it is necessary, it can use warp per row to improve.
for (auto i = fine_row_ptrs[tidx]; i < fine_row_ptrs[tidx + 1];
i++) {
row_idxs[i] = coarse_row;
@@ -232,7 +232,7 @@ void find_strongest_neighbor(
// all neighbor is agg, connect to the strongest agg
// Also, no others will use this item as their
// strongest_neighbor because they are already aggregated. Thus,
- // it is determinstic behavior
+ // it is deterministic behavior
agg[row] = agg[strongest_agg];
} else if (strongest_unagg != -1) {
// set the strongest neighbor in the unagg group
@@ -260,7 +260,7 @@ void assign_to_exist_agg(std::shared_ptr<const DefaultExecutor> exec,
{
const auto num = agg.get_num_elems();
if (intermediate_agg.get_num_elems() > 0) {
- // determinstic kernel
+ // deterministic kernel
run_kernel(
exec,
[] GKO_KERNEL(auto row, auto row_ptrs, auto col_idxs,
4 changes: 2 additions & 2 deletions core/base/dispatch_helper.hpp
@@ -63,7 +63,7 @@ void run(T, Func, Args...)
* run uses template to go through the list and select the valid
* template and run it.
*
- * @tparam K the current type tried in the convertion
+ * @tparam K the current type tried in the conversion
* @tparam ...Types the other types will be tried in the conversion if K fails
* @tparam T the type of input object
* @tparam Func the function will run if the object can be converted to K
@@ -108,7 +108,7 @@ void run(T, Func, Args...)
*
* @tparam Base the Base class with one template
* @tparam K the current template type of B. pointer of const Base<K> is tried
- * in the convertion.
+ * in the conversion.
* @tparam ...Types the other types will be tried in the conversion if K fails
* @tparam T the type of input object waiting converted
* @tparam Func the function will run if the object can be converted to pointer
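A simplified sketch of the dispatch pattern the documentation above describes (hypothetical signature, not the actual Ginkgo overloads): each candidate type K is tried in turn via dynamic_cast, and the functor runs on the first conversion that succeeds.

    #include <stdexcept>

    // Assumes T is a polymorphic base class of the candidate types.
    template <typename K, typename... Types, typename T, typename Func>
    void run_example(T* obj, Func func)
    {
        if (auto* concrete = dynamic_cast<K*>(obj)) {
            func(concrete);
        } else if constexpr (sizeof...(Types) > 0) {
            run_example<Types...>(obj, func);  // try the remaining types
        } else {
            throw std::runtime_error("object matches none of the listed types");
        }
    }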
2 changes: 1 addition & 1 deletion core/base/iterator_factory.hpp
@@ -84,7 +84,7 @@ class zip_iterator_reference
template <std::size_t... idxs>
value_type cast_impl(std::index_sequence<idxs...>) const
{
- // gcc 5 throws error as using unintialized array
+ // gcc 5 throws error as using uninitialized array
// std::tuple<int, char> t = { 1, '2' }; is not allowed.
// converting to 'std::tuple<...>' from initializer list would use
// explicit constructor
8 changes: 4 additions & 4 deletions core/base/mtx_io.cpp
@@ -267,7 +267,7 @@ class mtx_io {

/**
* storage modifier hierarchy provides algorithms for handling storage
- * modifiers (general, symetric, skew symetric, hermitian) and filling the
+ * modifiers (general, symmetric, skew symmetric, hermitian) and filling the
* entire matrix from the stored parts
*/
struct storage_modifier {
@@ -491,7 +491,7 @@ class mtx_io {
* @param os The output stream to write to
* @param data The matrix data to write
* @param entry_writer The entry format to write in.
- * @param modifier The strorage modifer
+ * @param modifier The storage modifier
*/
virtual void write_data(std::ostream& os,
const matrix_data<ValueType, IndexType>& data,
@@ -554,7 +554,7 @@ class mtx_io {
* @param os The output stream to write to
* @param data The matrix data to write
* @param entry_writer The entry format to write in.
- * @param modifier The strorage modifer
+ * @param modifier The storage modifier
*/
void write_data(std::ostream& os,
const matrix_data<ValueType, IndexType>& data,
@@ -623,7 +623,7 @@ class mtx_io {
* @param os The output stream to write to
* @param data The matrix data to write
* @param entry_writer The entry format to write in.
- * @param modifier The strorage modifer
+ * @param modifier The storage modifier
*/
void write_data(std::ostream& os,
const matrix_data<ValueType, IndexType>& data,
2 changes: 1 addition & 1 deletion core/base/types.hpp
@@ -109,7 +109,7 @@ constexpr std::enable_if_t<(num_groups > current_shift + 1), int> shift(
*
* The usage will be the following
* Set the method with bits Cfg = ConfigSet<b_0, b_1, ..., b_k>
- * Encode the given infomation encoded = Cfg::encode(x_0, x_1, ..., x_k)
+ * Encode the given information encoded = Cfg::encode(x_0, x_1, ..., x_k)
* Decode the specific position information x_t = Cfg::decode<t>(encoded)
* The encoded result will use 32 bits to record
* rrrrr0..01....1...k..k, which 1/2/.../k means the bits store the information
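A small usage sketch following the encode/decode pattern spelled out in the comment above (field widths and values chosen for illustration; namespace qualification omitted):

    using Cfg = ConfigSet<3, 4, 5>;               // three fields of 3, 4 and 5 bits
    const auto encoded = Cfg::encode(2, 9, 17);   // 2 < 2^3, 9 < 2^4, 17 < 2^5
    const auto middle = Cfg::decode<1>(encoded);  // recovers 9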
2 changes: 1 addition & 1 deletion core/solver/gcr.cpp
@@ -186,7 +186,7 @@ void Gcr<ValueType>::apply_dense_impl(const VectorType* dense_b,
size_type restart_iter = 0;

/* Memory movement summary for average iteration with krylov_dim d:
- * (4d+22+4/d)n+(d+1+1/d) * values + matrix/preconditioner stroage
+ * (4d+22+4/d)n+(d+1+1/d) * values + matrix/preconditioner storage
* 1x SpMV: 2n * values + storage
* 1x Preconditioner: 2n * values + storage
* 1x step 1 (scal, axpys) 6n
