This repository has been archived by the owner on Feb 10, 2024. It is now read-only.

Commit

3.2 release
gramian committed May 5, 2021
1 parent 09759b6 commit 794e833
Showing 7 changed files with 106 additions and 22 deletions.
4 changes: 2 additions & 2 deletions CODE
@@ -1,8 +1,8 @@
# code.ini
name: Hierarchical Approximate Proper Orthogonal Decomposition
shortname: hapod
version: 3.1
release-date: 2020-10-01
version: 3.2
release-date: 2021-05-05
author: Christian Himpe, Stephan Rave
orcid: 0000-0003-2194-6754, 0000-0003-0439-7212
topic: Science, Mathematics, Dimension Reduction
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
BSD 2-Clause License

Copyright (c) 2017--2020, Christian Himpe & Stephan Rave
Copyright (c) 2017--2021, Christian Himpe & Stephan Rave
All rights reserved.

Redistribution and use in source and binary forms, with or without
65 changes: 65 additions & 0 deletions MAPRED.m
@@ -0,0 +1,65 @@
function MAPRED()
%%% project: hapod - Hierarchical Approximate POD ( https://git.io/hapod )
%%% version: 3.2 (2021-05-05)
%%% authors: C. Himpe (0000-0003-2194-6754), S. Rave (0000-0003-0439-7212)
%%% license: BSD 2-Clause License (opensource.org/licenses/BSD-2-Clause)
%%% summary: Basic test of (parallel) distributed HAPOD via MapReduce

%% Generate Test Data

randn('seed',1009);
n = 32;
N = n*n;
[a,~,c] = svd(randn(N,N));
b = logspace(0,-16,N)';
S = a*diag(b)*c';

E = sqrt(eps);
w = 0.5;

%% MapReduce Setup

% Define datastore
ds = arrayDatastore(S,'IterationDimension',2,'ReadSize',n);

% Define mapper
function hapod_mapper(data,info,intermKVStore)

[u,~,c] = hapod(data',E,'dist_1',w);
add(intermKVStore,'leaf',{u,c});
end

% Define reducer
function hapod_reducer(intermKey,intermValIter,outKVStore)

u = {};
c = {};

while hasnext(intermValIter)

value = getnext(intermValIter);
u{end+1} = value{1};
c{end+1} = value{2};
end%while

[U,D,C] = hapod(u,E,'dist_r',w,c);
addmulti(outKVStore,{'root_singvec','root_singval','root_info'},{U,D,C});
end

% Apply map and reduce
mr = readall(mapreduce(ds,@hapod_mapper,@hapod_reducer));

singvals = mr.Value{find(strcmp(mr.Key,'root_singval'))};

%% Plot Results

figure;
semilogy(1:numel(singvals),b(1:numel(singvals)),'LineWidth',2);
hold on;
semilogy(1:numel(singvals),singvals,'LineWidth',2,'LineStyle','--');
hold off;
xlim([1,numel(singvals)]);
ylabel('Singular Values');
legend('Exact','Distributed HAPOD','Location','SouthOutside');
end

45 changes: 32 additions & 13 deletions README.md
@@ -2,20 +2,28 @@ HAPOD - Hierarchical Approximate Proper Orthogonal Decomposition
================================================================

* HAPOD - Hierarchical Approximate POD
* version: 3.1 (2020-10-01)
* version: 3.2 (2021-05-05)
* by: C. Himpe (0000-0003-2194-6754), S. Rave (0000-0003-0439-7212)
* under: BSD 2-Clause License (opensource.org/licenses/BSD-2-Clause)
* summary: Fast distributed or incremental POD computation.

## About

The HAPOD is an algorithm that computes the POD (the left singular vectors and
singular values of a matrix) hierarchically for (column-wise partitioned)
large-scale matrices, allowing accuracy to be traded off against performance.
As a POD-of-PODs method, the HAPOD can be parallelized and further accelerated
by user-supplied SVD implementations.
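
For orientation (this snippet is a sketch, not part of this commit), a standard
POD corresponds to a single `hapod` call with the `none` topology; the example
matrix and tolerance below are illustrative, and `hapod.m` is assumed to be on
the MATLAB path:

```
% Sketch: plain POD of a single snapshot block via the 'none' topology.
S = randn(1000,100);                  % illustrative snapshot matrix
[svec,sval] = hapod({S},1e-8,'none'); % data is a cell of column blocks
```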

## Scope

* Proper Orthogonal Decomposition (POD)
* Singular Value Decomposition (SVD)
* Principal Axis Transformation (PAT)
* Principal Component Analysis (PCA)
* Empirical Orthogonal Functions (EOF)
* Karhunen-Loeve Transformation (KLT)
* Empirical Eigenfunctions (EEF)
* Karhunen-Loeve Transformation (KLT)

## Applications

@@ -30,7 +38,7 @@ HAPOD - Hierarchical Approximate Proper Orthogonal Decomposition
* Error-driven
* Rigorous bounds
* Single pass (each data vector is needed only once)
* Column-wise data partition
* Column-wise data partitions (inducing parallelizability)
* Custom SVD backends

## Functionality
@@ -62,7 +70,7 @@ SIAM Journal on Scientific Computing, 40(5): A3267--A3292, 2018.
* `data` {cell} - snapshot data set, partitioned by column (blocks)
* `bound` {scalar} - mean L_2 projection error bound
* `topo` {string} - tree topology (see **Topology**)
* `relax` {scalar} - relaxation parameter in (0,1] (see **Relaxation**)
* `relax` {scalar} - relaxation parameter in (0,1) (see **Relaxation**)
* `depth` {scalar} - total number of levels in tree (only required for `incr_1`)
* `meta` {struct} - meta information structure (see **Meta-Information**)
* `mysvd` {handle} - custom SVD backend (see **Custom SVD**)
@@ -88,9 +96,9 @@ at the tree's leaves. The following topologies are available:

If all data partitions can be passed as the data argument, the types: `none`
(standard POD), `incr`(emental) HAPOD or `dist`(ributed) HAPOD are applicable.
In case only a single partition is passed, the types: `incr_1` and `dist_1`
should be used for the child nodes of the associated HAPOD tree, while the
types: `incr_r` and `dist_r` should be used for the root nodes. The returned
In case only a single partition is passed at a time, the types: `incr_1` and
`dist_1` should be used for the child nodes of the associated HAPOD tree, while
the types: `incr_r` and `dist_r` should be used for the root nodes. The returned
meta-information structure (or a cell-array thereof) has to be passed to the
parent node in the associated HAPOD tree.
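
Mirroring the usage in `MAPRED.m`, the leaf/root interplay of the distributed
HAPOD can be sketched as follows (block sizes, tolerance, and relaxation are
illustrative; the leaf calls may run in parallel):

```
% Sketch: distributed HAPOD; one 'dist_1' call per block, 'dist_r' at the root.
blocks = {randn(1000,50),randn(1000,50),randn(1000,50)};
u = cell(size(blocks));                % leaf modes
c = cell(size(blocks));                % leaf meta-information
for k = 1:numel(blocks)                % leaves (parallelizable)
    [u{k},~,c{k}] = hapod(blocks(k),1e-8,'dist_1',0.5);
end
[U,D] = hapod(u,1e-8,'dist_r',0.5,c);  % root combines the leaf modes
```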

@@ -105,14 +113,14 @@ computation. The default value is `w = 0.5`.
The `meta` structure contains the following meta-information of the completed
sub-tree:

* `nSnapshots` - Number of data columns passed to this hapod and its children.
* `nModes` - Number of intermediate modes
* `tNode` - Computational time at this hapod's branch
* `nSnapshots` - Number of data columns passed to this HAPOD and its children.
* `nModes` - Number of intermediate modes.
* `tNode` - Computational time at this HAPOD's branch.

The argument `meta` only needs to be passed for topology argument `incr_r`,
`dist_r` and `incr_1` unless it is first leaf. This means especially the user
The argument `meta` only needs to be passed for topology types `incr_r`,
`dist_r` and `incr_1`, unless it is the first leaf. In particular, this means the user
never has to create such a structure, since if it is required it is given as a
previous return value.
previous HAPOD's return value.

## Custom SVD

@@ -139,6 +147,17 @@ RUNME()
which demonstrates the different implemented HAPOD variants and can be used
as a template.

### MapReduce

The distributed HAPOD is well suited for the [MapReduce](https://en.wikipedia.org/wiki/MapReduce)
big data processing model. A basic MapReduce wrapper using the MATLAB (>=2020b)
[mapreduce](https://www.mathworks.com/help/matlab/ref/mapreduce.html) function
is provided by:

```
MAPRED()
```

## Cite As

C. Himpe, T. Leibner and S. Rave:
4 changes: 2 additions & 2 deletions RUNME.m
@@ -1,11 +1,11 @@
function RUNME()
%%% project: hapod - Hierarchical Approximate POD ( https://git.io/hapod )
%%% version: 3.1 (2020-10-01)
%%% version: 3.2 (2021-05-05)
%%% authors: C. Himpe (0000-0003-2194-6754), S. Rave (0000-0003-0439-7212)
%%% license: BSD 2-Clause License (opensource.org/licenses/BSD-2-Clause)
%%% summary: Basic tests for incremental HAPOD and distributed HAPOD

%% Generate test data
%% Generate Test Data

randn('seed',1009); % seed random number generator
n = 32; % set number of partitions
2 changes: 1 addition & 1 deletion VERSION
@@ -1,2 +1,2 @@
3.1
3.2

6 changes: 3 additions & 3 deletions hapod.m
@@ -1,6 +1,6 @@
function [svec,sval,meta] = hapod(data,bound,topo,relax,meta,depth,mysvd)
%%% project: hapod - Hierarchical Approximate POD ( https://git.io/hapod )
%%% version: 3.1 (2020-10-01)
%%% version: 3.2 (2021-05-05)
%%% authors: C. Himpe (0000-0003-2194-6754), S. Rave (0000-0003-0439-7212)
%%% license: BSD 2-Clause License (opensource.org/licenses/BSD-2-Clause)
%%% summary: Fast distributed or incremental POD computation.
@@ -58,7 +58,7 @@
% passed to the parent nodes in the associated HAPOD tree.
%
% CITE AS:
% C. Himpe, T. Leibner and S. Rave.
% C. Himpe, T. Leibner, S. Rave:
% "Hierarchical Approximate Proper Orthogonal Decomposition".
% SIAM Journal on Scientific Computing, 40(5): A3267--A3292, 2018.
%
@@ -72,7 +72,7 @@
%
% Further information: https://git.io/hapod

if strcmp(data,'version'), svec = 3.1; return; end%if
if strcmp(data,'version'), svec = 3.2; return; end%if

% Default arguments
if nargin<3 || isempty(topo), topo = 'none'; end%if
