Adds documentation and an example for flink HA #196

Closed
wants to merge 1 commit into from
33 changes: 30 additions & 3 deletions flink/1.8/README.md
@@ -17,6 +17,9 @@

- A running DC/OS 1.8 cluster with at least 1 agent with 2 CPUs and 2 GB of RAM available.
- [DC/OS CLI](https://dcos.io/docs/1.8/usage/cli/install/) installed.
- HDFS installed, since High Availability (HA) is enabled by default; a sketch of installing it follows the image below. See the dedicated HA section later in this README, as you also need to create an HDFS path for the Flink recovery metadata. To start without HA, disable it by un-checking the `ENABLED` option:

![HA enabled](img/ha_enabled.png)
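
If HDFS is not installed yet, one way to add it is through the DC/OS package manager. This is only a hedged sketch with default options, not a full HDFS installation guide:

```bash
# Hedged sketch: install the DC/OS HDFS service with default options.
# The package name "hdfs" is the catalog default; adjust it if your cluster uses a different name.
dcos package install hdfs

# The service takes a few minutes to deploy; make sure it is healthy
# before installing Flink with HA enabled.
```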

## Install Flink

@@ -76,6 +79,33 @@ core@ip-10-0-6-55 ~ $ docker run -it mesosphere/dcos-flink:1.2.0-1.2 /bin/bash

root@2a9c01d3594e:/flink-1.2.0# ./bin/flink run -m <jobmanagerhost>:<jobmanager.rpc.port> ./examples/batch/WordCount.jar --input file:///etc/resolv.conf --output file:///etc/wordcount_out
```
### Running with HA

Review comment: There are a few spelling mistakes in the draft below. For clarity, would you mind stating that what follows is the client configuration?


Several settings affect HA mode; you can review them in the relevant configuration section. Default values are provided so that the service can launch automatically.
Before launching the Flink service you need to create a directory on HDFS whose root path
matches the one specified in the `STORAGE-DIR` property and whose leaf folder is the service name (default: `hdfs://hdfs/flink/recovery/flink`; the default service name is `flink`), as sketched below.
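
A minimal sketch of creating that directory, assuming the defaults above and a shell where the Hadoop `hdfs` CLI is available and configured against the DC/OS HDFS service:

```bash
# Create the recovery directory for the default service name "flink".
# Assumes the client can resolve the hdfs://hdfs nameservice (see the Hadoop config files below).
hdfs dfs -mkdir -p hdfs://hdfs/flink/recovery/flink

# Confirm the directory exists.
hdfs dfs -ls hdfs://hdfs/flink/recovery
```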

The following steps set up the client configuration. Log in to a container as described previously.

Download the Hadoop config files needed to access HDFS in DC/OS:

    root@ddfbaadb2094:/flink-1.3.1# wget http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml

    root@ddfbaadb2094:/flink-1.3.1# wget http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/core-site.xml

Add the following options to the `conf/flink-conf.yaml` file:

    high-availability: zookeeper
    high-availability.zookeeper.quorum: master.mesos:2181
    high-availability.zookeeper.storageDir: hdfs://hdfs/flink/recovery/flink
    high-availability.zookeeper.path.root: /dcos-service-flink/flink
    fs.hdfs.hadoopconf: /flink-1.3.1

Note: the above high-availability options are also set in the Flink service configuration in the DC/OS UI, and the two must match; one way to keep them in sync is sketched below. The above values are the defaults.
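
One way to keep the client configuration and the service configuration in sync is to install the service from an options file rather than editing settings in the UI. This is only a sketch: the option keys below are assumptions, so check the actual schema with `dcos package describe flink --config` before relying on them.

```bash
# Sketch only: the option key names are assumed; verify them against the package schema first.
cat > flink-options.json <<'EOF'
{
  "flink": {
    "high-availability": "zookeeper",
    "high-availability.zookeeper.quorum": "master.mesos:2181",
    "high-availability.zookeeper.storageDir": "hdfs://hdfs/flink/recovery/flink",
    "high-availability.zookeeper.path.root": "/dcos-service-flink/flink"
  }
}
EOF

dcos package install flink --options=flink-options.json
```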

Run a job:

    root@ddfbaadb2094:/flink-1.3.1# ./bin/flink run -z /default-flink ./examples/batch/WordCount.jar --input file:///etc/resolv.conf --output file:///etc/wordcount_out

### DC/OS Flink CLI
Coming soon.
@@ -88,6 +118,3 @@ To uninstall Flink:
```bash
$ dcos package uninstall flink
```



Binary file added flink/1.8/img/ha_enabled.png
Binary file modified flink/1.8/img/scala2_11.png
31 changes: 31 additions & 0 deletions flink/1.9/README.md
@@ -17,6 +17,9 @@

- A running DC/OS 1.9 cluster with at least 1 agent with 2 CPUs and 2 GB of RAM available.
- [DC/OS CLI](https://dcos.io/docs/1.9/usage/cli/install/) installed.
- HDFS installed, since High Availability (HA) is enabled by default; a sketch of installing it follows the image below. See the dedicated HA section later in this README, as you also need to create an HDFS path for the Flink recovery metadata. To start without HA, disable it by un-checking the `ENABLED` option:

![HA enabled](img/ha_enabled.png)
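
As in the 1.8 README, a hedged sketch of installing HDFS through the DC/OS package manager with default options, in case it is not installed yet:

```bash
# Install the DC/OS HDFS service and wait for it to become healthy
# before installing Flink with HA enabled.
dcos package install hdfs
```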

## Install Flink

@@ -79,6 +82,34 @@ core@ip-10-0-6-55 ~ $ docker run -it mesosphere/dcos-flink:1.3.1-1.0 /bin/bash
root@178cdd4e4f70:/flink-1.3-SNAPSHOT# cd /flink-1.3.1/ && ./bin/flink run -m <jobmanagerhost>:<jobmanager.rpc.port> ./examples/batch/WordCount.jar --input file:///etc/resolv.conf --output file:///etc/wordcount_out
```

### Running with HA

Several settings affect HA mode; you can review them in the relevant configuration section. Default values are provided so that the service can launch automatically.
Before launching the Flink service you need to create a directory on HDFS whose root path
matches the one specified in the `STORAGE-DIR` property and whose leaf folder is the service name (default: `hdfs://hdfs/flink/recovery/flink`; the default service name is `flink`), as sketched below.
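
As in the 1.8 README, a minimal sketch of creating that directory with the Hadoop `hdfs` CLI, assuming the defaults above:

```bash
# Assumes a client configured against the DC/OS HDFS service.
hdfs dfs -mkdir -p hdfs://hdfs/flink/recovery/flink
```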

The following steps set up the client configuration. Log in to a container as described previously.

Download the Hadoop config files needed to access HDFS in DC/OS:

    root@ddfbaadb2094:/flink-1.3.1# wget http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/hdfs-site.xml

    root@ddfbaadb2094:/flink-1.3.1# wget http://api.hdfs.marathon.l4lb.thisdcos.directory/v1/endpoints/core-site.xml

Add the following options to the `conf/flink-conf.yaml` file:

    high-availability: zookeeper
    high-availability.zookeeper.quorum: master.mesos:2181
    high-availability.zookeeper.storageDir: hdfs://hdfs/flink/recovery/flink
    high-availability.zookeeper.path.root: /dcos-service-flink/flink
    fs.hdfs.hadoopconf: /flink-1.3.1

Note: the above high-availability options are also set in the Flink service configuration in the DC/OS UI, and the two must match; one way to keep them in sync is sketched below. The above values are the defaults.
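
As in the 1.8 README, one way to keep both sides in sync is to install the service from an options file rather than the UI; the option keys are package-specific, so this is only a sketch and the options file name is hypothetical:

```bash
# Inspect the Flink package configuration schema to find the HA option keys.
dcos package describe flink --config

# Install using an options file that mirrors the flink-conf.yaml values above
# (flink-options.json is a hypothetical file built from that schema).
dcos package install flink --options=flink-options.json
```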

Run a job:

    root@ddfbaadb2094:/flink-1.3.1# ./bin/flink run -z /default-flink ./examples/batch/WordCount.jar --input file:///etc/resolv.conf --output file:///etc/wordcount_out

### DC/OS Flink CLI
Coming soon.

Binary file added flink/1.9/img/ha_enabled.png
Binary file modified flink/1.9/img/scala2_11.png