
ohs_24.4.1_minorDocFixes (#229)
manjunathdhegde-2910 authored Nov 26, 2024
1 parent 7ca9213 commit 2fb80ae
Showing 8 changed files with 194 additions and 197 deletions.
90 changes: 45 additions & 45 deletions docs-source/content/ohs/create-ohs-container/_index.md
Original file line number Diff line number Diff line change
Expand Up @@ -30,9 +30,9 @@ The nodeport is the entry point for OHS. For example `http://ohs.example.com:317
1. Edit the `$MYOHSFILES/ohs_service.yaml` and make the following changes:

+ `<NAMESPACE>` to your namespace, for example `ohsns`.
+ If you want your OHS node port to listen on something other than 31777 and 31443, change the values accordingly
+ If you are using your own `httpd.conf` file and have changed the port to anything other than `7777`, you must change the `targetPort` and `port` to match.
+ If you are using your own `ssl.conf` file and have changed the port to anything other than `4443`, you must change the `targetPort` and `port` to match.


```
Expand Down Expand Up @@ -63,33 +63,33 @@ The nodeport is the entry point for OHS. For example `http://ohs.example.com:317

**Note**: Administrators should be aware of the following:

+ As this is a Kubernetes service the port is accessible on all the worker nodes in the cluster.
+ If you create another OHS container on a different port, you will need to create another nodeport service for that OHS.


```
$ kubectl create -f $MYOHSFILES/ohs_service.yaml
```

The output will look similar to the following:

```
service/ohs-domain-nodeport created
```
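As noted above, each additional OHS container on a different port needs its own nodeport service. A minimal sketch of what a second service might look like is shown below; all names, labels, and port numbers here are illustrative assumptions, not values from this guide:

```
# Hypothetical second NodePort service for an OHS container listening on 7778.
# Adjust name, namespace, selector, and ports to match your own deployment.
apiVersion: v1
kind: Service
metadata:
  name: ohs-domain-nodeport-2
  namespace: ohsns
spec:
  type: NodePort
  selector:
    app: ohs-domain-2
  ports:
    - name: http
      port: 7778
      targetPort: 7778
      nodePort: 31778
```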

1. Validate the service has been created using the command:

```
$ kubectl get service -n <namespace>
```

For example:

```
$ kubectl get service -n ohsns
```

The output will look similar to the following:

```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
Expand Down Expand Up @@ -117,50 +117,50 @@ In this section you create the OHS container using the `ohs.yaml` file created i
deployment.apps/ohs-domain created
```

Run the following command to view the status of the pods:

```bash
$ kubectl get pods -n <namespace> -w
```

For example:


```bash
$ kubectl get pods -n ohsns -w
```

While the OHS container is creating, you may see:

```
NAME READY STATUS RESTARTS AGE
ohs-domain-d5b648bc5-vkp4s 0/1 ContainerCreating 0 2m13s
```

To check what is happening while the pod is in `ContainerCreating` status, you can run:

```
$ kubectl describe pod <podname> -n <namespace>
```

For example:

```
$ kubectl describe pod ohs-domain-d5b648bc5-vkp4s -n ohsns
```

Once the container is created, it will go to a `READY` status of `0/1` with `STATUS` of `Running`. For example:

```
NAME READY STATUS RESTARTS AGE
ohs-domain-d5b648bc5-vkp4s 0/1 Running 0 3m10s
```

To check what is happening while the pod is in this status, you can run:

```
$ kubectl logs -f <pod> -n <namespace>
```
Once everything is started, you should see that OHS is running (`READY 1/1`):

Expand All @@ -169,7 +169,7 @@ In this section you create the OHS container using the `ohs.yaml` file created i
ohs-domain-d5b648bc5-vkp4s 1/1 Running 0 4m10s
```
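The readiness check above can also be scripted. The sketch below counts pods reporting a `Running` status; it uses captured sample output so it runs anywhere, but in practice you would pipe the real `kubectl get pods -n <namespace>` output instead (the pod name shown is illustrative):

```bash
# Count pods with STATUS=Running from `kubectl get pods` style output.
# Sample output is inlined here; against a live cluster you would run:
#   kubectl get pods -n ohsns | awk 'NR>1 && $3=="Running"' | wc -l
sample='NAME                         READY   STATUS    RESTARTS   AGE
ohs-domain-d5b648bc5-vkp4s   1/1     Running   0          4m10s'
# NR>1 skips the header row; $3 is the STATUS column.
printf '%s\n' "$sample" | awk 'NR>1 && $3=="Running"' | wc -l
```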

If there are any failures, follow [Troubleshooting](../troubleshooting).



Expand All @@ -186,49 +186,49 @@ To validate the OHS container file system:
1. Run the following command to get the name of the OHS container:


```bash
$ kubectl get pods -n <namespace>
```

For example:


```bash
$ kubectl get pods -n ohsns
```

The output will look similar to the following:

```
NAME READY STATUS RESTARTS AGE
ohs-domain-d5b648bc5-vkp4s 1/1 Running 0 5m34s
```

1. Run the following command to create a bash shell inside the container:

```
$ kubectl exec -n <namespace> -ti <pod> -- /bin/bash
```

For example:

```
$ kubectl exec -n ohsns -ti ohs-domain-79f8f99575-8qwfh -- /bin/bash
```

This will take you to a bash shell inside the container:

```
[oracle@ohs-domain-75fbd9b597-z77d8 oracle]$
```

1. Inside the bash shell navigate to the `/u01/oracle/user_projects/domains/ohsDomain/config/fmwconfig/components/OHS/ohs1/` directory:

```
cd /u01/oracle/user_projects/domains/ohsDomain/config/fmwconfig/components/OHS/ohs1/
```

From within this directory, you can navigate around and list (`ls`) or `cat` any files you configured using the configmaps.
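For a quick check of a single file, an interactive shell is not strictly needed; `kubectl exec` can `cat` a file directly. The sketch below only assembles the command string (the pod name is a placeholder), so it is safe to run without a cluster:

```bash
# Build a one-shot command to print httpd.conf from inside the OHS container.
# Namespace and pod name are placeholders; substitute your own values.
ns=ohsns
pod=ohs-domain-d5b648bc5-vkp4s
cfg=/u01/oracle/user_projects/domains/ohsDomain/config/fmwconfig/components/OHS/ohs1/httpd.conf
cmd="kubectl exec -n $ns $pod -- cat $cfg"
echo "$cmd"
```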



Expand Down
63 changes: 31 additions & 32 deletions docs-source/content/ohs/create-or-update-image/_index.md
Original file line number Diff line number Diff line change
Expand Up @@ -69,33 +69,33 @@ To set up the WebLogic Image Tool:

1. Execute the following commands to set up the WebLogic Image Tool:

```bash
$ cd <workdir>/imagetool-setup/imagetool/bin
$ source setup.sh
```

For example:

```bash
$ cd /scratch/imagetool-setup/imagetool/bin
$ source setup.sh
```

##### Validate setup
To validate the setup of the WebLogic Image Tool:

1. Enter the following command to retrieve the version of the WebLogic Image Tool:

``` bash
$ imagetool --version
```

1. Enter `imagetool` then press the Tab key to display the available `imagetool` commands:

``` bash
$ imagetool <TAB>
cache create help rebase update
```

##### WebLogic Image Tool build directory

Expand Down Expand Up @@ -155,14 +155,13 @@ You must download the required Oracle HTTP Server installation binaries and patc
The installation binaries and patches required are:

* Oracle Web Tier 12.2.1.4.0
* V983369-01.zip

* Oracle JDK v8
* jdk-8uXXX-linux-x64.tar.gz

* Oracle Database 19c Upgrade for FMW 12.2.1.4.0 (OID/OHS/OTD homes only)

* Patch 34761383 DB Client 19c Upgrade for FMW 12.2.1.4.0 (OID/OHS/OTD homes only)

##### Update required build files

Expand Down Expand Up @@ -190,17 +189,17 @@ The following files are used for creating the image:
COPY --chown=oracle:root files/create-sa-ohs-domain.py files/configureWLSProxyPlugin.sh files/mod_wl_ohs.conf.sample files/provisionOHS.sh files/start-ohs.py files/stop-ohs.py files/helloWorld.html /u01/oracle/
WORKDIR ${ORACLE_HOME}
CMD ["/u01/oracle/provisionOHS.sh"]
```

**Note:** `oracle:root` is used for OpenShift which has more stringent policies. Users who do not want those permissions can change to the permissions they require.


1. Create the `<workdir>/imagetool-setup/docker-images/OracleHTTPServer/buildArgs` file as follows and change the following:

+ `<workdir>` to your working directory, for example `/scratch/`
+ `%BUILDTAG%` to the tag you want to create for the image, for example `oracle/ohs:12.2.1.4-db19`
+ `%JDK_VERSION%` to the version of your JDK, for example `8uXXX`
+ `<user>` to your [My Oracle Support](https://support.oracle.com) username

```
create
Expand Down Expand Up @@ -235,7 +234,7 @@ The following files are used for creating the image:
```


Refer to [this page](https://oracle.github.io/weblogic-image-tool/userguide/tools/create-image/) for the complete list of options available with the WebLogic Image Tool `create` command.

##### Create the image

Expand Down Expand Up @@ -284,14 +283,14 @@ The following files are used for creating the image:
1. If you want to see what patches were installed, you can run:

```
$ imagetool inspect --image=<REPOSITORY>:<TAG> --patches
```

For example:

```
$ imagetool inspect --image=oracle/ohs:12.2.1.4-db19 --patches
```

1. Run the following command to save the container image to a tar file:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -28,14 +28,14 @@ The following commands show how to remove the OHS container, OHS nodeport servic
$ kubectl delete cm -n ohsns webgate-config
$ kubectl delete cm -n ohsns webgate-wallet
$ kubectl delete cm -n ohsns ohs-wallet
```

1. Run the following command to delete the secrets:

```
$ kubectl delete secret regcred -n ohsns
$ kubectl delete secret ohs-secret -n ohsns
```

1. Run the following command to delete the namespace:

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -45,7 +45,7 @@ The number of OHS Servers running is dependent on the `replicas` parameter confi
$ kubectl -n <namespace> patch deployment ohs-domain -p '{"spec": {"replicas": <replica count>}}'
```

where `<replica count>` is the number of OHS servers to start.

In the example below, two additional OHS servers are started:

Expand All @@ -66,7 +66,7 @@ The number of OHS Servers running is dependent on the `replicas` parameter confi
$ kubectl get pods -n <namespace> -w
```

For example:

```bash
$ kubectl get pods -n ohsns -w
Expand All @@ -81,7 +81,7 @@ The number of OHS Servers running is dependent on the `replicas` parameter confi
ohs-domain-d5b648bc5-vkp4s 1/1 Running 0 5h21m
```

Two new OHS pods have now been created, in this example `ohs-domain-d5b648bc5-2q8bw` and `ohs-domain-d5b648bc5-qvdjn`.
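The `kubectl patch` payload used above is plain JSON, so it can be generated from a variable when scripting scale operations. A minimal sketch, with an arbitrarily chosen replica count (the namespace is illustrative):

```bash
# Build the JSON patch for a given replica count, then show the full command.
replicas=3
patch=$(printf '{"spec": {"replicas": %d}}' "$replicas")
echo "kubectl -n ohsns patch deployment ohs-domain -p '$patch'"
```

Note that `kubectl scale deployment ohs-domain -n <namespace> --replicas=3` achieves the same result in a single command.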

1. To check what is happening while the pods are in `ContainerCreating` status, you can run:

Expand All @@ -91,9 +91,9 @@ The number of OHS Servers running is dependent on the `replicas` parameter confi

1. To check what is happening while the pods are in `0/1 Running` status, you can run:

```
$ kubectl logs -f <pod> -n <namespace>
```

1. Once everything is started you should see all the additional OHS containers are running (`READY 1/1`):

Expand All @@ -116,7 +116,7 @@ As mentioned in the previous section, the number of OHS servers running is depen
$ kubectl -n <namespace> patch deployment ohs-domain -p '{"spec": {"replicas": <replica count>}}'
```

where `<replica count>` is the number of OHS servers you want to run.

In the example below, replicas is dropped to `1` so only one OHS is running:

Expand Down Expand Up @@ -149,7 +149,6 @@ As mentioned in the previous section, the number of OHS servers running is depen
ohs-domain-d5b648bc5-2q8bw 0/1 Terminating 0 12m
ohs-domain-d5b648bc5-qvdjn 0/1 Terminating 0 12m
ohs-domain-d5b648bc5-vkp4s 1/1 Running 0 5h31m
```

Two pods now have a `STATUS` of `Terminating`. Keep executing the command until the pods have disappeared and you are left with the one OHS pod:
Expand Down
