
Commit fea2cab

WMS ID <>: 1840 (oracle-livelabs#192)
Quarterly review
1 parent 6bb457b commit fea2cab

27 files changed: +95 -93 lines

fpp/apply-terraform-stack/apply-terraform-stack.md (+16 -16)

@@ -24,62 +24,62 @@ For more information about Terraform and Resource Manager, please see the append
 2. Login to Oracle Cloud
 3. Select the correct region
 4. Open up the hamburger menu in the left hand corner. Select **Developer Services**. Under **Resource Manager** select **Stacks**.
-![](./images/select-stacks.png)
+![Stacks](./images/select-stacks.png)

 5. In the left pane, select the compartment where you would like to create the environment. It is recommended that you choose an empty compartment.
-![](./images/select-compartment.png)
+![Compartment](./images/select-compartment.png)

 6. Click on Create Stack
-![](./images/01-resmgr-compartment.png)
+![Create_stack](./images/01-resmgr-compartment.png)
 7. Select **My Configuration**, choose the **.ZIP FILE** button, click the **Browse** link and select the zip file (tf-fpp-1.2.1.zip) that you downloaded. Click **Select**.
-![](./images/02-resmgr-zip.png)
+![Browse and select_zip](./images/02-resmgr-zip.png)
 8. Review the information. You can leave the default values.
-![](./images/03-resmgr-stack-info.png)
+![Review information](./images/03-resmgr-stack-info.png)
 9. Click **Next**
 10. In the **Configure Variables** step, select the appropriate Availability Domain (depending on your region you may need to specify it or not)
 11. Paste your **SSH Public Key**, you will need the corresponding private key to access the FPP Server once the environment will be provisioned
 12. The variable **ResID** is Optional. You can use it to add a specific suffix to the Display Name of the Cloud Resources.

-![](./images/04-resmgr-stack-variables.png)
+![Check_Variables_given](./images/04-resmgr-stack-variables.png)
 13. Click **Next**
 14. Review the **Stack Information** and click **Create**
-![](./images/create-stack.png)
+![Create stack](./images/create-stack.png)

 Your Stack has now been created!
-![](./images/stack-created.png)
+![Stack created](./images/stack-created.png)

 ## Task 2: Terraform Plan (OPTIONAL)
 This is optional, you may skip directly to Step 3.

 When using Resource Manager to deploy an environment, execute a **Terraform plan** to verify the configuration.
 1. **[OPTIONAL]** Click **Plan** to validate your configuration.
-![](./images/plan-job.png)
+![Plan job](./images/plan-job.png)

 2. Select the **Plan** button in the bottom right of the screen. This takes about a minute, please be patient.
-![](./images/plan-job2.png)
+![Press Plan Job button](./images/plan-job2.png)

 ## Task 3: Terraform Apply
 When using Resource Manager to deploy an environment, execute a **Terraform Apply** to actually create the configuration. Let's do that now.

 1. At the top of your page, click on **Stack Details**. click the **Apply** button.

-![](./images/apply-job.png)
+![Apply job](./images/apply-job.png)

 2. Select the **Apply** button in the bottom right of the screen. This will create your cloud network, the db system and the compute instance. The job will take some time (90-120 minutes).
-![](./images/apply-job2.png)
+![Press Apply job on the side panel](./images/apply-job2.png)

 3. Once this job succeeds, you will get an apply complete notification from Terraform. Examine it closely.

-![](./images/05-resmgr-apply-succeeded.png)
+![Apply Job succeeded](./images/05-resmgr-apply-succeeded.png)

 4. In the left pane, click **Outputs**
 5. Click on **Show** next to **fppserver** to show the IP address of the FPP Server. Note it down, you will need it to access it as **opc** user with the private key that you have supplied when creating the stack.
-![](./images/06-resmgr-ip-addresses.png)
+![Get the IP addresses](./images/06-resmgr-ip-addresses.png)

 You may now [proceed to the next lab](#next) and connect to the server.

 ## Acknowledgements

 - **Author** - Ludovico Caldara
-- **Contributors** - Kamryn Vinson
-- **Last Updated By/Date** - Kamryn Vinson, April 2021
+- **Contributors** - Kamryn Vinson - Philippe Fierens
+- **Last Updated By/Date** - Philippe Fierens, March 2023

fpp/create-db/create-db.md (+9 -9)

@@ -35,8 +35,8 @@ In this lab, you will:
 -dbname fpplive1_site1 -datafileDestination DATA -dbtype SINGLE \
 -sudouser opc -sudopath /bin/sudo
 ```
-![](./images/fpp.png)
-![](./images/fpp2.png)
+![Output of database creation part 1](./images/fpp.png)
+![Output of database creation part 2](./images/fpp2.png)

 Notice that you have not specified the target name: the FPP server knows what is the target node (or cluster) because the working copy named `WC_db_previous_FPPC` has been provisioned there. This information is stored in the FPP metadata schema.

@@ -52,7 +52,7 @@ In this lab, you will:
 sudo su - oracle
 ```

-![](./images/opc.png)
+![Log in as opc](./images/opc.png)

 2. As user `oracle`, set the environment for the new database:

@@ -62,19 +62,19 @@ In this lab, you will:
 The Oracle base has been set to /u01/app/oracle
 [oracle@fppc ~]$
 ```
-![](./images/oraenv.png)
+![Set environment variables with oraenv](./images/oraenv.png)

 3. Check the status of the database with `srvctl` and `sqlplus`:

 ```
 srvctl status database -db fpplive1_site1 -verbose
 ```
-![](./images/check-status.png)
+![Check status of the database](./images/check-status.png)

 ```
 sqlplus / as sysdba
 ```
-![](./images/sql.png)
+![Log in with sqlplus](./images/sql.png)

 ```
 set lines 220
@@ -87,12 +87,12 @@ In this lab, you will:
 ```
 exit
 ```
-![](./images/exit.png)
+![Check the patches installed](./images/exit.png)

 The database is there, wasn't that easy? You may now [proceed to the next lab](#next) and try to patch it.

 ## Acknowledgements

 - **Author** - Ludovico Caldara
-- **Contributors** - Kamryn Vinson
-- **Last Updated By/Date** - Kamryn Vinson, May 2021
+- **Contributors** - Kamryn Vinson - Philippe Fierens
+- **Last Updated By/Date** - Philippe Fierens, March 2023
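The SQL*Plus query that follows `set lines 220` in this lab falls outside the hunks shown above. For reference, one way to list the patches applied to the new database is a query against the standard `dba_registry_sqlpatch` view; a minimal sketch, assuming the `oraenv` environment set in step 2 is still in effect:

```
# Run as the oracle user on the target, with the fpplive1_site1 environment set.
sqlplus -s / as sysdba <<'EOF'
set lines 220
col description format a60
select patch_id, action, status, description
  from dba_registry_sqlpatch
 order by action_time;
EOF
```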

fpp/create-db/images/exit.png (-41 KB)
fpp/create-db/images/sql.jpg (91.5 KB)
fpp/create-db/images/sql.png (15.8 KB)

fpp/db-home/db-home.md (+10 -10)

@@ -53,7 +53,7 @@ In this lab, you will:
 ```
 rhpctl query image
 ```
-![](./images/verify.png)
+![Check images in FPP repository](./images/verify.png)

 2. Then, provision the first DB image to the target. The opc password is always `FPPll##123` unless you have changed it (Est. 8-9 minutes):

@@ -63,8 +63,8 @@ In this lab, you will:
 -targetnode fppc -path /u01/app/oracle/product/19.0.0.0/WC_db_previous_FPPC \
 -sudouser opc -sudopath /bin/sudo
 ```
-![](./images/first-db.png)
-![](./images/first-db2.png)
+![Add working copy based on image db_previous output 1](./images/first-db.png)
+![Add working copy based on image db_previous output 2](./images/first-db2.png)

 ## Task 2: Provision the second workingcopy
 1. Provision the second DB image to the target (Est. 8-9 minutes), **please note the additional -groups** parameter passed here:
@@ -76,16 +76,16 @@ In this lab, you will:
 -groups OSDBA=dba,OSOPER=oper,OSBACKUP=backupdba,OSDG=dgdba,OSKM=kmdba,OSRAC=racdba \
 -sudouser opc -sudopath /bin/sudo
 ```
-![](./images/second-db.png)
-![](./images/second-db2.png)
+![Add working copy based on image db_current_oci output 1](./images/second-db.png)
+![Add working copy based on image db_current_oci output 2](./images/second-db2.png)

 ## Task 3: Verify the working copies
 1. On the server:

 ```
 rhpctl query workingcopy
 ```
-![](./images/verify-wc.png)
+![Query working copies](./images/verify-wc.png)

 2. On the client: password is always FPPll##123 unless you have changed it

@@ -96,17 +96,17 @@ In this lab, you will:
 ```
 sudo su - oracle
 ```
-![](./images/opc.png)
+![Login with opc user](./images/opc.png)

 ```
 cat /u01/app/oraInventory/ContentsXML/inventory.xml
 ```
-![](./images/inventory.png)
+![Check the contents of inventory.xml](./images/inventory.png)

 All the database homes are there! Now they are ready to run databases. You may now [proceed to the next lab](#next) and provision a database.

 ## Acknowledgements

 - **Author** - Ludovico Caldara
-- **Contributors** - Kamryn Vinson
-- **Last Updated By/Date** - Kamryn Vinson, May 2021
+- **Contributors** - Kamryn Vinson - Philippe Fierens
+- **Last Updated By/Date** - Philippe Fierens, March 2023
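After `rhpctl query workingcopy` confirms that both homes exist, an individual working copy can be inspected in more detail. A sketch, assuming the `-workingcopy` option listed by `rhpctl query workingcopy -help` and the working copy name used earlier in this lab:

```
# On the FPP Server, as the grid user: show the image, path, target node
# and owner recorded in the FPP metadata for a single working copy.
rhpctl query workingcopy -workingcopy WC_db_previous_FPPC
```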

fpp/environment/environment.md (+21 -21)

@@ -44,24 +44,24 @@ To create your LiveLabs reservation, you used a ssh key that you created on your
 2. Under **List Scope**, verify that you select the **same compartment** that you received in the reservation confirmation.
 3. To start the Oracle Cloud Shell, click the Cloud Shell icon at the top right of the page. *Note: Ensure before you click the console you have selected your assigned compartment or you will get an error.*

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshellopen.png " ")
+![Open Cloud Shell](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshellopen.png " ")

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshellsetup.png " ")
+![Setup Cloud Shell](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshellsetup.png " ")

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshell.png " ")
+![Cloud Shell](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/cloudshell.png " ")

 2. Click on the Cloud Shell hamburger icon and select **Upload** to upload your private key

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key.png " ")
+![Click on upload in cloud shell to upload private key](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key.png " ")

 3. To connect to the compute instance that was created for you, you will need to load your private key. This is the key that does *not* have a .pub file at the end. Locate that file on your machine and click **Upload** to process it.

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select.png " ")
+![Select the key](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select.png " ")

 4. Be patient while the key file uploads to your Cloud Shell directory
-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select-2.png " ")
+![Wait until key is uploaded](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select-2.png " ")

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select-3.png " ")
+![Key is uploaded](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-select-3.png " ")

 5. Once finished run the command below to check to see if your ssh key was uploaded. Create a .ssh directory, and move the ssh key into your .ssh directory

@@ -78,7 +78,7 @@ To create your LiveLabs reservation, you used a ssh key that you created on your
 cd ~
 ````

-![](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-finished.png " ")
+![Key is in the cloud shell](https://oracle-livelabs.github.io/common/labs/generate-ssh-key-cloud-shell/images/upload-key-finished.png " ")
 </if>
 ## Task 1: Connect to the FPP Server via SSH
 1. Connect to the FPP Server via SSH using the user `opc` and the private key that you have created during the LiveLab setup.
@@ -88,7 +88,7 @@ E.g. if you have a terminal with ssh available:
 ````
 ssh -i ~/.ssh/<sshkeyname> opc@<Your Compute Instance Public IP Address>
 ````
-![](./images/opc.png)
+![Logon with opc](./images/opc.png)

 If you are using other clients (Putty, MobaXTerm) and you are unsure about how to use private keys, please refer to their respective documentation.

@@ -97,7 +97,7 @@ E.g. if you have a terminal with ssh available:
 ```
 sudo su - grid
 ```
-![](./images/grid.png)
+![sudo to grid](./images/grid.png)


 ## Task 2: Verify the Clusterware status and rhpserver status
@@ -106,22 +106,22 @@ E.g. if you have a terminal with ssh available:
 ```
 crsctl stat res -t
 ```
-![](./images/crsctl.png)
-![](./images/crsctl2.png)
+![crsctl stat res -t output](./images/crsctl.png)
+![crsctl stat res -t output continued](./images/crsctl2.png)

 2. In the above output, you can already see that the FPP Server (rhpserver) is running, but you can double-check:

 ```
 srvctl status rhpserver
 ```
-![](./images/check-status.png)
+![srvctl status rhpserver output](./images/check-status.png)

 3. You can see how the FPP Server has been configured:

 ```
 srvctl config rhpserver
 ```
-![](./images/server-configured.png)
+![srvctl config rhpserver output](./images/server-configured.png)

 In particular, the *Transfer port range* has been customized from the default so that is uses a fixed port range (by default it is dynamic and would require permissive firewall rules).

@@ -131,20 +131,20 @@ E.g. if you have a terminal with ssh available:
 ```
 rhpctl -help
 ```
-![](./images/help.png)
+![rhpctl -help output](./images/help.png)


 ```
 rhpctl import -help
 ```
-![](./images/import-help.png)
+![rhpctl import -help output](./images/import-help.png)


 ```
 rhpctl import image -help
 ```
-![](./images/import-image-help.png)
-![](./images/import-image-help2.png)
+![rhpctl import image -help output](./images/import-image-help.png)
+![rhpctl import image -help output continued](./images/import-image-help2.png)


 ## Task 4: Find the rhpserver.log
@@ -153,7 +153,7 @@ E.g. if you have a terminal with ssh available:
 ```
 ls -l /u01/app/grid/crsdata/fpps01/rhp/
 ```
-![](./images/fpp-logfiles.png)
+![File location of logfile](./images/fpp-logfiles.png)

 2. The main log file is `rhpserver.log.0`, you can use it during the workshop to verify what happens. It is verbose, but useful whenever you encounter any problems.

@@ -169,5 +169,5 @@ You have now successfully connected and verified the environment. You may now [p
 ## Acknowledgements

 - **Author** - Ludovico Caldara
-- **Contributors** - Kamryn Vinson
-- **Last Updated By/Date** - Kamryn Vinson, May 2021
+- **Contributors** - Kamryn Vinson - Philippe Fierens
+- **Last Updated By/Date** - Philippe Fierens, March 2023
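The Cloud Shell key handling and the Task 1 connection steps in this lab condense into a few shell commands. A sketch, assuming the uploaded private key is named `mykey` (substitute your own file name) and `<fpp_server_ip>` is the fppserver address from the Terraform outputs:

```
# In Cloud Shell: move the uploaded private key into ~/.ssh and restrict its permissions
mkdir -p ~/.ssh
mv ~/mykey ~/.ssh/
chmod 600 ~/.ssh/mykey

# Task 1: connect to the FPP Server as opc, then switch to the grid user
ssh -i ~/.ssh/mykey opc@<fpp_server_ip>
sudo su - grid
```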

fpp/gi-home/gi-home.md (+11 -11)

@@ -8,8 +8,8 @@ Estimated lab time: 15 minutes
 ```
 rhpctl add workingcopy -help GRIDHOMEPROV
 ```
-![](./images/workingcopy.png)
-![](./images/workingcopy2.png)
+![Working copy command options](./images/workingcopy.png)
+![Working copy command options continued](./images/workingcopy2.png)

 ### Software Only provisioning
 If the target server already has a GI stack (either Oracle Restart or a full GI stack), then the new working copy is provisioned as Software Only: the existing stack is untouched.
@@ -62,7 +62,7 @@ In this lab, you will:
 ```
 wget https://github.com/oracle-livelabs/database/raw/main/fpp/gi-home/files/fppc.rsp
 ```
-![](./images/download.png)
+![Output of download of fppc.rsp file](./images/download.png)

 ## Task 2: Provision the Restart environment on a new target using the response file
 1. On the FPP Server, run the following command to provision and configure the GI home on the target. The password is `FPPll##123`. (Est. 8 minutes)
@@ -73,9 +73,9 @@ In this lab, you will:
 -path /u01/app/grid/WC_gi_current_FPPC -user oracle -oraclebase /u01/app/oracle \
 -targetnode fppc -sudouser opc -sudopath /bin/sudo -ignoreprereq
 ```
-![](./images/provision.png)
-![](./images/provision2.png)
-![](./images/provision3.png)
+![Output of add workingcopy command part 1](./images/provision.png)
+![Output of add workingcopy command part 2](./images/provision2.png)
+![Output of add workingcopy command part 3](./images/provision3.png)

 ## Task 3: Connect to the target and verify the Restart Environment
 1. From either the FPP Server or your SSH client, connect as `opc` to the FPP target public IP address and become `oracle`. The password is FPPll##123
@@ -87,7 +87,7 @@ In this lab, you will:
 ```
 sudo su - oracle
 ```
-![](./images/opc.png)
+![Login with opc user](./images/opc.png)

 2. Set the environment.

@@ -97,19 +97,19 @@ In this lab, you will:
 ORACLE_HOME = [/home/oracle] ? /u01/app/grid/WC_gi_current_FPPC
 The Oracle base has been set to /u01/app/oracle
 ```
-![](./images/oraenv.png)
+![Set environment variables with oraenv](./images/oraenv.png)

 3. Verify that Restart is up and running:

 ```
 crsctl stat res -t
 ```
-![](./images/crsctl.png)
+![Show the output of crsctl stat res -t](./images/crsctl.png)

 Congratulations! You have successfully configured an Oracle Restart environment with a single command. Easy, huh? You may now [proceed to the next lab](#next).

 ## Acknowledgements

 - **Author** - Ludovico Caldara
-- **Contributors** - Kamryn Vinson
-- **Last Updated By/Date** - Kamryn Vinson, May 2021
+- **Contributors** - Kamryn Vinson - Philippe Fierens
+- **Last Updated By/Date** - Philippe Fierens, March 2023
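Two optional checks around this lab's flow, as a hedged sketch: reviewing the downloaded response file before provisioning, and confirming the Oracle Restart (HAS) stack on the target after Task 3 in addition to `crsctl stat res -t`. The ASM check assumes ASM is configured on the target (the DATA disk group used in the database lab):

```
# On the FPP Server: show the non-comment settings in the downloaded response file
grep -vE '^[[:space:]]*(#|$)' fppc.rsp

# On the target, as oracle with the WC_gi_current_FPPC environment set:
crsctl check has     # Oracle High Availability Services status
srvctl status asm    # ASM resource managed by the new Restart home
```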
