diff --git a/23cfree/change-pw/change-pw-sql.md b/23cfree/change-pw/change-pw-sql.md
index f73b3bd31..2f54c103e 100644
--- a/23cfree/change-pw/change-pw-sql.md
+++ b/23cfree/change-pw/change-pw-sql.md
@@ -2,13 +2,14 @@
## Introduction
-Resetting the password for the hol23c user in the Oracle Database and starting up ORDS, which will be needed to start up other applications.
+In this lab you will reset the password for the hol23c user in the Oracle Database.
Estimated Time: 5 minutes
### Objectives
In this lab, you will:
+* Open SQL Plus
* Set the password for the hol23c user
### Prerequisites
@@ -17,7 +18,7 @@ This lab assumes you have:
* Oracle Database 23c Free Developer Release
* A terminal or console access to the database
-## Task 1: Setting database user password and starting ORDS
+## Task 1: Setting database user password
1. The first step is to get to a command prompt. If you are running in a Sandbox environment and need to open a terminal, click on Activities and then Terminal.
@@ -31,9 +32,6 @@ This lab assumes you have:
[FREE:oracle@hol23cfdr:~]$
```
-
-
-
3. Next connect to your database.
```
[FREE:oracle@hol23cfdr:~]$ sqlplus / as sysdba
@@ -64,10 +62,10 @@ This lab assumes you have:
5. To change the password for the user hol23c use the "alter user \[username\] identified by \[new password\]" command. The syntax below is for the hol23c user; make sure to replace new\_password\_here with your new password. Throughout this workshop we will use the Welcome123 password.
```
- alter user hol23c identified by [new_password_here];
+ alter user hol23c identified by [new_password_here];
```
```
- SQL> alter user hol23c identified by Welcome123;
+ SQL> alter user hol23c identified by Welcome123;
User altered.
@@ -75,7 +73,7 @@ This lab assumes you have:
```
![Change password](images/change-password1.png " ")
-6. Once the password has been changed you can exit SQL Plus.
+6. Once the password has been changed, you can exit SQL Plus as sysdba.
```
SQL> exit
@@ -86,18 +84,6 @@ Version 23.2.0.0.0
![Exit](images/exit1.png " ")
-7. To start ORDS, from the same command prompt use the following command. The output of [1] 204454 is just an example, your output could be different.
-
- ```
- [FREE:oracle@hol23cfdr:~]$ ords serve > /dev/null 2>&1 &
-[1] 204454
-[FREE:oracle@hol23cfdr:~]$
- ```
-
- >**NOTE:** You must leave this terminal open and the process running. Closing either will stop ORDS from running, and you will not be able to access other applications that are used in this lab.
-
- ![Start ORDS](images/ords1.png " ")
-
You may now **proceed to the next lab**.
## Learn More
@@ -107,4 +93,4 @@ You may now **proceed to the next lab**.
## Acknowledgements
* **Author** - Kaylien Phan, William Masdon
* **Contributors** - David Start
-* **Last Updated By/Date** - Hope Fisher, Program Manager, June 2023
+* **Last Updated By/Date** - Hope Fisher, Program Manager, Oct 2023
diff --git a/23cfree/introduction/intro-js-generic.md b/23cfree/introduction/intro-js-generic.md
index 0f1c525b1..0cb397bc2 100644
--- a/23cfree/introduction/intro-js-generic.md
+++ b/23cfree/introduction/intro-js-generic.md
@@ -8,7 +8,7 @@ In addition to PL/SQL and Java it is now possible to leverage the Smart DB parad
This workshop introduces JavaScript in Oracle Database 23c on Linux x86-64 and walks you through all the steps necessary to be productive with the new language. It complements [Oracle Database JavaScript Developer's Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/mlejs/index.html). You will use both command-line tools as well as a graphical user interface when creating code.
-> **Note:** There is a strong focus on command line tools for a reason: many software projects rely on automation (keywords Continuous Integration/Continuous Delivery). Graphical user interfaces don't work in this workflow, however anything you can control on the command line does. This workshop aims at preparing you for working with Continuous Integration (CI) pipelines as much as possible. Note though that a wealth of IDEs exists for writing JavaScript code. Database Actions has strong support for JavaScript in Oracle Database 23c Free-Developer Release and you will see it used a lot.
+> **Note:** There is a strong focus on command line tools for a reason: many software projects rely on automation (think Continuous Integration/Continuous Delivery). Graphical user interfaces don't work in this workflow; anything you can control on the command line does. This workshop aims to prepare you for working with Continuous Integration (CI) pipelines as much as possible. Note though that a wealth of Integrated Development Environments (IDEs) exists for writing JavaScript code. Database Actions has strong support for JavaScript in Oracle Database 23c Free and you will see it used a lot.
Estimated Workshop Time: 1 hour 30 minutes
@@ -34,4 +34,4 @@ You may now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 17-NOV-2023
diff --git a/23cfree/js-generic-functions/functions.md b/23cfree/js-generic-functions/functions.md
index 98a278d72..82f7ed4bc 100644
--- a/23cfree/js-generic-functions/functions.md
+++ b/23cfree/js-generic-functions/functions.md
@@ -6,8 +6,6 @@ After creating JavaScript modules and environments in the previous lab you will
Estimated Lab Time: 10 minutes
-[](videohub:1_ffqhyknx)
-
### Objectives
In this lab, you will:
@@ -21,7 +19,7 @@ In this lab, you will:
This lab assumes you have:
-- An Oracle Database 23c Free - Developer Release environment available to use
+- An Oracle Database 23c Free environment available to use
- Created the `emily` account as per Lab 1
- Completed Lab 2 where you created a number of JavaScript modules in the database
@@ -84,43 +82,48 @@ In this task you will learn how to create a call specification based on the MLE
```
LINE TEXT
- ----- -----------------------------------------------------------------------------------
- 1 function string2obj(inputString) {
- 2 if ( inputString === undefined ) {
- 3 throw `must provide a string in the form of key1=value1;...;keyN=valueN`;
- 4 }
- 5 let myObject = {};
- 6 if ( inputString.length === 0 ) {
- 7 return myObject;
- 8 }
- 9 const kvPairs = inputString.split(";");
- 10 kvPairs.forEach( pair => {
- 11 const tuple = pair.split("=");
- 12 if ( tuple.length === 1 ) {
- 13 tuple[1] = false;
- 14 } else if ( tuple.length != 2 ) {
- 15 throw "parse error: you need to use exactly one '=' between " +
- 16 "key and value and not use '=' in either key or value";
- 17 }
- 18 myObject[tuple[0]] = tuple[1];
- 19 });
- 20 return myObject;
- 21 }
- 22 /**
- 23 * convert a JavaScript object to a string
- 24 * @param {object} inputObject - the object to transform to a string
- 25 * @returns {string}
- 26 */
- 27 function obj2String(inputObject) {
- 28 if ( typeof inputObject != 'object' ) {
- 29 throw "inputObject isn't an object";
- 30 }
- 31 return JSON.stringify(inputObject);
- 32 }
- 33 export { string2obj, obj2String }
+ ----- ------------------------------------------------------------------------------------------
+ 1 /**
+ 2 * convert a delimited string into key-value pairs and return JSON
+ 3 * @param {string} inputString - the input string to be converted
+ 4 * @returns {JSON}
+ 5 */
+ 6 function string2obj(inputString) {
+ 7 if ( inputString === undefined ) {
+ 8 throw `must provide a string in the form of key1=value1;...;keyN=valueN`;
+ 9 }
+ 10 let myObject = {};
+ 11 if ( inputString.length === 0 ) {
+ 12 return myObject;
+ 13 }
+ 14 const kvPairs = inputString.split(";");
+ 15 kvPairs.forEach( pair => {
+ 16 const tuple = pair.split("=");
+ 17 if ( tuple.length === 1 ) {
+ 18 tuple[1] = false;
+ 19 } else if ( tuple.length != 2 ) {
+ 20 throw "parse error: you need to use exactly one '=' between " +
+ 21 "key and value and not use '=' in either key or value";
+ 22 }
+ 23 myObject[tuple[0]] = tuple[1];
+ 24 });
+ 25 return myObject;
+ 26 }
+ 27 /**
+ 28 * convert a JavaScript object to a string
+ 29 * @param {object} inputObject - the object to transform to a string
+ 30 * @returns {string}
+ 31 */
+ 32 function obj2String(inputObject) {
+ 33 if ( typeof inputObject != 'object' ) {
+ 34 throw "inputObject isn't an object";
+ 35 }
+ 36 return JSON.stringify(inputObject);
+ 37 }
+ 38 export { string2obj, obj2String }
```
- You can see in line 33 that both functions declared in the module are exported.
+ You can see in line 38 that both functions declared in the module are exported.
If you prefer a graphical user interface, log into Database Actions and navigate to MLE JS from the Launchpad. Right-click on `HELPER_MODULE_INLINE` and select Edit from the context menu. This brings up the source code for the module:
@@ -128,7 +131,7 @@ In this task you will learn how to create a call specification based on the MLE
2. Create call specification for `helper_module_inline`
- You can see from the output above that both functions in the module are exported (line 35). This allows us to create call specifications. Before you go ahead and create one you need to decide whether you need a PL/SQL function or procedure. In the above case both JavaScript functions return data:
+ You can see from the output above that both functions in the module are exported (line 38). This allows us to create call specifications. Before you go ahead and create one you need to decide whether you need a PL/SQL function or procedure. In the above case both JavaScript functions return data:
- `string2obj(string)` returns a JavaScript object
- `object2String(object)` returns a string
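Since both functions are plain JavaScript, you can sanity-check their behavior in any engine such as Node.js before writing the call specifications. Below is a minimal, database-free sketch reproducing the two exported functions from the listing above:

```javascript
// Database-free reproduction of the module's two exported functions,
// matching the listing above; runnable in Node.js for quick experiments.
function string2obj(inputString) {
    if ( inputString === undefined ) {
        throw `must provide a string in the form of key1=value1;...;keyN=valueN`;
    }
    let myObject = {};
    if ( inputString.length === 0 ) {
        return myObject;
    }
    const kvPairs = inputString.split(";");
    kvPairs.forEach( pair => {
        const tuple = pair.split("=");
        if ( tuple.length === 1 ) {
            tuple[1] = false;
        } else if ( tuple.length != 2 ) {
            throw "parse error: you need to use exactly one '=' between " +
                  "key and value and not use '=' in either key or value";
        }
        myObject[tuple[0]] = tuple[1];
    });
    return myObject;
}

function obj2String(inputObject) {
    if ( typeof inputObject != 'object' ) {
        throw "inputObject isn't an object";
    }
    return JSON.stringify(inputObject);
}

// string2obj returns an object and obj2String a string - both therefore
// map naturally to PL/SQL *functions* in their call specifications.
console.log(obj2String(string2obj("ename=JONES;job=MANAGER")));
// {"ename":"JONES","job":"MANAGER"}
```

Running the snippet also shows the edge cases: a key without an `=` is stored with the value `false`, and an empty string yields an empty object.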
@@ -220,9 +223,13 @@ Creating call specifications for functions exported by the `business_logic` modu
```sql
create mle module business_logic language javascript as
-
import { string2obj } from 'helpers';
-
+/**
+ * A simple function accepting a set of key-value pairs, translates it to JSON before
+ * inserting the order in the database.
+ * @param {string} orderData a semi-colon separated string containing the order details
+ * @returns {boolean} true if the order could be processed successfully, false otherwise
+ */
export function processOrder(orderData) {
const orderDataJSON = string2obj(orderData);
const result = session.execute("...");
@@ -255,9 +262,10 @@ Before you can create a call specification for `processOrder()` you must ensure
You should see the following output:
```
- ENV_NAME IMPORT_NAME MODULE_NAME
- -------------------- ------------------------------ ------------------------------
- BUSINESS_MODULE_ENV helpers HELPER_MODULE_INLINE
+ ENV_NAME IMPORT_NAME MODULE_NAME
+ ------------------------------ ------------------------------ ------------------------------
+ BUSINESS_MODULE_ENV BUSINESS_LOGIC BUSINESS_LOGIC
+ BUSINESS_MODULE_ENV helpers HELPER_MODULE_INLINE
```
2. Create the call specification
@@ -454,7 +462,7 @@ In scenarios where you don't need the full flexibility of JavaScript modules and
## Task 6: View dictionary information about call specifications
-The data dictionary has been enhanced in Oracle Database 23c Free-Developer Release to provide information about call specifications. A new view, named `USER_MLE_PROCEDURES` provides the mapping between PL/SQL code units and JavaScript. There are of course corresponding _ALL/DBA/CDB_ views as well.
+The data dictionary has been enhanced in Oracle Database 23c Free to provide information about call specifications. A new view named `USER_MLE_PROCEDURES` provides the mapping between PL/SQL code units and JavaScript. There are of course corresponding _ALL/DBA/CDB_ views as well.
1. Query `USER_MLE_PROCEDURES` to learn more about the existing call specifications
@@ -490,6 +498,7 @@ The data dictionary has been enhanced in Oracle Database 23c Free-Developer Rele
HELPER_PKG STRING2OBJ HELPER_MODULE_INLINE
ISEMAIL VALIDATOR
STRING2OBJ
+ STRING_TO_JSON HELPER_MODULE_BFILE
```
Due to the way the view is defined, you will sometimes see both `object_name` and `procedure_name` populated, while sometimes just `object_name` is populated and `procedure_name` is null.
@@ -510,4 +519,4 @@ You may now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 28-NOV-2023
diff --git a/23cfree/js-generic-functions/images/sdw-call-spec-env-details.jpg b/23cfree/js-generic-functions/images/sdw-call-spec-env-details.jpg
index 9b3b6a8df..ac83d1ca4 100644
Binary files a/23cfree/js-generic-functions/images/sdw-call-spec-env-details.jpg and b/23cfree/js-generic-functions/images/sdw-call-spec-env-details.jpg differ
diff --git a/23cfree/js-generic-functions/images/sdw-simple-call-spec-details.jpg b/23cfree/js-generic-functions/images/sdw-simple-call-spec-details.jpg
index 7859d3d7a..7b885eb8f 100644
Binary files a/23cfree/js-generic-functions/images/sdw-simple-call-spec-details.jpg and b/23cfree/js-generic-functions/images/sdw-simple-call-spec-details.jpg differ
diff --git a/23cfree/js-generic-functions/images/sdw-simple-call-spec.jpg b/23cfree/js-generic-functions/images/sdw-simple-call-spec.jpg
index c5b2df548..1be0c436b 100644
Binary files a/23cfree/js-generic-functions/images/sdw-simple-call-spec.jpg and b/23cfree/js-generic-functions/images/sdw-simple-call-spec.jpg differ
diff --git a/23cfree/js-generic-functions/images/sdw-source-code.jpg b/23cfree/js-generic-functions/images/sdw-source-code.jpg
index 77bb51f03..9846c3ab7 100644
Binary files a/23cfree/js-generic-functions/images/sdw-source-code.jpg and b/23cfree/js-generic-functions/images/sdw-source-code.jpg differ
diff --git a/23cfree/js-generic-get-started-example/get-started-example.md b/23cfree/js-generic-get-started-example/get-started-example.md
index aa0e17139..bb78dded1 100644
--- a/23cfree/js-generic-get-started-example/get-started-example.md
+++ b/23cfree/js-generic-get-started-example/get-started-example.md
@@ -8,8 +8,6 @@ Before jumping into the description of JavaScript features and all their details
Estimated Time: 10 minutes
-[](videohub:1_d307bfag)
-
### Objectives
In this lab, you will:
@@ -24,8 +22,8 @@ In this lab, you will:
This lab assumes you have:
-- Oracle Database 23c Free - Developer Release
-- You have a working noVNC environment or comparable setup
+- Access to an Oracle Database 23c Free instance
+- Sufficient privileges to create a database user
## Task 1: Create a schema to store the JavaScript module
@@ -79,7 +77,7 @@ All the steps in this lab can either be completed in `sqlplus` or `sqlcl`. The i
In this step you prepare the creation of the developer account. The instructions in the following snippet create a new account, named `emily`. It will be used to store JavaScript modules in the database.
- Save the snippet in a file, for example `${HOME}/setup.sql` and execute it in `sqlcl` or `sqlplus`. You can use graphical text editors installed on the system via the Activities button or the command line.
+ Save the snippet in a file, for example `${HOME}/hol23c/setup.sql` and execute it in `sqlcl` or `sqlplus`. You can use graphical text editors installed on the system via the Activities button or the command line.
```sql
set echo on
@@ -104,7 +102,7 @@ All the steps in this lab can either be completed in `sqlplus` or `sqlcl`. The i
You should still be connected to `freepdb1` as `SYS` as per the previous step. If not, connect to `freepdb1` as `SYS` before executing the following command:
```sql
- start ${HOME}/setup.sql
+ start ${HOME}/hol23c/setup.sql
```
Here is some sample output of an execution:
@@ -201,7 +199,7 @@ curl -Lo /home/oracle/hol23c/validator.min.js 'https://objectstorage.us-ashburn-
## Task 3: Create the JavaScript module in the database
-JavaScript in Oracle Database 23c Free - Developer Release allows you to load JavaScript modules using the `BFILE` clause, specifying a directory object and file name. You prepared for the `create mle module` command in the previous step, now it's time to execute it:
+JavaScript in Oracle Database 23c Free allows you to load JavaScript modules using the `BFILE` clause, specifying a directory object and file name. You prepared for the `create mle module` command in the previous step, now it's time to execute it:
1. Connect to the database as the `emily` user:
@@ -244,7 +242,7 @@ JavaScript in Oracle Database 23c Free - Developer Release allows you to load Ja
VALIDATOR JAVASCRIPT
```
-You can read more about creating JavaScript modules in Oracle Database 23c Free - Developer release in chapter 2 of the JavaScript Developer's Guide.
+You can read more about creating JavaScript modules in Oracle Database 23c Free in chapter 2 of the JavaScript Developer's Guide.
## Task 4: Expose the module's functionality to PL/SQL and SQL
@@ -313,4 +311,4 @@ You may now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 17-NOV-2023
diff --git a/23cfree/js-generic-json/json.md b/23cfree/js-generic-json/json.md
index c1d983a7f..a9752ac3f 100644
--- a/23cfree/js-generic-json/json.md
+++ b/23cfree/js-generic-json/json.md
@@ -2,23 +2,21 @@
## Introduction
-JSON, short for JavaScript Object Notation, has become the de-facto standard data interchange format and is a very popular for storing data. Oracle's Converged Database has supported JSON for many years, adding functionality with each release on top of an already impressive base. Oracle Database 23c Free - Developer Release is no exception.
+JSON, short for JavaScript Object Notation, has become the de-facto standard data interchange format and is very popular for storing data. Oracle's Converged Database has supported JSON for many years, adding functionality with each release on top of an already impressive base. Oracle Database 23c Free is no exception.
-You already got a glimpse of JSON in `processOrder()`, part of the `business_logic` module. This function is called with a string argument. The string is made up of a series of key-value pairs, each separated by a semi-colon each. The input parameter was subsequently translated to a JSON object and used in an insert statement showcasing the `json_table` function.
+You already got a glimpse of JSON in `processOrder()`, part of the `business_logic` module. This function is called with a string argument. The string is made up of a series of key-value pairs, each separated by a semi-colon. The input parameter is subsequently translated to a JSON object and used in an insert statement showcasing the `json_table` function.
> **Note:** You could have stored the JSON document in a JSON column in the table directly, but then you wouldn't have seen how easy it is to convert JSON to a relational format
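To see what this conversion looks like, here is a small, database-free sketch of the key-value parsing described above (the sample order string is made up for illustration):

```javascript
// Sketch of the key=value;... string to JSON conversion described above.
// The sample order string is invented for this illustration.
const orderData = "order_id=1;customer=JONES;items=3";

const order = {};
for (const pair of orderData.split(";")) {
    const [key, value] = pair.split("=");
    order[key] = value;
}

console.log(JSON.stringify(order));
// {"order_id":"1","customer":"JONES","items":"3"}
```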
-In this lab you will learn about an alternative way of working with JSON, based on the document object model.
+In this lab you will learn about an alternative way of working with JSON, based on the Simple Document Access Model (SODA).
Estimated Lab Time: 10 minutes
-[](videohub:1_hr7cf8kj)
-
### Objectives
In this lab, you will:
-- Understand how to work with JSON using the document model (Simple Oracle Document Access (SODA))
+- Understand how to work with JSON using the document model (SODA)
- Create SODA collections
- Add documents to a collection
- Search for a specific document in a collection
@@ -30,7 +28,7 @@ In this lab, you will:
This lab assumes you have:
-- An Oracle Database 23c Free - Developer Release environment available to use
+- An Oracle Database 23c Free environment available to use
- Created the `emily` account as per Lab 1
## Task 1: Create a database session
@@ -345,7 +343,7 @@ The previous lab (concerning the JavaScript SQL driver) introduced a major diffe
```
- The procedure should complete successfully. After the prompt is returned a new SODA collection will have been created. Under the covers Oracle will create a table named `myCollection` containing the JSON document and some metadata.
+ The procedure should complete successfully. After the prompt is returned a new SODA collection will have been created. Under the covers Oracle will create a table named `myCollection`, eventually containing the JSON document and some metadata.
2. Add documents to a collection
@@ -486,7 +484,7 @@ The previous lab (concerning the JavaScript SQL driver) introduced a major diffe
```sql
- col result for a30
+ col result for a90
select
json_serialize(
soda_demo_pkg.find_emp_by_ename('myCollection', 'JONES')
@@ -503,18 +501,19 @@ The previous lab (concerning the JavaScript SQL driver) introduced a major diffe
4 pretty) as result;
RESULT
- ------------------------------
+ ----------------------------------------------
[
- {
- "empno" : 7566,
- "ename" : "JONES",
- "job" : "MANAGER",
- "mgr" : 7839,
- "hiredate" : "1981-04-02",
- "sal" : 2975,
- "comm" : 0,
- "deptno" : 20
- }
+ {
+ "_id" : "6565FB0BAC2A5D0167125F2D",
+ "comm" : 0,
+ "deptno" : 20,
+ "empno" : 7566,
+ "ename" : "JONES",
+ "hiredate" : "1981-04-02",
+ "job" : "MANAGER",
+ "mgr" : 7839,
+ "sal" : 2975
+ }
]
```
@@ -672,7 +671,7 @@ The previous lab (concerning the JavaScript SQL driver) introduced a major diffe
NUMBER_OF_EMPLOYEES_BEFORE
--------------------------
- 2
+ 2
SQL> begin
2 soda_demo_pkg.delete_document('myCollection', 7566);
@@ -742,4 +741,4 @@ You may now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 02-MAY-2023
+- **Last Updated By/Date** - Martin Bach 28-NOV-2023
diff --git a/23cfree/js-generic-modules-environments/images/sdw-login.jpg b/23cfree/js-generic-modules-environments/images/sdw-login.jpg
index e988def3c..c8f54afab 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-login.jpg and b/23cfree/js-generic-modules-environments/images/sdw-login.jpg differ
diff --git a/23cfree/js-generic-modules-environments/images/sdw-main-page.jpg b/23cfree/js-generic-modules-environments/images/sdw-main-page.jpg
index e829d4850..a5f780c41 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-main-page.jpg and b/23cfree/js-generic-modules-environments/images/sdw-main-page.jpg differ
diff --git a/23cfree/js-generic-modules-environments/images/sdw-mle-associate-env-with-module.jpg b/23cfree/js-generic-modules-environments/images/sdw-mle-associate-env-with-module.jpg
index 55cd5ee23..3b5f5231f 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-mle-associate-env-with-module.jpg and b/23cfree/js-generic-modules-environments/images/sdw-mle-associate-env-with-module.jpg differ
diff --git a/23cfree/js-generic-modules-environments/images/sdw-mle-env-editor.jpg b/23cfree/js-generic-modules-environments/images/sdw-mle-env-editor.jpg
index a5a4b2d51..f056a137f 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-mle-env-editor.jpg and b/23cfree/js-generic-modules-environments/images/sdw-mle-env-editor.jpg differ
diff --git a/23cfree/js-generic-modules-environments/images/sdw-mle-module-dependencies.jpg b/23cfree/js-generic-modules-environments/images/sdw-mle-module-dependencies.jpg
index 790ecc5b9..e54f6a248 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-mle-module-dependencies.jpg and b/23cfree/js-generic-modules-environments/images/sdw-mle-module-dependencies.jpg differ
diff --git a/23cfree/js-generic-modules-environments/images/sdw-mle-module-editor.jpg b/23cfree/js-generic-modules-environments/images/sdw-mle-module-editor.jpg
index 4d42e3689..bf2b8c33b 100644
Binary files a/23cfree/js-generic-modules-environments/images/sdw-mle-module-editor.jpg and b/23cfree/js-generic-modules-environments/images/sdw-mle-module-editor.jpg differ
diff --git a/23cfree/js-generic-modules-environments/modules-environments.md b/23cfree/js-generic-modules-environments/modules-environments.md
index e00b8826b..68ac6afad 100644
--- a/23cfree/js-generic-modules-environments/modules-environments.md
+++ b/23cfree/js-generic-modules-environments/modules-environments.md
@@ -2,12 +2,10 @@
## Introduction
-After the previous lab introduced JavaScript in Oracle Database 23c Free - Developer Release you will now learn more about Multilingual Engine (MLE) modules and environments. Modules are similar in concept to PL/SQL packages as they allow you to logically group code in a single namespace. Just as with PL/SQL you can create public and private functions. MLE modules contain JavaScript code expressed in terms of ECMAScript modules.
+After the previous lab introduced JavaScript in Oracle Database 23c Free you will now learn more about Multilingual Engine (MLE) modules and environments. Modules are similar in concept to PL/SQL packages as they allow you to logically group code in a single namespace. Just as with PL/SQL you can create public and private functions. MLE modules contain JavaScript code expressed in terms of ECMAScript modules.
Estimated Lab Time: 10 minutes
-[](videohub:1_n99vou1t)
-
### Objectives
In this lab, you will:
@@ -21,7 +19,7 @@ In this lab, you will:
This lab assumes you have:
-- An Oracle Database 23c Free - Developer Release environment available to use
+- An Oracle Database 23c Free environment available to use
- Created the `emily` account as per Lab 1
## Task 1: Create a database session
@@ -34,7 +32,7 @@ Connect to the pre-created Pluggable Database (PDB) `freepdb1` using the same cr
## Task 2: Create JavaScript modules
-A JavaScript module is a unit of MLE's language code stored in the database as a schema object. Storing code within the database is one of the main benefits of using JavaScript in Oracle Database 23c Free-Developer Release: rather than having to manage a fleet of application servers each with their own copy of the application, the database takes care of this for you.
+A JavaScript module is a unit of MLE's language code stored in the database as a schema object. Storing code within the database is one of the main benefits of using JavaScript in Oracle Database 23c: rather than having to manage a fleet of application servers each with their own copy of the application, the database takes care of this for you.
In addition, Data Guard replication ensures that the exact same code is present in both production and all physical standby databases. This way configuration drift, a common problem bound to occur when invoking the disaster recovery location, can be mitigated.
@@ -185,7 +183,52 @@ Database Actions is a web-based interface that uses Oracle REST Data Services (O
![Database Actions main screen](images/sdw-main-page.jpg)
- With the editor (not Snippet) pane open, paste the JavaScript portion of the code you used for `helper_module_inline` into the editor pane, assign a name to the module (`HELPER_MODULE_ORDS`) and use the disk icon to persist the module in the database.
+ With the editor (not Snippet) pane open, paste the following JavaScript code into the editor pane, assign a name to the module (`HELPER_MODULE_ORDS`) and use the disk icon to persist the module in the database.
+
+ ```javascript
+ /**
+ * convert a delimited string into key-value pairs and return JSON
+ * @param {string} inputString - the input string to be converted
+ * @returns {JSON}
+ */
+ function string2obj(inputString) {
+ if ( inputString === undefined ) {
+ throw `must provide a string in the form of key1=value1;...;keyN=valueN`;
+ }
+ let myObject = {};
+ if ( inputString.length === 0 ) {
+ return myObject;
+ }
+ const kvPairs = inputString.split(";");
+ kvPairs.forEach( pair => {
+ const tuple = pair.split("=");
+ if ( tuple.length === 1 ) {
+ tuple[1] = false;
+ } else if ( tuple.length != 2 ) {
+ throw "parse error: you need to use exactly one '=' " +
+ " between key and value and not use '=' in either key or value";
+ }
+ myObject[tuple[0]] = tuple[1];
+ });
+ return myObject;
+ }
+
+ /**
+ * convert a JavaScript object to a string
+ * @param {object} inputObject - the object to transform to a string
+ * @returns {string}
+ */
+ function obj2String(inputObject) {
+ if ( typeof inputObject != 'object' ) {
+ throw "inputObject isn't an object";
+ }
+ return JSON.stringify(inputObject);
+ }
+
+ export { string2obj, obj2String }
+ ```
+
+ This is what it should look like:
![Database Actions module editor](images/sdw-mle-module-editor.jpg)
@@ -195,7 +238,7 @@ Database Actions is a web-based interface that uses Oracle REST Data Services (O
1. Reference existing modules
- The more modular your code, the more reusable it is. JavaScript modules in Oracle Database 23c Free-Developer Release can reference other modules easily, allowing developers to follow a divide and conquer approach designing applications. The code shown in the following snippet makes use of the module `helper_module_inline` created earlier to convert a string representing an order before inserting it into a table.
+ The more modular your code, the more reusable it is. JavaScript modules in Oracle Database 23c can reference other modules easily, allowing developers to follow a divide-and-conquer approach when designing applications. The code shown later in this lab makes use of the module `helper_module_inline` created earlier to convert a string representing an order before inserting it into a table.
> **Note**: Lab 4 will explain the use of the JavaScript SQL Driver in more detail.
@@ -224,7 +267,12 @@ Database Actions is a web-based interface that uses Oracle REST Data Services (O
create mle module business_logic language javascript as
import { string2obj } from 'helpers';
-
+ /**
+ * A simple function accepting a set of key-value pairs, translates it to JSON before
+ * inserting the order in the database.
+ * @param {string} orderData a semi-colon separated string containing the order details
+ * @returns {boolean} true if the order could be processed successfully, false otherwise
+ */
export function processOrder(orderData) {
const orderDataJSON = string2obj(orderData);
@@ -274,7 +322,7 @@ Database Actions is a web-based interface that uses Oracle REST Data Services (O
The `business_logic` module introduces a new concept: an (ECMAScript) `import` statement. `string2obj()`, defined in the helpers module, is imported into the module's namespace.
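The control flow of `processOrder()` can be sketched outside the database by stubbing the `session` object that MLE's SQL driver normally provides (the stub, the placeholder SQL text and the inlined `string2obj` below are simplifications for illustration only):

```javascript
// Conceptual, database-free sketch of processOrder(): "session" is a stub
// standing in for the MLE SQL driver, and string2obj is inlined here
// instead of being imported via the 'helpers' mapping.
function string2obj(inputString) {
    const myObject = {};
    for (const pair of inputString.split(";")) {
        const [key, value] = pair.split("=");
        myObject[key] = value;
    }
    return myObject;
}

// Stub: pretend every insert affects exactly one row.
const session = {
    execute(sql, binds) {
        return { rowsAffected: 1 };
    }
};

function processOrder(orderData) {
    const orderDataJSON = string2obj(orderData);
    const result = session.execute(
        "insert into orders ...",   // placeholder for the real statement
        [JSON.stringify(orderDataJSON)]
    );
    return result.rowsAffected === 1;
}

console.log(processOrder("order_id=1;customer=JONES"));
// true
```

Inside the database the `session` object is supplied automatically by MLE; only the import-name mapping (`helpers` to `helper_module_inline`) needs to be declared in an environment, as shown in the next step.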
-3. Create an environment
+3. Create and edit an environment
The following snippet creates an environment mapping the import name `helpers` as seen in the `business_logic` module to `helper_module_inline`
@@ -291,6 +339,8 @@ Database Actions is a web-based interface that uses Oracle REST Data Services (O
![Database Actions MLE Environment editor](images/sdw-mle-env-editor.jpg)
+ You can see that the list of imported modules on the right-hand side of the wizard displays an import name `helpers`, mapping to `helper_module_inline`. Add the `business_logic` module to the list of imported modules by selecting it in the list of available modules, followed by a click on the `>` symbol. Finally click on the apply button to persist the change.
+
The environment will play a crucial role when exposing JavaScript code to SQL and PL/SQL, a topic that will be covered in the next lab (Lab 3).
Database Actions provides a handy way of viewing code dependencies based on a given combination of module/environment. `BUSINESS_LOGIC` is the only module importing functionality provided by another module, and serves as an example.
@@ -456,4 +506,4 @@ You may now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 28-NOV-2023
diff --git a/23cfree/js-generic-post-execution-debugging/images/dynamic-debugging-output.jpg b/23cfree/js-generic-post-execution-debugging/images/dynamic-debugging-output.jpg
deleted file mode 100644
index 83c475bf7..000000000
Binary files a/23cfree/js-generic-post-execution-debugging/images/dynamic-debugging-output.jpg and /dev/null differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec-wizard.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec-wizard.jpg
index 4070c233c..3112797bf 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec-wizard.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec-wizard.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec.jpg
index 2bf764cbf..c65076a97 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-create-debug-spec.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard.jpg
index ea11814d9..69cb2c42a 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard2.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard2.jpg
index 7c39a901c..df4ae0887 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard2.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env-wizard2.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env.jpg
index 5436651ce..74a9b006a 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-create-mle-env.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/images/sdw-debug-info.jpg b/23cfree/js-generic-post-execution-debugging/images/sdw-debug-info.jpg
index 958aec91e..2a336195b 100644
Binary files a/23cfree/js-generic-post-execution-debugging/images/sdw-debug-info.jpg and b/23cfree/js-generic-post-execution-debugging/images/sdw-debug-info.jpg differ
diff --git a/23cfree/js-generic-post-execution-debugging/post-execution-debugging.md b/23cfree/js-generic-post-execution-debugging/post-execution-debugging.md
index 9f716c118..c08ea47fb 100644
--- a/23cfree/js-generic-post-execution-debugging/post-execution-debugging.md
+++ b/23cfree/js-generic-post-execution-debugging/post-execution-debugging.md
@@ -4,14 +4,12 @@
Oracle's JavaScript Engine allows developers to debug their code by conveniently and efficiently collecting runtime state during program execution. After the code has finished executing, the collected data can be used to analyze program behavior and discover and fix bugs. This form of debugging is known as _post-execution debugging_.
-The post-execution debug option enables developers to instrument their code by specifying debugpoints in the code. Debugpoints allow you to log program state conditionally or unconditionally, including values of individual variables as well as execution snapshots. Debugpoints are specified as JSON documents separate from the application code. No change to the application is necessary for debug points to fire.
+The post-execution debug option enables developers to instrument their code by specifying so-called debug points in the code. Debug points allow you to log program state conditionally or unconditionally, including values of individual variables as well as execution snapshots. Debug points are specified as JSON documents separate from the application code. No change to the application is necessary for debug points to fire.
When activated, debug information is collected according to the debug specification and can be fetched for later analysis by a wide range of tools, thanks to its standard format: Java Profiler Heap Dump version 1.0.1, as defined in JDK 6.
Estimated Lab Time: 15 minutes
-[](videohub:1_hag5m05i)
-
### Objectives
In this lab, you will:
@@ -26,7 +24,7 @@ In this lab, you will:
This lab assumes you have:
-- An Oracle Database 23c Free - Developer Release environment available to use
+- An Oracle Database 23c Free environment available to use
- Created the `emily` account as per Lab 1
- Completed Lab 2 where you created a number of JavaScript modules in the database
@@ -57,60 +55,64 @@ Actions include printing the value of a single variable (`watch` point) or takin
```
SQL> select line, text from user_source where name = 'BUSINESS_LOGIC';
- LINE TEXT
- ---- ----------------------------------------------------------------------------
- 1 import { string2obj } from 'helpers';
- 2
- 3 export function processOrder(orderData) {
- 4
- 5 const orderDataJSON = string2obj(orderData);
- 6 const result = session.execute(`
- 7 insert into orders (
- 8 order_id,
- 9 order_date,
- 10 order_mode,
- 11 customer_id,
- 12 order_status,
- 13 order_total,
- 14 sales_rep_id,
- 15 promotion_id
- 16 )
- 17 select
- 18 jt.*
- 19 from
- 20 json_table(:orderDataJSON, '$' columns
- 21 order_id path '$.order_id',
- 22 order_date timestamp path '$.order_date',
- 23 order_mode path '$.order_mode',
- 24 customer_id path '$.customer_id',
- 25 order_status path '$.order_status',
- 26 order_total path '$.order_total',
- 27 sales_rep_id path '$.sales_rep_id',
- 28 promotion_id path '$.promotion_id'
- 29 ) jt`,
- 30 {
- 31 orderDataJSON: {
- 32 val: orderDataJSON,
- 33 type: oracledb.DB_TYPE_JSON
- 34 }
- 35 }
- 36 );
- 37
- 38 if ( result.rowsAffected === 1 ) {
- 39 return true;
- 40 } else {
- 41 return false;
- 42 }
- 43 }
-
- 43 rows selected.
+ LINE TEXT
+ ----- ------------------------------------------------------------------------------------------
+ 1 import { string2obj } from 'helpers';
+ 2 /**
+ 3 * A simple function accepting a set of key-value pairs, translates it to JSON bef
+ 4 * inserting the order in the database.
+ 5 * @param {string} orderData a semi-colon separated string containing the order de
+ 6 * @returns {boolean} true if the order could be processed successfully, false oth
+ 7 */
+ 8 export function processOrder(orderData) {
+ 9
+ 10 const orderDataJSON = string2obj(orderData);
+ 11 const result = session.execute(`
+ 12 insert into orders (
+ 13 order_id,
+ 14 order_date,
+ 15 order_mode,
+ 16 customer_id,
+ 17 order_status,
+ 18 order_total,
+ 19 sales_rep_id,
+ 20 promotion_id
+ 21 )
+ 22 select
+ 23 jt.*
+ 24 from
+ 25 json_table(:orderDataJSON, '$' columns
+ 26 order_id path '$.order_id',
+ 27 order_date timestamp path '$.order_date',
+ 28 order_mode path '$.order_mode',
+ 29 customer_id path '$.customer_id',
+ 30 order_status path '$.order_status',
+ 31 order_total path '$.order_total',
+ 32 sales_rep_id path '$.sales_rep_id',
+ 33 promotion_id path '$.promotion_id'
+ 34 ) jt`,
+ 35 {
+ 36 orderDataJSON: {
+ 37 val: orderDataJSON,
+ 38 type: oracledb.DB_TYPE_JSON
+ 39 }
+ 40 }
+ 41 );
+ 42 if ( result.rowsAffected === 1 ) {
+ 43 return true;
+ 44 } else {
+ 45 return false;
+ 46 }
+ 47 }
+
+ 47 rows selected.
```
2. Define the debug specification
In this step you create a debug specification with the following contents:
- - A watchpoint to print the contents of `orderDataJSON` in line 6
- - A snapshot of the entire stack in line 38
+ - A watchpoint to print the contents of `orderDataJSON` in line 11
+ - A snapshot of the entire stack in line 42
The debug specification consists primarily of an array of JavaScript objects defining which action to take at a given code location.
@@ -121,7 +123,7 @@ Actions include printing the value of a single variable (`watch` point) or takin
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 6
+ "line": 11
},
"actions": [
{
@@ -133,7 +135,7 @@ Actions include printing the value of a single variable (`watch` point) or takin
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 38
+ "line": 42
},
"actions": [
{
@@ -170,7 +172,7 @@ begin
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 6
+ "line": 11
},
"actions": [
{
@@ -182,7 +184,7 @@ begin
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 38
+ "line": 42
},
"actions": [
{
@@ -232,13 +234,13 @@ When executing the above code snippet the following information is printed on sc
{
"at": {
"name": "EMILY.BUSINESS_LOGIC",
- "line": 6
+ "line": 11
},
"values": {
"orderDataJSON": {
"customer_id": "1",
"order_date": "2023-04-24T10:27:52",
- "order_id": "1",
+ "order_id": "10",
"order_mode": "theMode",
"order_status": "2",
"order_total": "42",
@@ -252,18 +254,18 @@ When executing the above code snippet the following information is printed on sc
{
"at": {
"name": "EMILY.BUSINESS_LOGIC",
- "line": 38
+ "line": 42
},
"values": {
"result": {
"rowsAffected": 1
},
"this": {},
- "orderData": "order_id=10;order_date=2023-04-24T10:27:52;order_mode=theMode;customer_id=1;order_status=2;order_total=42;sales_rep_id=1;promotion_id=1",
+      "orderData": "order_id=10;order_date=2023-04-24T10:27:52;order_mode=theMode;customer_id=1;order_status=2;order_total=42;sales_rep_id=1;promotion_id=1",
"orderDataJSON": {
"customer_id": "1",
"order_date": "2023-04-24T10:27:52",
- "order_id": "1",
+ "order_id": "10",
"order_mode": "theMode",
"order_status": "2",
"order_total": "42",
@@ -288,11 +290,11 @@ You can see that both probes fired:
## Task 5: Use Database Actions to perform post-execution debugging
-Database Actions supports debugging with a nice, graphical user interface. Start by logging into Database Actions using the EMILY account. Once logged in, navigate to "MLE JS". Rather than using the Editor panel, this time you need to switch to Snippets.
+Database Actions supports debugging with a nice, graphical user interface. Start by logging into Database Actions using the EMILY account. Once logged in, navigate to "MLE JS". Rather than using the Editor panel, this time you need to switch to **Snippets**.
1. Create a JavaScript environment
- On the left-hand side of the screen select "Environments" from the drop down list. Next, click on the "..." icon and select "Create Environment" to open the "Create MLE Environment" Wizard.
+ On the left-hand side of the screen select "Environments" from the drop down list. Next, click on the "..." icon and select "Create Object" to open the "Create MLE Environment" Wizard.
![Prepare to create a new MLE Environment](images/sdw-create-mle-env.jpg)
@@ -343,7 +345,7 @@ Database Actions supports debugging with a nice, graphical user interface. Start
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 6
+ "line": 11
},
"actions": [
{
@@ -355,7 +357,7 @@ Database Actions supports debugging with a nice, graphical user interface. Start
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 38
+ "line": 42
},
"actions": [
{
@@ -377,9 +379,12 @@ Database Actions supports debugging with a nice, graphical user interface. Start
4. Run the code with debugging enabled
- Back in the Snippets editor click on the "Debug Snippet" button highlighted in red in the following screenshot to run the JavaScript snippet with debugging enabled. Focus will automatically switch to the Debug Console where you can see the results of the debug run:
+ Back in the Snippets editor make sure the newly created `business_logic_env` is selected. You may have to hit the circle icon first in case the
+ environment doesn't appear in the drop-down list.
+
+ Click on the "Debug Snippet" button highlighted in red in the following screenshot to run the JavaScript snippet with debugging enabled. Focus will automatically switch to the Debug Console where you can see the results of the debug run:
- - The watchpoint fired in line 6 showing the value of `orderDataJSON`
+ - The watchpoint fired in line 11 showing the value of `orderDataJSON`
- The second watchpoint fired as well, showing that exactly 1 row was affected by the insert statement
Clicking on little triangles expands the information provided, you can even click on the variable to see where in the code it is located.
@@ -417,7 +422,7 @@ In an ideal world post-execution debugging should be simple to enable without ha
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 6
+ "line": 11
},
"actions": [
{
@@ -429,7 +434,7 @@ In an ideal world post-execution debugging should be simple to enable without ha
{
"at": {
"name": "BUSINESS_LOGIC",
- "line": 38
+ "line": 42
},
"actions": [
{
@@ -597,7 +602,7 @@ In an ideal world post-execution debugging should be simple to enable without ha
declare
l_order_as_string varchar2(512);
begin
- l_order_as_string := 'order_id=13;order_date=2023-04-24T10:27:52;order_mode=theMode;customer_id=1;order_status=2;order_total=42;sales_rep_id=1;promotion_id=1';
+ l_order_as_string := 'order_id=20;order_date=2023-04-24T10:27:52;order_mode=theMode;customer_id=1;order_status=2;order_total=42;sales_rep_id=1;promotion_id=1';
business_logic_pkg.process_order(l_order_as_string, null);
exception
when others then
@@ -651,9 +656,7 @@ In an ideal world post-execution debugging should be simple to enable without ha
```sql
select
- json_serialize(md.debug_spec pretty) debug_spec,
- json_serialize(dbms_mle.parse_debug_output(debug_info) pretty) debug_info,
- (r.run_end - r.run_start) duration
+ json_serialize(dbms_mle.parse_debug_output(debug_info) pretty) debug_info
from
debug_metadata md
join debug_runs r
@@ -665,7 +668,66 @@ In an ideal world post-execution debugging should be simple to enable without ha
The query produces the following output:
- ![Output gathered by dynamic debugging](images/dynamic-debugging-output.jpg)
+ ```
+ DEBUG_INFO
+ ----------------------------------------------------------------------------------------
+ [
+ [
+ {
+ "at" :
+ {
+ "name" : "EMILY.BUSINESS_LOGIC",
+ "line" : 11
+ },
+ "values" :
+ {
+ "orderDataJSON" :
+ {
+ "customer_id" : "1",
+ "order_date" : "2023-04-24T10:27:52",
+ "order_id" : "21",
+ "order_mode" : "theMode",
+ "order_status" : "2",
+ "order_total" : "42",
+ "promotion_id" : "1",
+ "sales_rep_id" : "1"
+ }
+ }
+ }
+ ],
+ [
+ {
+ "at" :
+ {
+ "name" : "EMILY.BUSINESS_LOGIC",
+ "line" : 42
+ },
+ "values" :
+ {
+ "result" :
+ {
+ "rowsAffected" : 1
+ },
+ "this" :
+ {
+ },
+      "orderData" : "order_id=21;order_date=2023-04-24T10:27:52;order_mode=theMode;customer_id=1;order_status=2;order_total=42;sales_rep_id=1;promotion_id=1",
+ "orderDataJSON" :
+ {
+ "customer_id" : "1",
+ "order_date" : "2023-04-24T10:27:52",
+ "order_id" : "21",
+ "order_mode" : "theMode",
+ "order_status" : "2",
+ "order_total" : "42",
+ "promotion_id" : "1",
+ "sales_rep_id" : "1"
+ }
+ }
+ }
+ ]
+ ]
+ ```
Rather than displaying the JSON output on screen you can import it into any tool supporting its format and analyse it offline.
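
    For example, a short Node.js sketch (with the debug output pasted in as a literal for illustration; in practice you would read the JSON you exported) can summarize which probes fired and which variables each captured:

    ```javascript
    // The parsed debug output is an array of probe-arrays; flatten it and
    // report each probe's location plus the variable names it captured.
    // The literal below mirrors the output shown above (abridged).
    const debugOutput = [
        [{ at: { name: 'EMILY.BUSINESS_LOGIC', line: 11 },
           values: { orderDataJSON: { order_id: '21' } } }],
        [{ at: { name: 'EMILY.BUSINESS_LOGIC', line: 42 },
           values: { result: { rowsAffected: 1 } } }]
    ];

    const summary = debugOutput.flat().map(probe =>
        `${probe.at.name}:${probe.at.line} -> ${Object.keys(probe.values).join(', ')}`
    );
    ```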
@@ -679,4 +741,4 @@ You many now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 28-NOV-2023
diff --git a/23cfree/js-generic-sql-driver/sql-driver.md b/23cfree/js-generic-sql-driver/sql-driver.md
index 0d589bc64..bd1ceabbf 100644
--- a/23cfree/js-generic-sql-driver/sql-driver.md
+++ b/23cfree/js-generic-sql-driver/sql-driver.md
@@ -6,8 +6,6 @@ All previous labs have carefully avoided accessing the data layer to ease the tr
Estimated Lab Time: 10 minutes
-[](videohub:1_2si62dv6)
-
### Objectives
In this lab, you will:
@@ -20,7 +18,7 @@ In this lab, you will:
This lab assumes you have:
-- An Oracle Database 23c Free - Developer Release environment available to use
+- An Oracle Database 23c Free environment available to use
- Created the `emily` account as per Lab 1
## Task 1: Get familiar with the SQL Driver
@@ -76,7 +74,7 @@ By completing this task, you will learn more about selecting information from th
```
- > **Note**: Unlike `node-oracledb` the default `outFormat` for the MLE JavaScript SQL Driver in 23c Free-Developer Release is `oracledb.OUT_FORMAT_OBJECT`.
+ > **Note**: Unlike `node-oracledb` the default `outFormat` for the MLE JavaScript SQL Driver in 23c Free is `oracledb.OUT_FORMAT_OBJECT`.
3. Query the database using global constants
@@ -512,4 +510,4 @@ You many now proceed to the next lab.
- **Author** - Martin Bach, Senior Principal Product Manager, ST & Database Development
- **Contributors** - Lucas Braun, Sarah Hirschfeld
-- **Last Updated By/Date** - Martin Bach 09-MAY-2023
+- **Last Updated By/Date** - Martin Bach 28-NOV-2023
diff --git a/23cfree/json-collections/images/collection-name.png b/23cfree/json-collections/images/collection-name.png
index a1dde909b..eea6027b6 100644
Binary files a/23cfree/json-collections/images/collection-name.png and b/23cfree/json-collections/images/collection-name.png differ
diff --git a/23cfree/json-collections/json-collections.md b/23cfree/json-collections/json-collections.md
index 46e50fed2..f9d4b255a 100644
--- a/23cfree/json-collections/json-collections.md
+++ b/23cfree/json-collections/json-collections.md
@@ -61,7 +61,8 @@ In this lab, you will:
![JSON Create Collection](./images/json-create-collection.png)
-7. In the field **Collection Name**, provide the name **movies**. MAKE SURE you check the **MongoDB Compatible** box then click **Create**.
+7. In the field **Collection Name**, provide the name **movies**. Then click **Create**.
+
   Note that the collection name is case-sensitive. You must enter movies in all lower-case; don't use MOVIES or Movies.
![New Collection: movies](./images/collection-name.png)
diff --git a/23cfree/json-mongo/json-mongo.md b/23cfree/json-mongo/json-mongo.md
index ba7e96185..6768511ed 100644
--- a/23cfree/json-mongo/json-mongo.md
+++ b/23cfree/json-mongo/json-mongo.md
@@ -34,7 +34,8 @@ This lab has you download software from the YUM repo at repo.mongodb.org. This s
Run the following commands to download and install Mongo Shell and Mongo Database Tools.
```
- $ echo "18.67.17.0 repo.mongodb.org" | sudo tee -a /etc/hosts
+ $ echo '65.8.161.52 downloads.mongodb.com' | sudo tee -a /etc/hosts
+ $ echo '18.65.185.55 repo.mongodb.org' | sudo tee -a /etc/hosts
$ sudo dnf install -y https://repo.mongodb.org/yum/redhat/8/mongodb-org/6.0/x86_64/RPMS/mongodb-mongosh-1.8.0.x86_64.rpm
$ sudo dnf install -y https://repo.mongodb.org/yum/redhat/8/mongodb-org/6.0/x86_64/RPMS/mongodb-database-tools-100.7.0.x86_64.rpm
```
diff --git a/23cfree/json-search/images/billion-gross.png b/23cfree/json-search/images/billion-gross.png
new file mode 100644
index 000000000..5e20cbd41
Binary files /dev/null and b/23cfree/json-search/images/billion-gross.png differ
diff --git a/23cfree/json-search/images/contains-query.png b/23cfree/json-search/images/contains-query.png
new file mode 100644
index 000000000..78b3a82fd
Binary files /dev/null and b/23cfree/json-search/images/contains-query.png differ
diff --git a/23cfree/json-search/images/de-niro-crew.png b/23cfree/json-search/images/de-niro-crew.png
new file mode 100644
index 000000000..c23a5feca
Binary files /dev/null and b/23cfree/json-search/images/de-niro-crew.png differ
diff --git a/23cfree/json-search/images/examine-json.png b/23cfree/json-search/images/examine-json.png
new file mode 100644
index 000000000..892f60189
Binary files /dev/null and b/23cfree/json-search/images/examine-json.png differ
diff --git a/23cfree/json-search/images/explain-plan-1.png b/23cfree/json-search/images/explain-plan-1.png
new file mode 100644
index 000000000..f51b1083d
Binary files /dev/null and b/23cfree/json-search/images/explain-plan-1.png differ
diff --git a/23cfree/json-search/images/explain-plan-2.png b/23cfree/json-search/images/explain-plan-2.png
new file mode 100644
index 000000000..b764e1482
Binary files /dev/null and b/23cfree/json-search/images/explain-plan-2.png differ
diff --git a/23cfree/json-search/images/explain-plan-advanced.png b/23cfree/json-search/images/explain-plan-advanced.png
new file mode 100644
index 000000000..4662d67d4
Binary files /dev/null and b/23cfree/json-search/images/explain-plan-advanced.png differ
diff --git a/23cfree/json-search/images/explain-with-index.png b/23cfree/json-search/images/explain-with-index.png
new file mode 100644
index 000000000..9159cfa1f
Binary files /dev/null and b/23cfree/json-search/images/explain-with-index.png differ
diff --git a/23cfree/json-search/images/fuzzy-match.png b/23cfree/json-search/images/fuzzy-match.png
new file mode 100644
index 000000000..6f61df0ea
Binary files /dev/null and b/23cfree/json-search/images/fuzzy-match.png differ
diff --git a/23cfree/json-search/images/search-index-creation.png b/23cfree/json-search/images/search-index-creation.png
new file mode 100644
index 000000000..a3c871835
Binary files /dev/null and b/23cfree/json-search/images/search-index-creation.png differ
diff --git a/23cfree/json-search/json-search.md b/23cfree/json-search/json-search.md
new file mode 100644
index 000000000..e1f11e053
--- /dev/null
+++ b/23cfree/json-search/json-search.md
@@ -0,0 +1,160 @@
+# Work with JSON Search Indexes
+
+## Introduction
+
+JSON Search indexes allow you to index all of the content in a JSON collection without knowing the schema of the JSON in advance. They also allow you to run full-text, or keyword, searches over textual values. In this lab, we'll create a search index on our Movies collection and show how that affects the query plan for JSON queries. We'll also do a variety of full-text searches using the CONTAINS and JSON_TEXTCONTAINS operators.
+
+Estimated Time: 15 minutes
+
+### Objectives
+
+In this lab, you will:
+
+- Create a search index
+- See that numeric searches are using the search index to speed retrieval
+- Perform full-text searches and explore the powerful full-text search capabilities
+
+### Prerequisites
+
+- Oracle Database 23c Free
+- All previous labs successfully completed
+
+
+## Task 1: Run a query without an index
+
+This lab expects you to be in SQL Developer Web (Database Tools -> SQL) where you finished the last lab. If necessary, reopen a browser page and follow the instructions at the start of the previous lab.
+
+1. Run a query to find all movies which grossed over a billion dollars.
+
+ Enter the following SQL in the SQL Worksheet. The query fetches titles and gross takings (as a number) for all movies which grossed over 1 billion, ordered by gross takings:
+
+ ```
+
+ select m.data.title, m.data.gross.number() from movies m
+ where m.data.gross.number() > 1000000000
+ order by m.data.gross.number() desc
+
+ ```
+
+    Click the "Run" button and examine the results.
+
+ ![Query for movies over one billion in gross takings](images/billion-gross.png " ")
+
+2. Examine the query plan
+
+ Above the worksheet, click on the "Explain Plan" button.
+
+ This will show the plan for the query in graphical format.
+
+ ![Explain plan in graphical format](images/explain-plan-1.png)
+
+ If you then click on the similar icon above the diagram (labelled "Advanced View"), the diagram will toggle to a table representing the plan.
+
+ ![Toggle for advanced view](images/explain-plan-advanced.png)
+
+ We can see that the query used a JSONTABLE evaluation.
+
+ ![Explain plan in shows jsontable](images/explain-plan-2.png)
+
+
+## Task 2: Create a Search Index over the JSON and show the query uses it
+
+In this task we'll create a Search Index. Search Indexes are created over the whole JSON column, and do not need to know the schema, or layout, of the JSON. A Search Index is created much like a regular ("BTREE") index - we give it an index name and the name of the table and JSON column to be indexed. The only difference is that we say CREATE SEARCH INDEX instead of CREATE INDEX, and add the suffix "FOR JSON" at the end (other flavors of Search Index can index text or XML).
+
+1. Copy the following SQL into the worksheet and click the "Run" button.
+
+ ```
+
+ create search index m_search_index on movies(data) for json;
+
+ ```
+
+ After a few seconds, you'll receive a notification that your index has been created.
+
+ ![Search index creation](images/search-index-creation.png)
+
+2. Now repeat the query we ran before looking for movies that grossed over 1 billion.
+
+ ```
+
+ select m.data.title, m.data.gross.number() from movies m
+ where m.data.gross.number() > 1000000000
+ order by m.data.gross.number() desc
+
+ ```
+
+    The results, of course, will be the same. But what happens when we look at the query plan? Click the explain plan button (and toggle the advanced view if you see the diagram output).
+
+ ![Query plan with index](images/explain-with-index.png)
+
+ We can now see that the query is using the index we created (M_SEARCH_INDEX) which means for searches on a large table, the query should run much quicker.
+
+## Task 3: Perform basic full-text searches
+
+A JSON Search index is, at its heart, an Oracle Text index. That means we can use the Oracle Text CONTAINS operator against it. Unlike most SQL operators, CONTAINS can _only_ be used when there is a suitable index present. CONTAINS searches for words within text. So let's do a search for the phrase "de niro" somewhere in the JSON data. The CONTAINS operator takes the column to search (DATA) and a query string. It returns 0 if there are no matches, and greater than zero for a match:
+
+1. Copy and run the following query:
+
+ ```
+
+ select json_serialize(data returning clob) from movies
+ where contains (data, 'de niro') > 0;
+
+ ```
+
+ Notice that we don't need to match case for "De Niro" - the CONTAINS operator is case-insensitive by default (it's actually the index which is case-insensitive, but we won't go into that for now).
+
+ ![CONTAINS query](images/contains-query.png)
+
+2. Examine the results
+
+ If you look at the JSON for a few results, you may find "Robert De Niro" listed as part of the cast (an actor) or part of the crew as producer or director. Although we only searched for "De Niro", we find fields containing "Robert De Niro" because the CONTAINS operator is doing a word search, rather than a full-field search. It's not a substring search either, as you'll see if you search for 'iro'.
+
+ ![examine JSON output](images/examine-json.png)
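
    The word-search behaviour can be approximated outside the database. The sketch below is only an analogy for how token matching differs from substring matching; it ignores phrase ordering and is not how Oracle Text is actually implemented:

    ```javascript
    // Analogy only: a word search matches whole tokens case-insensitively,
    // so 'de niro' matches "Robert De Niro" but 'iro' does not.
    function wordSearch(fieldValue, query) {
        const tokens = fieldValue.toLowerCase().split(/\s+/);
        return query.toLowerCase().split(/\s+/)
                    .every(word => tokens.includes(word));
    }

    const wholeWords = wordSearch('Robert De Niro', 'de niro'); // matches
    const substring  = wordSearch('Robert De Niro', 'iro');     // no match
    ```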
+
+## Task 4: Query specific JSON fields
+
+Using CONTAINS, we were able to search across the whole JSON column, and find occurrences of words anywhere. But what if we want to do a word search in specific JSON fields? In that case, we can use a JSON-specific variant of the CONTAINS operator, __JSON_TextContains__.
+
+JSON_TextContains takes three arguments. Like CONTAINS, the first argument is the JSON column to search (DATA in our case). The second argument is a JSON PATH value, telling us which JSON field to search in. It's not a full JSON PATH - it can't contain conditional values, for example. But generally it does adhere to JSON PATH syntax. The third argument is the query string, similar to the second argument to CONTAINS.
+
+Note: Unlike CONTAINS, JSON_TextContains does not return a value - it is effectively a logic operator and only returns rows that match its arguments. So we don't need "> 0" after it.
+
+1. Copy and run the following query to search for 'De Niro' in the _crew_ field of the JSON. Crew is a top-level JSON field, so can be specified with the JSON path '$.crew'. For ease of reading the results, we'll fetch the title field and the crew array separately in the output.
+
+ ```
+
+ select m.data.title, json_serialize(m.data.crew) from movies m
+    where JSON_TextContains (data, '$.crew', 'de niro');
+ ```
+
+    This time we get a much shorter list of results: Robert De Niro is clearly found more often in the cast than in the crew.
+
+    ![De Niro in the crew](images/de-niro-crew.png)
+
+2. Exact word search is very useful, but sometimes you may not know how to spell a word. Or perhaps you do, but the person who wrote the text didn't. In that case, JSON_TextContains has a feature called _fuzzy search_ which will find words similar to the ones you're looking for.
+
+    Let's assume you're looking for the actors Shia LaBeouf and Stellan Skarsgård, but unsurprisingly we aren't sure how to spell either (we may not even know how to enter the accented character on our keyboard). We'll have a go at it by using 'shiya lebof' and 'stellen skarsguard'.
+
+ ```
+
+ select m.data.title, json_serialize(m.data.cast) from movies m
+ where JSON_TextContains (data, '$.cast', 'fuzzy((shiya lebof)) AND fuzzy((stellen skarsguard))');
+
+ ```
+
+    Did we mention you can use an AND between words and phrases? You can use _AND_, _OR_ or _NOT_ within a CONTAINS or JSON_TextContains operator to do boolean searches. So here we're looking for fuzzy matches of the two names (the _fuzzy_ operator requires two sets of parentheses around phrases - but only one if a single word is used).
+
+ ![fuzzy match results](images/fuzzy-match.png)
+
+    Looking at the results, there is only one movie where Stellan Skarsgård and Shia LaBeouf are both in the cast.
+
+## Learn More
+
+* [How to Store, Query and Create JSON Documents in Oracle Database](https://blogs.oracle.com/sql/post/how-to-store-query-and-create-json-documents-in-oracle-database)
+
+## Acknowledgements
+
+* **Author** - Roger Ford, Hermann Baer
+* **Contributors** - David Start, Ranjan Priyadarshi
+* **Last Updated By/Date** - Roger Ford, Database Product Manager, November 2023
\ No newline at end of file
diff --git a/23cfree/sql-23c-features/sql-23c-features.md b/23cfree/sql-23c-features/sql-23c-features.md
index f6fa1818b..f660f0602 100644
--- a/23cfree/sql-23c-features/sql-23c-features.md
+++ b/23cfree/sql-23c-features/sql-23c-features.md
@@ -32,9 +32,8 @@ This lab assumes you have:
## Task 1: Start SQL*Plus
To dive into these features, we'll be using SQL*Plus - an interactive and batch query tool that is installed with every Oracle Database installation. It has a command-line user interface.
-1. Your terminal should still be open since ORDS needs to be running. If the terminal has been closed, please return to the previous lab to restart ORDS. From the same terminal, enter this line:
+1. From the terminal, enter this line:
-
```
sqlplus hol23c/[your_password_here]@localhost:1521/freepdb1
```
@@ -46,7 +45,6 @@ To dive into these features, we'll be using SQL*Plus - an interactive and batch
```
-
```
sqlplus hol23c/[your_password_here]@localhost:1521/freepdb1
```
@@ -57,7 +55,6 @@ To dive into these features, we'll be using SQL*Plus - an interactive and batch
```
-
## Task 2: FROM clause - now optional
An interesting feature introduced in Oracle Database 23c is the optional FROM clause in SELECT statements. Up to this version, the FROM clause was mandatory.
@@ -98,14 +95,12 @@ Oracle Database 23c introduces the new BOOLEAN datatype. This leverages the use
2. Let's fill our new table with data. The column `IS_SLEEPING` will be `NOT NULL`, with `FALSE` as the default.
```
-
- ALTER TABLE TEST_BOOLEAN modify (IS_SLEEPING boolean NOT NULL);
+ ALTER TABLE TEST_BOOLEAN modify (IS_SLEEPING boolean NOT NULL);
Table altered.
```
```
-
- ALTER TABLE TEST_BOOLEAN modify (IS_SLEEPING default FALSE);
+ ALTER TABLE TEST_BOOLEAN modify (IS_SLEEPING default FALSE);
Table altered.
```
@@ -138,9 +133,10 @@ Oracle Database 23c introduces the new BOOLEAN datatype. This leverages the use
```
set linesize window
SELECT * FROM test_boolean;
-
+ ```
+ ```
NAME IS_SLEEPING
- ---------------------------------------------------------------------------------------------------- -----------
+ ------------------------------------------------------------------------------------------------ -----------
Mick FALSE
Keith FALSE
Ron TRUE
@@ -168,7 +164,7 @@ Oracle Database 23c introduces the new BOOLEAN datatype. This leverages the use
```
3. Similarly, we can use this feature to create tables, if they do not already exist. Let's go ahead and create that DEPT table.
- >NOTE: Any trailing numbers when pasting these into the terminal will not effect the command.select
+ >NOTE: Any trailing numbers when pasting these into the terminal will not affect the command.
```
@@ -343,7 +339,12 @@ This clause has been implemented long ago as a part of `EXECUTE IMMEDIATE` state
## Task 9: Joins in UPDATE and DELETE
You may update table data via joins, based on conditions in another table. There is no need for subselects or an `IN` clause.
-1. For example, instead of using this statement prior to 23c:
+1. Let's take a look at the employee salary information from the research department.
+ ```
+ select e.sal, e.empno from emp e, dept d where e.deptno=d.deptno and d.dname='RESEARCH';
+ ```
+
+2. Now to update the salary information, prior to 23c we would need to use a nested statement:
```
UPDATE emp e set e.sal=e.sal*2
WHERE e.deptno in
@@ -360,6 +361,11 @@ You may update table data via joins - based on foreign table conditions. There i
5 rows updated.
```
+3. You can see the salary has been successfully updated.
+ ```
+ select e.sal, e.empno from emp e, dept d where e.deptno=d.deptno and d.dname='RESEARCH';
+ ```
+
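The before/after difference in this task can be reproduced in miniature with Python's bundled SQLite, whose `UPDATE ... FROM` syntax (SQLite 3.33+) plays the same role as the 23c join update — a hedged sketch in which the table and column names simply mirror the lab's EMP/DEPT schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dept (deptno INTEGER, dname TEXT);
    CREATE TABLE emp  (empno INTEGER, sal REAL, deptno INTEGER);
    INSERT INTO dept VALUES (20, 'RESEARCH'), (30, 'SALES');
    INSERT INTO emp  VALUES (7369, 800, 20), (7499, 1600, 30);
""")

# Join-style update: double salaries in RESEARCH only, no IN-subquery needed.
cur.execute("""
    UPDATE emp SET sal = sal * 2
    FROM dept d
    WHERE emp.deptno = d.deptno AND d.dname = 'RESEARCH'
""")

salaries = dict(cur.execute("SELECT empno, sal FROM emp"))
print(salaries)  # {7369: 1600.0, 7499: 1600.0}
conn.close()
```

Only the RESEARCH employee's salary is doubled; the SALES row is untouched, just as in the lab's 23c statement.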
## Task 10: Annotations, new metadata for database objects
Annotations are optional metadata for database objects. An annotation is either a name-value pair or a name by itself. The name and optional value are freeform text fields. An annotation is represented as a subordinate element of the database object to which it has been added. Supported schema objects include tables, views, materialized views, and indexes. With annotations you may store and retrieve metadata about database objects. You can use them to customize business logic or user interfaces, or to provide metadata to metadata repositories. Annotations can be added with a CREATE or ALTER statement, at the table or column level.
@@ -376,10 +382,25 @@ With annotations you may store and retrieve metadata about a database objects. Y
annotations (display 'employee_table');
```
- Data Dictionary views such as `USER_ANNOTATIONS` and `USER_ANNOTATIONS_USAGE` can help to monitor the usage.
+
+ These will help to format the output.
```
- SELECT * FROM user_annotations_usage;
+ set lines 200;
+ set pages 200;
+ col object_name format a25;
+ col object_type format a15;
+ col annotation_name format a15;
+ col annotation_value format a15;
+ col column_name format a20;
+
+ ```
+
+2. Data Dictionary views such as `USER_ANNOTATIONS` and `USER_ANNOTATIONS_USAGE` can help to monitor the usage.
+ ```
+
+ SELECT object_name, object_type, column_name, annotation_name, annotation_value
+ FROM user_annotations_usage;
```
@@ -400,16 +421,32 @@ SQL Domains allow users to declare the intended usage for columns. They are data
sqlplus / as sysdba
```
-2. Now create the domain `yearbirth` and the table `person`.
+ Set the correct container.
```
-
- CREATE DOMAIN yearbirth as number(4)
+ alter session set container=FREEPDB1;
+ Session altered.
+ ```
+
+2. Grant privileges to our main user `hol23c` to create domains.
+ ```
+ grant db_developer_role to hol23c;
+ Grant succeeded.
+ ```
+ Connect to hol23c. Replace _`Welcome123`_ with the password you created in Lab 1.
+ ```
+ connect hol23c/Welcome123@localhost:1521/freepdb1
+ Connected.
+ ```
+
+3. Now create the domain `yearbirth` and the table `person`.
+ ```
+ CREATE DOMAIN yearbirth as number(4)
constraint check ((trunc(yearbirth) = yearbirth) and (yearbirth >= 1900))
display (case when yearbirth < 2000 then '19-' ELSE '20-' end)||mod(yearbirth, 100)
order (yearbirth -1900)
annotations (title 'yearformat');
-
- Table created.
+
+ Domain created.
```
```
@@ -428,20 +465,20 @@ SQL Domains allow users to declare the intended usage for columns. They are data
```
```
- Name Null? Type
- -------------------------------------------------------------------------- -------- ----------------------------
- ID NUMBER(5)
- NAME VARCHAR2(50)
- SALARY NUMBER
- PERSON_BIRTH NUMBER(4) SYS.YEARBIRTH
+ Name Null? Type
+ ----------------------------------------- -------- ----------------------------
+ ID NUMBER(5)
+ NAME VARCHAR2(50)
+ SALARY NUMBER
+ PERSON_BIRTH NUMBER(4) HOL23C.YEARBIRTH
```
-3. Now let's add data to our table.
+4. Now let's add data to our table.
```
INSERT INTO person values (1,'MARTIN',3000, 1988);
```
-4. With the new function `DOMAIN_DISPLAY` you can display the property.
+5. With the new function `DOMAIN_DISPLAY` you can display the property.
```
SELECT DOMAIN_DISPLAY(person_birth) FROM person;
```
@@ -452,50 +489,44 @@ SQL Domains allow users to declare the intended usage for columns. They are data
19-88
```
-5. Domain usage and Annotations can be monitored with data dictionary views.
- ```
- SELECT * FROM user_annotations_usage;
- ```
-
- ```
- OBJECT_NAME OBJECT_TYP COLUMN_NAME DOMAIN_NAM DOMAIN_OWN ANNOTATION_NAME ANNOTATION_VALUE
- --------------- ---------- --------------- ---------- ---------- -------------------- ----------------
- EMP_ANNOTATED TABLE DISPLAY employee_table
- PERSON TABLE DISPLAY person_table
- EMP_ANNOTATED TABLE EMPNO IDENTITY
- EMP_ANNOTATED TABLE EMPNO DISPLAY person_identity
- EMP_ANNOTATED TABLE EMPNO DETAILS person_info
- EMP_ANNOTATED TABLE SALARY DISPLAY person_salary
- EMP_ANNOTATED TABLE SALARY COL_HIDDEN
- YEARBIRTH DOMAIN TITLE yearformat
- PERSON TABLE PERSON_BIRTH YEARBIRTH SCOTT TITLE yearformat
- ```
-
-6. Let's see what that output looks like using SQLcl, the new Oracle command line tool. Notice instead of sqlplus we use sql.
+6. Domain usage and Annotations can be monitored with data dictionary views. First we'll set some formatting, then view `user_annotations_usage`.
```
- exit;
+ set lines 200;
+ set pages 200;
+ col object_name format a15;
+ col object_type format a12;
+ col annotation_name format a15;
+ col annotation_value format a20;
+ col column_name format a15;
+ col domain_name format a12;
+ col domain_owner format a12;
```
```
-
- sql / as sysdba
-
+ SELECT * FROM user_annotations_usage;
```
```
-
- SELECT * FROM user_annotations_usage;
-
- OBJECT_NAME OBJECT_TYPE COLUMN_NAME DOMAIN_NAME DOMAIN_OWNER ANNOTATION_NAME ANNOTATION_VALUE
- ______________ ______________ _______________ ______________ _______________ __________________ ___________________
- PERSON TABLE DISPLAY person_table
- YEARBIRTH DOMAIN YEARBIRTH TITLE yearformat
- PERSON TABLE PERSON_BIRTH YEARBIRTH SYS TITLE yearformat
-
+ OBJECT_NAME OBJECT_TYPE COLUMN_NAME DOMAIN_NAME DOMAIN_OWNER ANNOTATION_NAME ANNOTATION_VALUE
+ --------------- ------------ --------------- ------------ ------------ --------------- --------------------
+ EMP_ANNOTATED TABLE DISPLAY
+ employee_table
+ PERSON TABLE DISPLAY
+ person_table
+ EMP_ANNOTATED TABLE EMPNO IDENTITY
+ EMP_ANNOTATED TABLE EMPNO DISPLAY
+ person_identity
+ EMP_ANNOTATED TABLE EMPNO DETAILS
+ person_info
+ EMP_ANNOTATED TABLE SALARY DISPLAY
+ person_salary
+ EMP_ANNOTATED TABLE SALARY COL_HIDDEN
+ YEARBIRTH DOMAIN TITLE
+ yearformat
+ PERSON TABLE PERSON_BIRTH YEARBIRTH HOL23C TITLE
+ yearformat
```
-You may now **proceed to the next lab**.
-
## Learn More
* [SQL Language Reference](https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/index.html)
@@ -510,4 +541,4 @@ You may now **proceed to the next lab**.
## Acknowledgements
* **Author** - Ulrike Schwinn, Distinguished Data Management Expert; Hope Fisher, Program Manager
* **Contributors** - Witold Swierzy, Data Management Expert; Stephane Duprat, Technical End Specialist
-* **Last Updated By/Date** - Hope Fisher, Aug 2023
+* **Last Updated By/Date** - Hope Fisher, Oct 2023
\ No newline at end of file
diff --git a/23cfree/sql-domains/sql-domains.md b/23cfree/sql-domains/sql-domains.md
index 96d501a83..56b4f27ba 100644
--- a/23cfree/sql-domains/sql-domains.md
+++ b/23cfree/sql-domains/sql-domains.md
@@ -51,6 +51,13 @@ This lab assumes you have:
You may consult the [documentation](https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/index.html) to get a detailed description of the different parts of SQL domain syntax.
2. To get an idea how to use it, let's create a simple example - an email domain - and use it in a person table.
+ If you closed your terminal, connect as hol23c again. Remember, we're using Welcome123 as the password; change the value to match your own.
+ ```
+
+ sqlplus hol23c/Welcome123@localhost:1521/freepdb1
+
+ ```
+ Now let's create that domain.
 >Note: As a reminder, any trailing numbers when pasting into the terminal will not affect command output.
```
@@ -62,7 +69,7 @@ This lab assumes you have:
```
- We now have a domain called `myemail_domain`. The check constraint `EMAIL\_C` examines if the column stores a valid email, `DISPLAY` specifies how to convert the domain column for display purposes. You may use the SQL function `DOMAIN_DISPLAY` on the given column to display it.
+ We now have a domain called `myemail_domain`. The check constraint `EMAIL_C` examines if the column stores a valid email, `DISPLAY` specifies how to convert the domain column for display purposes. You may use the SQL function `DOMAIN_DISPLAY` on the given column to display it.
3. Now let's use it in the table person.
```
@@ -84,7 +91,7 @@ This lab assumes you have:
We now have a table called `person`. As you can see, you may also use annotations in combination with SQL domains. If you want more information on annotations, you find explanation and examples in [Annotations - The new metadata in 23c](https://blogs.oracle.com/coretec/post/annotations-the-new-metadata-in-23c).
-4. Now let's insert additional rows with valid data.
+4. Now insert additional rows with valid data.
```
@@ -99,11 +106,11 @@ This lab assumes you have:
```
commit;
- Commit complete.
+ Commit complete.
```
-5. Let's try to insert invalid data.
+5. Here's an insert example with invalid data.
```
INSERT INTO person values (1,'Schulte',3000, 'user-schulte%gmx.net');
```
@@ -113,7 +120,7 @@ This lab assumes you have:
ORA-11534: check constraint (SYS.SYS_C008254) due to domain constraint
SYS.EMAIL_C of domain SYS.MYEMAIL_DOMAIN violated
```
- The number of your constraint may vary slightly from the `SYS_C008254` shown here, but the error is the same.
+ The number of your constraint may vary slightly from the `SYS_C008254` shown here, but the error is the same. This is our domain constraint at work.
6. Now let's query the table PERSON.
```
@@ -129,6 +136,7 @@ This lab assumes you have:
1 Schwinn 1000 UserSchwinn@oracle.com
1 King 1000 user-king@aol.com
```
+
## Task 2: Monitor SQL domains
1. There are different possibilities to monitor SQL domains. For example using SQL*Plus `DESCRIBE` already displays columns and associated domain and Null constraint.
@@ -142,14 +150,12 @@ This lab assumes you have:
P_ID NUMBER(5)
P_NAME VARCHAR2(50)
P_SAL NUMBER
- P_EMAIL NOT NULL VARCHAR2(100) SYS.MYEMAIL_DOMAIN
+ P_EMAIL NOT NULL VARCHAR2(100) HOL23C.MYEMAIL_DOMAIN
```
2. As previously mentioned, there are new domain functions you may use in conjunction with the table columns to get more information about the domain properties. `DOMAIN_NAME` for example returns the qualified domain name of the domain that the argument is associated with, `DOMAIN_DISPLAY` returns the domain display expression for the domain that the argument is associated with. More information can be found in the [documentation](https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/index.html).
```
- col p_name format a25;
- ```
- ```
- col DISPLAY format a25;
+ col p_name format a30;
+ col DISPLAY format a25;
```
```
SELECT p_name, domain_display(p_email) "Display" FROM person;
@@ -168,7 +174,11 @@ This lab assumes you have:
Here are some examples:
```
- col owner format a15;
+
+ col owner format a15;
+ col name format a30;
+ set pagesize 100;
+
```
```
SELECT owner, name, data_display FROM user_domains;
@@ -179,8 +189,8 @@ This lab assumes you have:
--------------- ------------------------------
DATA_DISPLAY
------------------------------------------------------
- SYS MYEMAIL_DOMAIN
- SUBSTR(myemail_domain, INSTR(myemail_domain, '@') + 1)
+ HOL23C MYEMAIL_DOMAIN
+ SUBSTR(myemail_domain, instr(myemail_domain, '@') + 1)
```
```
@@ -205,7 +215,9 @@ This lab assumes you have:
4. But what about the "good old" package `DBMS_METADATA` to get the `DDL` command?
Let's try `GET_DDL` and use `SQL_DOMAIN` as an object_type argument.
-
+ ```
+ set long 10000;
+ ```
```
SELECT dbms_metadata.get_ddl('SQL_DOMAIN', 'MYEMAIL_DOMAIN') FROM dual;
```
@@ -224,6 +236,20 @@ This lab assumes you have:
In addition, to make it easier for you to get started, Oracle provides built-in domains you can use directly on table columns - for example, email, ssn, and credit_card. You can find a list of them, with names, allowed values, and descriptions, in the documentation.
+1. First, we'll connect as sysdba.
+ ```
+
+ connect / as sysdba;
+
+ Connected.
+ ```
+ ```
+
+ alter session set container=FREEPDB1;
+
+ Session altered.
+ ```
+
1. Another way to get this information is to query `ALL_DOMAINS` and filter on owner `SYS`. Then you will receive the built-in domains.
```
SELECT name FROM all_domains where owner='SYS';
@@ -257,11 +283,19 @@ In addition, to make it easier for you to start with Oracle provides built-in do
?^_`{|}~-]+(\.[A-Za-z0-9!#$%&*+=?^_`{|}~-]+)*)@(([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-
9]([a-zA-Z0-9-]*[a-zA-Z0-9])?)$')) ENABLE
```
-3. Now let's re-create our table `PERSON`.
+3. Now let's connect as hol23c and re-create our table `PERSON`.
+ ```
+
+ connect hol23c/Welcome123@localhost:1521/freepdb1
+
+ Connected.
+ ```
+
```
DROP TABLE IF EXISTS person;
+ Table dropped.
```
```
@@ -273,9 +307,10 @@ In addition, to make it easier for you to start with Oracle provides built-in do
)
annotations (display 'person_table');
+ Table created.
```
- Keep in mind that we need to adjust the length of the column `P_EMAIL` to **4000** - otherwise you will receive the following error:
+ > Keep in mind that we need to adjust the length of the column `P_EMAIL` to **4000** - otherwise you will receive the following error:
```
CREATE TABLE person
( p_id number(5),
@@ -285,7 +320,6 @@ In addition, to make it easier for you to start with Oracle provides built-in do
)
annotations (display 'person_table');
```
-
```
CREATE TABLE person
*
@@ -304,7 +338,7 @@ In addition, to make it easier for you to start with Oracle provides built-in do
1 row created.
```
-    We'll need to format our entries to avoid error.
+    We'll need to format our entries to avoid errors. Here's an example of such an error:
```
INSERT INTO person values (1,'Walter',1000,'user-walter@t_online.de')
*
@@ -312,7 +346,7 @@ In addition, to make it easier for you to start with Oracle provides built-in do
ORA-11534: check constraint (SCOTT.SYS_C008255) due to domain constraint SYS.SYS_DOMAIN_C002 of domain SYS.EMAIL_D
violated
```
-    The email with the sign '_' is not a valid entry, so we need to change it to '-'.
+    The email containing an underscore ('_') is not a valid entry, so we need to change it to a hyphen ('-').
```
INSERT INTO person values (1,'Walter',1000,'user-walter@t-online.de');
1 row created.
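The behavior of the built-in email domain can be approximated with an ordinary regular expression — a simplified, hedged stand-in for illustration only, not Oracle's actual `EMAIL_D` pattern:

```python
import re

# Simplified email check: hyphens are allowed in the host part, underscores
# are not. This mirrors the lab's observation, not Oracle's exact pattern.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def is_valid_email(addr: str) -> bool:
    return EMAIL_RE.fullmatch(addr) is not None

print(is_valid_email("user-walter@t_online.de"))  # False: '_' in domain part
print(is_valid_email("user-walter@t-online.de"))  # True
```

As in the lab, the address with an underscore in its domain is rejected while the hyphenated form passes.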
@@ -352,17 +386,20 @@ The 23c Oracle database supports not only the JSON datatype but also **JSON sche
}
}' ;
+ Domain created.
```
```
DROP TABLE IF EXISTS person;
+ Table dropped.
```
```
CREATE TABLE IF NOT EXISTS person (id NUMBER,
p_record JSON DOMAIN p_recorddomain);
+ Table created.
```
3. Now we insert valid data.
@@ -380,6 +417,7 @@ The 23c Oracle database supports not only the JSON datatype but also **JSON sche
}
}');
+ 1 row created.
```
4. The next record is not a valid entry.
@@ -401,10 +439,8 @@ The 23c Oracle database supports not only the JSON datatype but also **JSON sche
Automatically a check constraint to validate the schema is created. Query `USER_DOMAIN_CONSTRAINTS` to verify this.
```
- set long 400;
- ```
- ```
- col name format a20;
+ set long 1000;
+ col name format a30;
```
```
SELECT name, generated, constraint_type, search_condition
@@ -443,4 +479,4 @@ You have now **completed this workshop**.
## Acknowledgements
* **Author** - Ulrike Schwinn, Distinguished Data Management Expert; Hope Fisher, Program Manager
-* **Last Updated By/Date** - Hope Fisher, Aug 2023
\ No newline at end of file
+* **Last Updated By/Date** - Hope Fisher, Oct 2023
\ No newline at end of file
diff --git a/23cfree/sql-extended/sql-extended.md b/23cfree/sql-extended/sql-extended.md
index 1454ff8fb..5f4dfe6f3 100644
--- a/23cfree/sql-extended/sql-extended.md
+++ b/23cfree/sql-extended/sql-extended.md
@@ -73,9 +73,9 @@ This lab assumes you have:
INSERT INTO driver_race_map
VALUES(3, 204, 103, 1),
- VALUES(4, 204, 104, 2),
- VALUES(9, 204, 106, 3),
- VALUES(10, 204, 105, 4);
+ (4, 204, 104, 2),
+ (9, 204, 106, 3),
+ (10, 204, 105, 4);
COMMIT;
diff --git a/23cfree/workshops/desktop-json-enhancements/manifest.json b/23cfree/workshops/desktop-json-enhancements/manifest.json
index 9b208b3a2..a8bec7cb4 100644
--- a/23cfree/workshops/desktop-json-enhancements/manifest.json
+++ b/23cfree/workshops/desktop-json-enhancements/manifest.json
@@ -33,6 +33,11 @@
"description": "Use SQL to work with JSON",
"filename": "./../../json-sql/json-sql.md"
},
+ {
+ "title": "Lab 5: Work with JSON Search Indexes",
+ "description": "Work with JSON Search Indexes",
+ "filename": "./../../json-search/json-search.md"
+ },
{
"title": "Need Help?",
"description": "Solutions to Common Problems and Directions for Receiving Live Help",
diff --git a/23cfree/workshops/desktop-sql-domains/manifest.json b/23cfree/workshops/desktop-sql-domains/manifest.json
index 1a542c0fe..fc74e21db 100644
--- a/23cfree/workshops/desktop-sql-domains/manifest.json
+++ b/23cfree/workshops/desktop-sql-domains/manifest.json
@@ -36,10 +36,6 @@
"title": "Need Help?",
"description": "Solutions to Common Problems and Directions for Receiving Live Help",
"filename":"https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md"
- },
- {
- "title": "Oracle CloudWorld 2023 - Get Help",
- "filename": "https://oracle-livelabs.github.io/common/support/ocwsupportlab/ocwsupportlab.md"
}
]
}
diff --git a/23cfree/workshops/ocw23-sandbox-sql-domains/manifest.json b/23cfree/workshops/ocw23-sandbox-sql-domains/manifest.json
index b99b0d39a..96c1aaf92 100644
--- a/23cfree/workshops/ocw23-sandbox-sql-domains/manifest.json
+++ b/23cfree/workshops/ocw23-sandbox-sql-domains/manifest.json
@@ -9,12 +9,12 @@
"filename": "../../introduction/sql-domains-intro.md"
},
{ "title": "Get Started",
- "description": "Prerequisites for LiveLabs (Oracle-owned tenancies). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
+ "description": "Prerequisites for LiveLabs (Oracle-owned tenancies). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
"filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login-livelabs2.md"
},
{
"title": "Lab 1: Setup User",
- "description": "Change user password and start ORDS",
+ "description": "Change user password",
"type": "livelabs",
"filename": "../../change-pw/change-pw-sql.md"
},
diff --git a/23cfree/workshops/sandbox-js-generic/manifest.json b/23cfree/workshops/sandbox-js-generic/manifest.json
index bb7f21cef..d21756173 100644
--- a/23cfree/workshops/sandbox-js-generic/manifest.json
+++ b/23cfree/workshops/sandbox-js-generic/manifest.json
@@ -51,10 +51,6 @@
"title": "Need Help?",
"description": "Instructions for getting help",
"filename":"https://raw.githubusercontent.com/oracle-livelabs/common/main/labs/need-help/need-help-freetier.md"
- },
- {
- "title": "Oracle CloudWorld 2023 - Support",
- "filename": "https://oracle-livelabs.github.io/common/support/ocwsupportlab/ocwsupportlab.md"
}
]
}
diff --git a/23cfree/workshops/sandbox-json-enhancements/manifest.json b/23cfree/workshops/sandbox-json-enhancements/manifest.json
index d2cbb9d74..ae4298c04 100644
--- a/23cfree/workshops/sandbox-json-enhancements/manifest.json
+++ b/23cfree/workshops/sandbox-json-enhancements/manifest.json
@@ -33,6 +33,11 @@
"description": "Use SQL to work with JSON",
"filename": "./../../json-sql/json-sql.md"
},
+ {
+ "title": "Lab 5: Work with JSON Search Indexes",
+ "description": "Work with JSON Search Indexes",
+ "filename": "./../../json-search/json-search.md"
+ },
{
"title": "Need Help?",
"description": "Solutions to Common Problems and Directions for Receiving Live Help",
diff --git a/23cfree/workshops/sandbox-owc-duality/manifest.json b/23cfree/workshops/sandbox-owc-duality/manifest.json
index dc33ddcaf..7571c1533 100644
--- a/23cfree/workshops/sandbox-owc-duality/manifest.json
+++ b/23cfree/workshops/sandbox-owc-duality/manifest.json
@@ -44,9 +44,9 @@
"filename": "../../rest-duality/rest-duality.md"
},
{
- "title": "Oracle CloudWorld 2023 - Support",
- "description": "Template to link to Need Help lab at the end of workshop. Change 'CHANGE-ME' in link below to need-help-livelabs.md or need-help-freetier.md",
- "filename":"https://oracle-livelabs.github.io/common/support/ocwsupportlab/ocwsupportlab.md"
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename":"https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md"
}
]
}
diff --git a/23cfree/workshops/sandbox-sql-domains/manifest.json b/23cfree/workshops/sandbox-sql-domains/manifest.json
index 8e8a76ecd..9da7308f7 100644
--- a/23cfree/workshops/sandbox-sql-domains/manifest.json
+++ b/23cfree/workshops/sandbox-sql-domains/manifest.json
@@ -9,21 +9,20 @@
"filename": "../../introduction/sql-domains-intro.md"
},
{ "title": "Get Started",
- "description": "Prerequisites for LiveLabs (Oracle-owned tenancies). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
+ "description": "Prerequisites for LiveLabs (Oracle-owned tenancies). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
"filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login-livelabs2.md"
},
{
"title": "Lab 1: Setup User",
- "description": "Change user password and start ORDS",
+ "description": "Change user password",
"type": "livelabs",
"filename": "../../change-pw/change-pw-sql.md"
},
{
"title": "Lab 2: Power Up with 23c SQL Features",
"description": "Power up with new features in Oracle Database 23c",
- "type": "livelabs",
- "filename": "../../sql-23c-features/sql-23c-features.md",
- "type": "sql-features"
+ "type": "sql-features",
+ "filename": "../../sql-23c-features/sql-23c-features.md"
},
{
"title": "Lab 3: Leverage SQL Domains",
@@ -37,4 +36,4 @@
"filename":"https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md"
}
]
-}
+}
\ No newline at end of file
diff --git a/23cfree/workshops/tenancy-json-enhancements/manifest.json b/23cfree/workshops/tenancy-json-enhancements/manifest.json
index da961fb96..a56a2b562 100644
--- a/23cfree/workshops/tenancy-json-enhancements/manifest.json
+++ b/23cfree/workshops/tenancy-json-enhancements/manifest.json
@@ -49,6 +49,11 @@
"description": "Use SQL to work with JSON",
"filename": "./../../json-sql/json-sql.md"
},
+ {
+ "title": "Lab 8: Work with JSON Search Indexes",
+ "description": "Work with JSON Search Indexes",
+ "filename": "./../../json-search/json-search.md"
+ },
{
"title": "Need Help?",
"description": "Solutions to Common Problems and Directions for Receiving Live Help",
diff --git a/create-tables-nosql-database/run-sample-application/run-sample-application.md b/create-tables-nosql-database/run-sample-application/run-sample-application.md
index efff991ee..562c5014a 100644
--- a/create-tables-nosql-database/run-sample-application/run-sample-application.md
+++ b/create-tables-nosql-database/run-sample-application/run-sample-application.md
@@ -39,7 +39,7 @@ This workshop contains different language implementation in the form of differen
 <groupId>com.oracle.nosql.sdk</groupId>
 <artifactId>nosqldriver</artifactId>
-<version>5.2.27</version>
+<version></version>
@@ -63,7 +63,7 @@ This workshop contains different language implementation in the form of differen
```
-**Note:** The latest SDK can be found here [Oracle NoSQL Database SDK For Java](https://mvnrepository.com/artifact/com.oracle.nosql.sdk/nosqldriver). You can update the `pom.xml` file with this latest SDK version number.
+**Note:** The latest SDK can be found here [Oracle NoSQL Database SDK For Java](https://mvnrepository.com/artifact/com.oracle.nosql.sdk/nosqldriver). Please make sure to replace the placeholder for the version of the Oracle NoSQL Java SDK in the `pom.xml` file with the exact SDK version number.
@@ -73,6 +73,10 @@ This workshop contains different language implementation in the form of differen
```
pip3 install borneo
```
+3. If you are using the Oracle NoSQL Database Cloud Service, you will also need to install the `oci` package:
+```
+ pip3 install oci
+```
@@ -101,35 +105,35 @@ This workshop contains different language implementation in the form of differen
1. Open the [Node.js Download](https://nodejs.org/en/) and download Node.js for your operating system. Ensure that Node Package Manager (npm) is installed along with Node.js.
2. Install the node SDK for Oracle NoSQL Database.
- ```
-
- npm install oracle-nosqldb
-
- ```
- With the above command, npm will create node_modules directory in the current directory and install it there.
+ ```
+
+ npm install oracle-nosqldb
+
+ ```
- Another option is to install the SDK globally:
+ With the above command, npm will create node_modules directory in the current directory and install it there.
- ```
-
- npm install -g oracle-nosqldb
-
- ```
- You can do one of the above options depending on the permissions you have.
+ Another option is to install the SDK globally:
+
+ ```
+
+ npm install -g oracle-nosqldb
+
+ ```
+You can do one of the above options depending on the permissions you have.
You can add the SDK NuGet Package as a reference to your project by using .Net CLI:
-1. Go to your project directory
+1. Make sure you have [.NET](https://dotnet.microsoft.com/en-us/download) installed on your system.
+2. Run the following command to create your project directory.
```
-cd
-//This will create a project called HelloWorld
- dotnet new console -o HelloWorld
+dotnet new console -o HelloWorld
```
-2. Add the SDK package to your project.
+3. Go to your project directory and add the SDK package to your project.
```
dotnet add package Oracle.NoSQL.SDK
@@ -144,27 +148,28 @@ cd
2. Review the sample application. You can access the [JavaAPI Reference Guide](https://docs.oracle.com/en/cloud/paas/nosql-cloud/csnjv/index.html) to reference Java classes, methods, and interfaces included in this sample application.
- Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.java](https://objectstorage.us-ashburn-1.oraclecloud.com/p/qCpBRv5juyWwIF4dv9h98YWCDD50574Y6OwsIHhEMgI/n/c4u03/b/data-management-library-files/o/HelloWorld.java), replace the placeholder of the compartment in the function ```setDefaultCompartment``` with the OCID of your compartment. Save the file and close it.
+ Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.java](https://objectstorage.us-ashburn-1.oraclecloud.com/p/qCpBRv5juyWwIF4dv9h98YWCDD50574Y6OwsIHhEMgI/n/c4u03/b/data-management-library-files/o/HelloWorld.java), replace the placeholder of the compartment in the function ```setDefaultCompartment``` with the OCID of your compartment. Replace the placeholder for region with the name of your region. Save the file and close it.
3. From your home directory, navigate to ".oci" directory.
- ```
-
- cd ~
- cd .oci
-
- ```
+```
+
+cd ~
+cd .oci
+
+```
+
Use `vi` or `nano` or any text editor to create a file named `config` in the `.oci` directory.
- ```
-
- [DEFAULT]
- user=USER-OCID
- fingerprint=FINGERPRINT-VALUE
- tenancy=TENANCY-OCID
- key_file=
-
- ```
+```
+
+[DEFAULT]
+user=USER-OCID
+fingerprint=FINGERPRINT-VALUE
+tenancy=TENANCY-OCID
+key_file=
+
+```
Replace [USER-OCID] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your note pad, FINGERPRINT-VALUE with your API key fingerprint, TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
![View config file](images/config-file.png)
When `SignatureProvider` is constructed without any parameters, the default [Configuration File](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm) is located in the `~/.oci/config` directory.
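The `config` file uses INI syntax, so its contents can be sanity-checked with Python's standard `configparser` before running the sample — a small sketch in which the field values are obviously placeholders:

```python
import configparser

# Example ~/.oci/config contents, with placeholder values.
SAMPLE = """
[DEFAULT]
user=ocid1.user.oc1..example
fingerprint=aa:bb:cc:dd
tenancy=ocid1.tenancy.oc1..example
key_file=/home/opc/.oci/oci_api_key.pem
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Verify every key the SDK's signature provider expects is present.
required = ("user", "fingerprint", "tenancy", "key_file")
missing = [k for k in required if k not in cfg["DEFAULT"]]
print(missing)  # [] when the profile is complete
```

A non-empty `missing` list points at exactly which credential line still needs to be filled in.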
@@ -187,21 +192,21 @@ $ mvn exec:java -Dexec.mainClass=HelloWorld
2. Review the sample application. You can access the [Python API Reference Guide](https://nosql-python-sdk.readthedocs.io/en/latest/api.html) to reference Python classes and methods included in this sample application.
- Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.py](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.py), replace the placeholder of the compartment in the function ```set_default_compartment``` with the OCID of your compartment. Save the file and close it.
+ Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.py](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.py), replace the placeholder of the compartment in the function ```set_default_compartment``` with the OCID of your compartment. Replace the placeholder for region with the name of your region. Save the file and close it.
3. From your home directory, navigate to ".oci" directory. Create a file named `config` in the `.oci` directory. Add OCID, tenancy ID, fingerprint & key credentials in the `config` file.
- ```
-
- [DEFAULT]
- user=USER-OCID
- fingerprint=FINGERPRINT-VALUE
- tenancy=TENANCY-OCID
- key_file=
-
- ```
- Replace [USER-OCID] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your note pad, FINGERPRINT-VALUE with your API key fingerprint, TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
- ![View config file](images/config-file.png)
+```
+
+[DEFAULT]
+user=USER-OCID
+fingerprint=FINGERPRINT-VALUE
+tenancy=TENANCY-OCID
+key_file=
+
+```
+Replace [USER-OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your notepad, FINGERPRINT-VALUE with your API key fingerprint, and TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
+![View config file](images/config-file.png)
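A quick way to catch a malformed `config` before running the samples is to parse it and confirm the four required keys are present. The snippet below is a minimal sketch, not part of the workshop files; the path and profile name are the defaults the OCI SDKs look for.

```python
# Sanity-check the ~/.oci/config file described above.
# The path and the DEFAULT profile are the SDK defaults; adjust if yours differ.
import configparser
import os

def check_oci_config(path=os.path.expanduser("~/.oci/config"), profile="DEFAULT"):
    """Return a list of missing/empty required keys (an empty list means OK)."""
    required = ("user", "fingerprint", "tenancy", "key_file")
    parser = configparser.ConfigParser()
    parser.read(path)  # silently yields an empty config if the file is absent
    if profile not in parser:
        return list(required)
    return [key for key in required if not parser[profile].get(key)]

if __name__ == "__main__":
    missing = check_oci_config()
    print("Config OK" if not missing else f"Missing values: {missing}")
```

If any key is reported missing or empty, re-check the values you recorded in Lab 1 before running the sample application.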
4. Execute the sample application:
Open the Command Prompt, and navigate to the directory where you saved the `HelloWorld.py` program.
@@ -209,7 +214,7 @@ $ mvn exec:java -Dexec.mainClass=HelloWorld
```
- python HelloWorld.py
+ python3 HelloWorld.py
```
You get the following output:
@@ -233,21 +238,20 @@ $ mvn exec:java -Dexec.mainClass=HelloWorld
2. Review the sample application. You can access the [Go API docs](https://pkg.go.dev/github.com/oracle/nosql-go-sdk/nosqldb?utm_source=godoc) to reference Go classes and methods included in this sample application.
- Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.go](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.go) , replace the placeholder of the compartment in the constructor of ```NewSignatureProviderFromFile``` with the OCID of your compartment. Save the file and close it.
+ Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.go](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.go) and replace the placeholder for the compartment in the constructor of ```NewSignatureProviderFromFile``` with the OCID of your compartment. Replace the placeholder for region with the name of your region. Save the file and close it.
-3.From your home directory, navigate to ".oci" directory. Create a file named `config` in the `.oci` directory. Add OCID, tenancy ID, fingerprint & key credentials in the `config` file.
-
- ```
-
- [DEFAULT]
- user=USER-OCID
- fingerprint=FINGERPRINT-VALUE
- tenancy=TENANCY-OCID
- key_file=
-
- ```
- Replace [USER-OCID] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your note pad, FINGERPRINT-VALUE with your API key fingerprint, TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
- ![View config file](images/config-file.png)
+3. From your home directory, navigate to ".oci" directory. Create a file named `config` in the `.oci` directory. Add OCID, tenancy ID, fingerprint & key credentials in the `config` file.
+```
+
+[DEFAULT]
+user=USER-OCID
+fingerprint=FINGERPRINT-VALUE
+tenancy=TENANCY-OCID
+key_file=
+
+```
+Replace [USER-OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your notepad, FINGERPRINT-VALUE with your API key fingerprint, and TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
+![View config file](images/config-file.png)
4. Execute the sample application:
Initialize a new module for the example program.
@@ -257,6 +261,27 @@ $ mvn exec:java -Dexec.mainClass=HelloWorld
go mod init example.com/HelloWorld
```
+ You see the following output:
+ ```
+
+ go: creating new go.mod: module example.com/HelloWorld
+ go: to add module requirements and sums:
+ go mod tidy
+
+ ```
+ Run `go mod tidy` to add the required modules to the project:
+ ```
+
+ go mod tidy
+ go: finding module for package github.com/oracle/nosql-go-sdk/nosqldb/auth/iam
+ go: finding module for package github.com/oracle/nosql-go-sdk/nosqldb
+ go: finding module for package github.com/oracle/nosql-go-sdk/nosqldb/common
+ go: found github.com/oracle/nosql-go-sdk/nosqldb in github.com/oracle/nosql-go-sdk v1.4.0
+ go: found github.com/oracle/nosql-go-sdk/nosqldb/auth/iam in github.com/oracle/nosql-go-sdk v1.4.0
+ go: found github.com/oracle/nosql-go-sdk/nosqldb/common in github.com/oracle/nosql-go-sdk v1.4.0
+
+ ```
+
Build the HelloWorld application.
```
@@ -280,60 +305,66 @@ $ mvn exec:java -Dexec.mainClass=HelloWorld
2. Review the sample application. You can access the [Node.js API Reference Guide](https://oracle.github.io/nosql-node-sdk/index.html) to reference Node.js classes and methods included in this sample application.
- Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.js](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.js), replace the placeholder of the compartment in the ```NoSQLClient``` constructor with the OCID of your compartment. Save the file and close it.
+ Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.js](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.js) and replace the placeholder for the compartment in the ```NoSQLClient``` constructor with the OCID of your compartment. Replace the placeholder for region with the name of your region. Save the file and close it.
3. From your home directory, navigate to ".oci" directory. Create a file named `config` in the `.oci` directory. Add OCID, tenancy ID, fingerprint & key credentials in the `config` file.
- ```
-
- [DEFAULT]
- user=USER-OCID
- fingerprint=FINGERPRINT-VALUE
- tenancy=TENANCY-OCID
- key_file=
-
- ```
- Replace [USER-OCID] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your note pad, FINGERPRINT-VALUE with your API key fingerprint, TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
- ![View config file](images/config-file.png)
+```
+
+[DEFAULT]
+user=USER-OCID
+fingerprint=FINGERPRINT-VALUE
+tenancy=TENANCY-OCID
+key_file=
+
+```
+Replace [USER-OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your notepad, FINGERPRINT-VALUE with your API key fingerprint, and TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
+![View config file](images/config-file.png)
4. Execute the Sample Application
Open the Command Prompt, and navigate to the directory where you saved the `HelloWorld.js` program.
Execute the HelloWorld program.
- ```
-
- node HelloWorld.js
-
- ```
+ ```
+
+ node HelloWorld.js
+
+ ```
*Note: In the main method of `HelloWorld.js`, the `dropTable(handle)` is commented out to allow you to see the result of creating the tables in the Oracle Cloud Console.*
1. Download the provided [HelloWorld.cs](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.cs) file and move it to your home directory.
2. Review the sample application. You can access the [.NET API Reference Guide](https://oracle.github.io/nosql-dotnet-sdk/index.html) to reference .NET classes and methods included in this sample application.
- Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.cs](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.cs), replace the placeholder of the compartment in the ```NoSQLClient``` constructor with the OCID of your compartment. Also modify the region parameter to your home region in the code ( For example if your home region is Ashburn set the region parameter as ```Region = Region.US_ASHBURN_1```). Save the file and close it.
+ Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped to that compartment. When authenticated as a specific user, your tables are managed in the root compartment of your tenancy unless otherwise specified. It is recommended not to create tables in the "root" compartment, but to create them in your own compartment created under "root". Edit the code [HelloWorld.cs](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/HelloWorld.cs) and replace the placeholder for the compartment in the ```NoSQLClient``` constructor with the OCID of your compartment. Replace the placeholder for region with the name of your region. Save the file and close it.
3. From your home directory, navigate to ".oci" directory. Create a file named `config` in the `.oci` directory. Add OCID, tenancy ID, fingerprint & key credentials in the `config` file.
- ```
-
- [DEFAULT]
- user=USER-OCID
- fingerprint=FINGERPRINT-VALUE
- tenancy=TENANCY-OCID
- key_file=
-
- ```
- Replace [USER-OCID] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your note pad, FINGERPRINT-VALUE with your API key fingerprint, TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file] (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
- ![View config file](images/config-file.png)
-
-4. Go to your project directory. Under this directory, you will see the example source code ```Program.cs```. Overwrite the content of this file with the content of ```HelloWorld.cs```.
+```
+
+[DEFAULT]
+user=USER-OCID
+fingerprint=FINGERPRINT-VALUE
+tenancy=TENANCY-OCID
+key_file=
+
+```
+Replace [USER-OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five) with the value you copied on your notepad, FINGERPRINT-VALUE with your API key fingerprint, and TENANCY-OCID with your [tenancy OCID](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five). The [key_file](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#How) is the private key that you generated. You should have noted these values in a text file as you've been working through this workshop. Use the values recorded from Lab 1.
+![View config file](images/config-file.png)
+
+4. Go to your project directory. You will see the example source code ```Program.cs```. Remove this file.
+```
+
+ rm Program.cs
+
+```
Build and run your project as shown below.
-*Note: You have multiple dotnet target frameworks which are supported. Currently the supported frameworks are .NET Core 3.1 and .NET 5.0, so you must specify the target framework to use. The command below will automatically download and install Oracle NoSQL Database SDK package as a dependency of your project.*
+
+*Note: Multiple .NET target frameworks are supported, currently .NET 7.0 and higher.*
```
- dotnet run -f net5.0
+dotnet run
```
*Note: In the RunBasicExample method of `HelloWorld.cs`, the section to drop table is commented out to allow you to see the result of creating the tables in the Oracle Cloud Console.*
@@ -382,4 +413,4 @@ This application accesses Oracle NoSQL Database Cloud Service, but most likely y
## Acknowledgements
* **Author** - Dave Rubin, Senior Director, NoSQL and Embedded Database Development and Michael Brey, Director, NoSQL Product Development
* **Contributors** - Jaden McElvey, Technical Lead - Oracle LiveLabs Intern
-* **Last Updated By/Date** -Vandana Rajamani, Database User Assistance, February 2023
+* **Last Updated By/Date** - Vandana Rajamani, Database User Assistance, January 2024
diff --git a/datapump-x-platform-migration/00-prepare-setup/00-prepare-setup.md b/datapump-x-platform-migration/00-prepare-setup/00-prepare-setup.md
new file mode 100644
index 000000000..ae0bec8d1
--- /dev/null
+++ b/datapump-x-platform-migration/00-prepare-setup/00-prepare-setup.md
@@ -0,0 +1,65 @@
+# Prepare Setup
+
+## Introduction
+
+In this lab, you will download the Oracle Resource Manager (ORM) stack zip file used to set up the resources required to run this workshop: a compute instance and a Virtual Cloud Network (VCN).
+
+Estimated Time: 15 minutes
+
+### Objectives
+
+- Download ORM stack
+- Configure an existing Virtual Cloud Network (VCN)
+
+### Prerequisites
+
+This lab assumes you have:
+
+- An Oracle Cloud account
+
+## Task 1: Download Oracle Resource Manager (ORM) stack zip file
+
+1. Click on the link below to download the Resource Manager zip file you need to build your environment: [xtts.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/upgrade-and-patching/xtts.zip)
+
+2. Save it in your downloads folder.
+
+We strongly recommend using this stack to create a self-contained/dedicated VCN with your instance(s). Skip to *Task 3* to follow our recommendations. If you would rather use an existing VCN, proceed to the next task to update it with the required ingress rules.
+
+## Task 2: Adding security rules to an existing VCN
+
+This workshop requires a certain number of ports to be available, a requirement that can be met by using the default ORM stack execution that creates a dedicated VCN. To use an existing VCN instead, add ingress rules for the following ports:
+
+| Port |Description |
+| :------------- | :------------------------------------ |
+| 22 | SSH |
+| 6080 | Remote Desktop noVNC |
+
+1. Go to *Networking >> Virtual Cloud Networks*
+
+2. Choose your network
+
+3. Under Resources, select Security Lists
+
+4. Click on Default Security Lists under the Create Security List button
+
+5. Click Add Ingress Rule button
+
+6. Enter the following:
+ - Source CIDR: 0.0.0.0/0
+ - Destination Port Range: *Refer to above table*
+
+7. Click the Add Ingress Rules button
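Once the rules are in place and your instance is up, you can confirm that the two ports from the table are reachable. This is an optional sketch, not part of the workshop files; the host address below is a placeholder you would replace with your instance's public IP.

```python
# Check TCP reachability of the workshop ports (22 for SSH, 6080 for noVNC).
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    host = "203.0.113.10"  # placeholder address; replace with your instance's public IP
    for port in (22, 6080):
        state = "open" if port_open(host, port, timeout=1.0) else "closed/filtered"
        print(f"port {port}: {state}")
```

If a port reports closed/filtered after the instance is running, re-check the ingress rules added in the steps above.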
+
+## Task 3: Setup compute
+
+Using the details from the two steps above, proceed to the lab *Environment Setup* to setup your workshop environment using Oracle Resource Manager (ORM) and one of the following options:
+ - Create Stack: *Compute + Networking*
+ - Create Stack: *Compute only* with an existing VCN where security lists have been updated as per *Task 2* above
+
+You may now *proceed to the next lab*.
+
+## Acknowledgements
+
+* **Author** - Rene Fontcha, LiveLabs Platform Lead, NA Technology
+* **Contributors** - Meghana Banka, Rene Fontcha, Narayanan Ramakrishnan
+* **Last Updated By/Date** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, January 2021
diff --git a/datapump-x-platform-migration/workshops/freetier/manifest.json b/datapump-x-platform-migration/workshops/freetier/manifest.json
index 01dec785e..11e82d843 100644
--- a/datapump-x-platform-migration/workshops/freetier/manifest.json
+++ b/datapump-x-platform-migration/workshops/freetier/manifest.json
@@ -12,7 +12,19 @@
"title": "Get Started",
"description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
"filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
- },
+ },
+ {
+ "title": "Prepare Setup",
+ "description": "How to download your ORM stack and update security rules for an existing VCN",
+ "publisheddate": "09/28/2020",
+ "filename": "../../00-prepare-setup/00-prepare-setup.md"
+ },
+ {
+ "title": "Environment Setup",
+ "description": "How to provision the workshop environment and connect to it",
+ "publisheddate": "06/30/2020",
+ "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc.md"
+ },
{
"title": "Lab 1: Prepare the Target",
"description": "Adding a PDB to the Target CDB",
diff --git a/db-quickstart/db-connect-sqldev-web/db-connect-sqldev-web.md b/db-quickstart/db-connect-sqldev-web/db-connect-sqldev-web.md
deleted file mode 100644
index 1eee216fe..000000000
--- a/db-quickstart/db-connect-sqldev-web/db-connect-sqldev-web.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# Connect to the Database Using SQL Worksheet
-
-## Introduction
-
-In this lab, you will connect to the database using SQL Worksheet, a browser-based tool that is easily accessible from the Autonomous Data Warehouse or Autonomous Transaction Processing console.
-
-Estimated lab time: 5 minutes
-
-### Objectives
-
-- Learn how to connect to your new autonomous database using SQL Worksheet
-
-### Prerequisites
-
-- This lab requires completion of the prior labs in this workshop: **Get Started** and **Provision an Autonomous Database**, in the Contents menu on the left.
-
-## Task: Connect with SQL Worksheet
-
-Although you can connect to your autonomous database from local PC desktop tools like Oracle SQL Developer, you can conveniently access the browser-based SQL Worksheet directly from your Autonomous Data Warehouse or Autonomous Transaction Processing console.
-
-1. In your database's details page, click the **Database Actions** button.
-
- ![Click the Database Actions button](./images/click-database-actions-button.png " ")
-
-2. A sign-in page opens for Database Actions. For this lab, simply use your database instance's default administrator account, **Username - admin**, and click **Next**.
-
- **Note:** The first time you open Database Actions, you will be prompted for username and password. Subsequently, when you open Database Actions you may not be prompted for username and password.
-
- ![Enter the admin username.](./images/enter-admin-username.png " ")
-
-3. Enter the Administrator **Password** you specified when creating the database. Click **Sign in**.
-
- ![Enter the admin password.](./images/enter-admin-password.png " ")
-
-4. The Database Actions page opens. In the **Development** box, click **SQL** to open a SQL Worksheet.
-
- ![Click on SQL.](./images/click-sql.png " ")
-
-5. The first time you open SQL Worksheet, a series of pop-up informational boxes introduce you to the main features. Click **Next** to take a tour through the informational boxes.
-
- ![Click Next to take tour.](./images/click-next-to-take-tour.png " ")
-
- After touring through the informational boxes, keep this SQL Worksheet open.
-
- Please **proceed to the next lab.**
-
-## Want to Learn More?
-
-Click [here](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/sql-developer-web.html#GUID-102845D9-6855-4944-8937-5C688939610F) for documentation on connecting with the built-in SQL Developer Web.
-
-## Acknowledgements
-
-- **Author** - Richard Green, Principal Developer, Database User Assistance
-- **Last Updated By/Date** - Shilpa Sharma, March 2023
diff --git a/db-quickstart/db-connect-sqldev-web/files/create_external_tables_without_base_url.txt b/db-quickstart/db-connect-sqldev-web/files/create_external_tables_without_base_url.txt
deleted file mode 100644
index 807433322..000000000
--- a/db-quickstart/db-connect-sqldev-web/files/create_external_tables_without_base_url.txt
+++ /dev/null
@@ -1,237 +0,0 @@
-/* Specify the URL that you copied from your files in OCI Object Storage in the define base_URL line below*/
-/* change idthydc0kinr to your real namespace. The name is case-sensitive. */
-/* change ADWCLab to your real bucket name. The name is case-sensitive. */
-/* change us-phoenix-1 to your real region name. The name is case-sensitive. */
-/* you can find these values on the OCI Console .. Storage .. Object Storage screen */
-
-/* set define on */
-
-/* define base_URL='https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o' */
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'CHANNELS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/chan_v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true'),
- column_list => 'CHANNEL_ID NUMBER,
- CHANNEL_DESC VARCHAR2(20),
- CHANNEL_CLASS VARCHAR2(20),
- CHANNEL_CLASS_ID NUMBER,
- CHANNEL_TOTAL VARCHAR2(13),
- CHANNEL_TOTAL_ID NUMBER'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'COUNTRIES_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/coun_v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true'),
- column_list => 'COUNTRY_ID NUMBER ,
- COUNTRY_ISO_CODE CHAR(2) ,
- COUNTRY_NAME VARCHAR2(40) ,
- COUNTRY_SUBREGION VARCHAR2(30) ,
- COUNTRY_SUBREGION_ID NUMBER ,
- COUNTRY_REGION VARCHAR2(20) ,
- COUNTRY_REGION_ID NUMBER ,
- COUNTRY_TOTAL VARCHAR2(11) ,
- COUNTRY_TOTAL_ID NUMBER ,
- COUNTRY_NAME_HIST VARCHAR2(40)'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'CUSTOMERS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/cust1v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS'),
- column_list => 'CUST_ID NUMBER ,
- CUST_FIRST_NAME VARCHAR2(20) ,
- CUST_LAST_NAME VARCHAR2(40) ,
- CUST_GENDER CHAR(1) ,
- CUST_YEAR_OF_BIRTH NUMBER(4,0) ,
- CUST_MARITAL_STATUS VARCHAR2(20),
- CUST_STREET_ADDRESS VARCHAR2(40) ,
- CUST_POSTAL_CODE VARCHAR2(10) ,
- CUST_CITY VARCHAR2(30) ,
- CUST_CITY_ID NUMBER ,
- CUST_STATE_PROVINCE VARCHAR2(40) ,
- CUST_STATE_PROVINCE_ID NUMBER ,
- COUNTRY_ID NUMBER ,
- CUST_MAIN_PHONE_NUMBER VARCHAR2(25) ,
- CUST_INCOME_LEVEL VARCHAR2(30),
- CUST_CREDIT_LIMIT NUMBER,
- CUST_EMAIL VARCHAR2(50),
- CUST_TOTAL VARCHAR2(14) ,
- CUST_TOTAL_ID NUMBER ,
- CUST_SRC_ID NUMBER,
- CUST_EFF_FROM DATE,
- CUST_EFF_TO DATE,
- CUST_VALID VARCHAR2(1)'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'SUPPLEMENTARY_DEMOGRAPHICS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/dem1v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true'),
- column_list => 'CUST_ID NUMBER ,
- EDUCATION VARCHAR2(21),
- OCCUPATION VARCHAR2(21),
- HOUSEHOLD_SIZE VARCHAR2(21),
- YRS_RESIDENCE NUMBER,
- AFFINITY_CARD NUMBER(10,0),
- BULK_PACK_DISKETTES NUMBER(10,0),
- FLAT_PANEL_MONITOR NUMBER(10,0),
- HOME_THEATER_PACKAGE NUMBER(10,0),
- BOOKKEEPING_APPLICATION NUMBER(10,0),
- PRINTER_SUPPLIES NUMBER(10,0),
- Y_BOX_GAMES NUMBER(10,0),
- OS_DOC_SET_KANJI NUMBER(10,0),
- COMMENTS VARCHAR2(4000)'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'PRODUCTS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/prod1v3.dat',
- format => json_object('delimiter' value '|', 'quote' value '^', 'ignoremissingcolumns' value 'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true'),
- column_list => 'PROD_ID NUMBER(6,0) ,
- PROD_NAME VARCHAR2(50) ,
- PROD_DESC VARCHAR2(4000) ,
- PROD_SUBCATEGORY VARCHAR2(50) ,
- PROD_SUBCATEGORY_ID NUMBER ,
- PROD_SUBCATEGORY_DESC VARCHAR2(2000) ,
- PROD_CATEGORY VARCHAR2(50) ,
- PROD_CATEGORY_ID NUMBER ,
- PROD_CATEGORY_DESC VARCHAR2(2000) ,
- PROD_WEIGHT_CLASS NUMBER(3,0) ,
- PROD_UNIT_OF_MEASURE VARCHAR2(20),
- PROD_PACK_SIZE VARCHAR2(30) ,
- SUPPLIER_ID NUMBER(6,0) ,
- PROD_STATUS VARCHAR2(20) ,
- PROD_LIST_PRICE NUMBER(8,2) ,
- PROD_MIN_PRICE NUMBER(8,2) ,
- PROD_TOTAL VARCHAR2(13) ,
- PROD_TOTAL_ID NUMBER ,
- PROD_SRC_ID NUMBER,
- PROD_EFF_FROM DATE,
- PROD_EFF_TO DATE,
- PROD_VALID VARCHAR2(1)'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'PROMOTIONS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/prom1v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true'),
- column_list => 'PROMO_ID NUMBER(6,0) ,
- PROMO_NAME VARCHAR2(30) ,
- PROMO_SUBCATEGORY VARCHAR2(30) ,
- PROMO_SUBCATEGORY_ID NUMBER ,
- PROMO_CATEGORY VARCHAR2(30) ,
- PROMO_CATEGORY_ID NUMBER ,
- PROMO_COST NUMBER(10,2) ,
- PROMO_BEGIN_DATE DATE ,
- PROMO_END_DATE DATE ,
- PROMO_TOTAL VARCHAR2(15) ,
- PROMO_TOTAL_ID NUMBER '
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'SALES_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/sale1v3.dat,&base_URL/dmsal_v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD', 'blankasnull' value 'true'),
- column_list => 'PROD_ID NUMBER ,
- CUST_ID NUMBER ,
- TIME_ID DATE ,
- CHANNEL_ID NUMBER ,
- PROMO_ID NUMBER ,
- QUANTITY_SOLD NUMBER(10,2) ,
- AMOUNT_SOLD NUMBER(10,2)'
- );
-end;
-/
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'TIMES_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/time_v3.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true', 'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true'),
- column_list => 'TIME_ID DATE ,
- DAY_NAME VARCHAR2(9) ,
- DAY_NUMBER_IN_WEEK NUMBER(1,0) ,
- DAY_NUMBER_IN_MONTH NUMBER(2,0) ,
- CALENDAR_WEEK_NUMBER NUMBER(2,0) ,
- FISCAL_WEEK_NUMBER NUMBER(2,0) ,
- WEEK_ENDING_DAY DATE ,
- WEEK_ENDING_DAY_ID NUMBER ,
- CALENDAR_MONTH_NUMBER NUMBER(2,0) ,
- FISCAL_MONTH_NUMBER NUMBER(2,0) ,
- CALENDAR_MONTH_DESC VARCHAR2(8) ,
- CALENDAR_MONTH_ID NUMBER ,
- FISCAL_MONTH_DESC VARCHAR2(8) ,
- FISCAL_MONTH_ID NUMBER ,
- DAYS_IN_CAL_MONTH NUMBER ,
- DAYS_IN_FIS_MONTH NUMBER ,
- END_OF_CAL_MONTH DATE ,
- END_OF_FIS_MONTH DATE ,
- CALENDAR_MONTH_NAME VARCHAR2(9) ,
- FISCAL_MONTH_NAME VARCHAR2(9) ,
- CALENDAR_QUARTER_DESC CHAR(7) ,
- CALENDAR_QUARTER_ID NUMBER ,
- FISCAL_QUARTER_DESC CHAR(7) ,
- FISCAL_QUARTER_ID NUMBER ,
- DAYS_IN_CAL_QUARTER NUMBER ,
- DAYS_IN_FIS_QUARTER NUMBER ,
- END_OF_CAL_QUARTER DATE ,
- END_OF_FIS_QUARTER DATE ,
- CALENDAR_QUARTER_NUMBER NUMBER(1,0) ,
- FISCAL_QUARTER_NUMBER NUMBER(1,0) ,
- CALENDAR_YEAR NUMBER(4,0) ,
- CALENDAR_YEAR_ID NUMBER ,
- FISCAL_YEAR NUMBER(4,0) ,
- FISCAL_YEAR_ID NUMBER ,
- DAYS_IN_CAL_YEAR NUMBER ,
- DAYS_IN_FIS_YEAR NUMBER ,
- END_OF_CAL_YEAR DATE ,
- END_OF_FIS_YEAR DATE '
- );
-end;
-/
-
-
-begin
- dbms_cloud.create_external_table(
- table_name =>'COSTS_EXT',
- credential_name =>'OBJ_STORE_CRED',
- file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/idthydc0kinr/b/ADWCLab/o/costs.dat',
- format => json_object('ignoremissingcolumns' value 'true', 'dateformat' value 'YYYY-MM-DD', 'blankasnull' value 'true'),
- column_list => 'PROD_ID NUMBER ,
- TIME_ID DATE ,
- PROMO_ID NUMBER ,
- CHANNEL_ID NUMBER ,
- UNIT_COST NUMBER(10,2) ,
- UNIT_PRICE NUMBER(10,2) '
- );
-end;
-/
diff --git a/db-quickstart/db-connect-sqldev-web/files/query_external_data.txt b/db-quickstart/db-connect-sqldev-web/files/query_external_data.txt
deleted file mode 100644
index 7dcc1fb2b..000000000
--- a/db-quickstart/db-connect-sqldev-web/files/query_external_data.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-SELECT c.cust_id, t.calendar_quarter_desc, TO_CHAR (SUM(amount_sold),
- '9,999,999,999.99') AS Q_SALES, TO_CHAR(SUM(SUM(amount_sold))
-OVER (PARTITION BY c.cust_id ORDER BY c.cust_id, t.calendar_quarter_desc
-ROWS UNBOUNDED
-PRECEDING), '9,999,999,999.99') AS CUM_SALES
- FROM sales_ext s, times t, customers_ext c
- WHERE s.time_id=t.time_id AND s.cust_id=c.cust_id AND t.calendar_year=2000
- AND c.cust_id IN (2595, 9646, 11111)
- GROUP BY c.cust_id, t.calendar_quarter_desc
- ORDER BY c.cust_id, t.calendar_quarter_desc;
diff --git a/db-quickstart/db-connect-sqldev-web/images/click-database-actions-button.png b/db-quickstart/db-connect-sqldev-web/images/click-database-actions-button.png
deleted file mode 100644
index a8b6b075e..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/click-database-actions-button.png and /dev/null differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/click-sql.png b/db-quickstart/db-connect-sqldev-web/images/click-sql.png
deleted file mode 100644
index 0e7ccfe6f..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/click-sql.png and /dev/null differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/database-actions-launch.png b/db-quickstart/db-connect-sqldev-web/images/database-actions-launch.png
deleted file mode 100644
index e1d7f5164..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/database-actions-launch.png and /dev/null differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/enter-admin-password.png b/db-quickstart/db-connect-sqldev-web/images/enter-admin-password.png
deleted file mode 100644
index 8bfe0d7b6..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/enter-admin-password.png and /dev/null differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/enter-admin-username.png b/db-quickstart/db-connect-sqldev-web/images/enter-admin-username.png
deleted file mode 100644
index 281520c75..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/enter-admin-username.png and /dev/null differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/warning.png b/db-quickstart/db-connect-sqldev-web/images/warning.png
deleted file mode 100644
index a4469452d..000000000
Binary files a/db-quickstart/db-connect-sqldev-web/images/warning.png and /dev/null differ
diff --git a/db-quickstart/db-create-schema/db-create-schema.md b/db-quickstart/db-create-schema/db-create-schema.md
index 9d5113e0e..b8bcc8be4 100644
--- a/db-quickstart/db-create-schema/db-create-schema.md
+++ b/db-quickstart/db-create-schema/db-create-schema.md
@@ -307,4 +307,4 @@ Click [here](https://docs.oracle.com/en/database/oracle/oracle-database/19/cncpt
- **Author** - Rick Green, Principal Developer, Database User Assistance
- **Contributor** - Supriya Ananth
- **Adapted for Cloud by** - Rick Green
-- **Last Updated By/Date** - Shilpa Sharma, March 2023
+- **Last Updated By/Date** - Katherine Wardhana, November 2023
diff --git a/db-quickstart/db-familiarize-sh/db-familiarize-sh.md b/db-quickstart/db-familiarize-sh/db-familiarize-sh.md
index 5b450133d..938a76f2c 100644
--- a/db-quickstart/db-familiarize-sh/db-familiarize-sh.md
+++ b/db-quickstart/db-familiarize-sh/db-familiarize-sh.md
@@ -1,22 +1,38 @@
-# Familiarize with the Sales History Sample Schema
+# Familiarize with the SH Sample Schema using SQL Worksheet
## Introduction
-In this lab, you examine the structures and data in the Sales History (SH) sample schema that comes with the database.
+In this lab, you will connect to the database using SQL Worksheet, a browser-based tool that is easily accessible from the Autonomous Data Warehouse or Autonomous Transaction Processing console, and examine the structures and data in the Sales History (SH) sample schema that comes with the database.
Estimated lab time: 10 minutes
### Objectives
+- Learn how to connect to your new autonomous database using SQL Worksheet
+
- Familiarize with the tables and their relationships within the SH sample schema
- Use the DESCRIBE command to examine the details of an SH table
### Prerequisites
-- This lab requires completion of the preceding labs in the Contents menu on the left.
+- This lab requires completion of the preceding labs in the Contents menu on the left.
+
+## Task 1: Connect with SQL Worksheet
+
+Although you can connect to your autonomous database from local PC desktop tools like Oracle SQL Developer, you can conveniently access the browser-based SQL Worksheet directly from your Autonomous Data Warehouse or Autonomous Transaction Processing console.
+
+1. In your database's details page, click the **Database Actions** button.
+
+ ![Click the Database Actions button](./images/click-database-actions-button.png " ")
+
+2. The first time you open SQL Worksheet, a series of pop-up informational boxes introduce you to the main features. Click **Next** to take a tour through the informational boxes.
+
+ ![Click Next to take tour.](./images/click-next-to-take-tour.png " ")
+
+ After touring through the informational boxes, keep this SQL Worksheet open.
-## Task 1: Examine the SH Tables and Their Relationships
+## Task 2: Examine the SH Tables and Their Relationships
A database schema is a collection of metadata that describes the relationship between the data in a database. A schema can be simply described as the "layout" of a database or the blueprint that outlines how data is organized into tables.
@@ -33,7 +49,7 @@ Here is the entity-relationship diagram of the SH schema:
![Entity-relationship diagram of SH schema](./images/sales-history-sh-schema-er-diagram.png " ")
-## Task 2: Use the DESCRIBE Command to Examine the Details of an SH Table
+## Task 3: Use the DESCRIBE Command to Examine the Details of an SH Table
The `DESCRIBE` command provides a description of a specified table or view. The description for tables and views contains the following information:
- Column names
@@ -64,4 +80,4 @@ For more information on the SH schema, see the documentation on [Sample Schemas]
## Acknowledgements
- **Author** - Richard Green, Principal Developer, Database User Assistance
-- **Last Updated By/Date** - Shilpa Sharma, March 2023
+- **Last Updated By/Date** - Katherine Wardhana, November 2023
diff --git a/db-quickstart/db-familiarize-sh/images/click-database-actions-button.png b/db-quickstart/db-familiarize-sh/images/click-database-actions-button.png
new file mode 100644
index 000000000..dcc5839d1
Binary files /dev/null and b/db-quickstart/db-familiarize-sh/images/click-database-actions-button.png differ
diff --git a/db-quickstart/db-connect-sqldev-web/images/click-next-to-take-tour.png b/db-quickstart/db-familiarize-sh/images/click-next-to-take-tour.png
similarity index 100%
rename from db-quickstart/db-connect-sqldev-web/images/click-next-to-take-tour.png
rename to db-quickstart/db-familiarize-sh/images/click-next-to-take-tour.png
diff --git a/db-quickstart/db-provision/db-provision.md b/db-quickstart/db-provision/db-provision.md
index 6a2890037..e2f3f4275 100644
--- a/db-quickstart/db-provision/db-provision.md
+++ b/db-quickstart/db-provision/db-provision.md
@@ -39,7 +39,7 @@ Estimated Lab Time: 10 minutes
5. This console shows the existing databases. If there is a long list of databases, you can filter the list by the state of the databases (available, stopped, terminated, and so on). You can also sort by __Workload Type__. Here, the __Data Warehouse__ workload type is selected.
- ![You can filter the list of databases](./images/compartment.png " ")
+ ![You can filter the list of databases](./images/Compartment.png " ")
## Task 2: Create the Autonomous Database Instance
@@ -104,7 +104,7 @@ Estimated Lab Time: 10 minutes
![Choose the network access type.](./images/choose-network-access.png " ")
-9. Choose a license type. For this lab, choose __License Included__. The two license types are:
+9. Choose a license type. For this lab, keep the license type as __License Included__. The two license types are:
- __Bring Your Own License (BYOL)__ - Select this type when your organization has existing database licenses.
- __License Included__ - Select this type when you want to subscribe to new database software licenses and the database cloud service.
@@ -132,4 +132,4 @@ Click [here](https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-clo
## Acknowledgements
- **Author** - Richard Green, Principal Developer, Database User Assistance
-- **Last Updated By/Date** - Shilpa Sharma, March 2023
+- **Last Updated By/Date** - Katherine Wardhana, November 2023
diff --git a/db-quickstart/db-provision/images/Compartment.png b/db-quickstart/db-provision/images/Compartment.png
index 7e9c8da29..ff12ffdfb 100644
Binary files a/db-quickstart/db-provision/images/Compartment.png and b/db-quickstart/db-provision/images/Compartment.png differ
diff --git a/db-quickstart/db-provision/images/choose-license-type.png b/db-quickstart/db-provision/images/choose-license-type.png
index b1794d819..77684f87a 100644
Binary files a/db-quickstart/db-provision/images/choose-license-type.png and b/db-quickstart/db-provision/images/choose-license-type.png differ
diff --git a/db-quickstart/db-provision/images/choose-workload-type.png b/db-quickstart/db-provision/images/choose-workload-type.png
index fafcea060..bf36b5c13 100644
Binary files a/db-quickstart/db-provision/images/choose-workload-type.png and b/db-quickstart/db-provision/images/choose-workload-type.png differ
diff --git a/db-quickstart/db-provision/images/click-create-autonomous-database.png b/db-quickstart/db-provision/images/click-create-autonomous-database.png
index 56a4de72e..74f6f88ab 100644
Binary files a/db-quickstart/db-provision/images/click-create-autonomous-database.png and b/db-quickstart/db-provision/images/click-create-autonomous-database.png differ
diff --git a/db-quickstart/db-provision/images/compartment-name.png b/db-quickstart/db-provision/images/compartment-name.png
index 688c721ce..4e717c68b 100644
Binary files a/db-quickstart/db-provision/images/compartment-name.png and b/db-quickstart/db-provision/images/compartment-name.png differ
diff --git a/db-quickstart/db-provision/images/configure-db-ecpu.png b/db-quickstart/db-provision/images/configure-db-ecpu.png
index d4ea26139..02f6b99f2 100644
Binary files a/db-quickstart/db-provision/images/configure-db-ecpu.png and b/db-quickstart/db-provision/images/configure-db-ecpu.png differ
diff --git a/db-quickstart/db-provision/images/configure-db.png b/db-quickstart/db-provision/images/configure-db.png
deleted file mode 100644
index cff9c5fcf..000000000
Binary files a/db-quickstart/db-provision/images/configure-db.png and /dev/null differ
diff --git a/db-quickstart/db-provision/images/deployment-type.png b/db-quickstart/db-provision/images/deployment-type.png
index 958a025e6..cdd13305e 100644
Binary files a/db-quickstart/db-provision/images/deployment-type.png and b/db-quickstart/db-provision/images/deployment-type.png differ
diff --git a/db-quickstart/db-provision/images/instance-will-begin-provisioning.png b/db-quickstart/db-provision/images/instance-will-begin-provisioning.png
index f4ca40619..5fe55e7ef 100644
Binary files a/db-quickstart/db-provision/images/instance-will-begin-provisioning.png and b/db-quickstart/db-provision/images/instance-will-begin-provisioning.png differ
diff --git a/db-quickstart/db-query/db-query.md b/db-quickstart/db-query/db-query.md
index f8fb1f810..01abd400a 100644
--- a/db-quickstart/db-query/db-query.md
+++ b/db-quickstart/db-query/db-query.md
@@ -125,4 +125,4 @@ Click [here](https://docs.oracle.com/en/database/oracle/oracle-database/19/cncpt
- **Author** - Rick Green, Principal Developer, Database User Assistance
- **Contributor** - Supriya Ananth
- **Adapted for Cloud by** - Rick Green
-- **Last Updated By/Date** - Shilpa Sharma, March 2023
+- **Last Updated By/Date** - Katherine Wardhana, November 2023
diff --git a/db-quickstart/workshops/freetier/index.html b/db-quickstart/workshops/sandbox/index.html
similarity index 100%
rename from db-quickstart/workshops/freetier/index.html
rename to db-quickstart/workshops/sandbox/index.html
diff --git a/db-quickstart/workshops/livelabs/manifest.json b/db-quickstart/workshops/sandbox/manifest.json
similarity index 77%
rename from db-quickstart/workshops/livelabs/manifest.json
rename to db-quickstart/workshops/sandbox/manifest.json
index a1bd4c6d1..6f8f2c972 100644
--- a/db-quickstart/workshops/livelabs/manifest.json
+++ b/db-quickstart/workshops/sandbox/manifest.json
@@ -20,22 +20,17 @@
"filename": "./../../db-provision/db-provision.md"
},
{
- "title": "Lab 2: Connect to the Database with SQL Worksheet",
- "description": "Connect to the Database with SQL Worksheet",
- "filename": "./../../db-connect-sqldev-web/db-connect-sqldev-web.md"
- },
- {
- "title": "Lab 3: Familiarize with the SH Sample Schema",
+ "title": "Lab 2: Familiarize with the SH Sample Schema using SQL Worksheet",
"description": "Familiarize with the SH Sample Schema",
"filename": "./../../db-familiarize-sh/db-familiarize-sh.md"
},
{
- "title": "Lab 4: Query the SH Sample Schema",
+ "title": "Lab 3: Query the SH Sample Schema",
"description": "Query the SH Sample Schema",
"filename": "./../../db-query/db-query.md"
},
{
- "title": "Lab 5: Create a Schema",
+ "title": "Lab 4: Create a Schema",
"description": "Create a Schema",
"filename": "./../../db-create-schema/db-create-schema.md"
},
diff --git a/db-quickstart/workshops/livelabs/index.html b/db-quickstart/workshops/tenancy/index.html
similarity index 100%
rename from db-quickstart/workshops/livelabs/index.html
rename to db-quickstart/workshops/tenancy/index.html
diff --git a/db-quickstart/workshops/freetier/manifest.json b/db-quickstart/workshops/tenancy/manifest.json
similarity index 76%
rename from db-quickstart/workshops/freetier/manifest.json
rename to db-quickstart/workshops/tenancy/manifest.json
index 6fae8816a..309e677d1 100644
--- a/db-quickstart/workshops/freetier/manifest.json
+++ b/db-quickstart/workshops/tenancy/manifest.json
@@ -18,23 +18,18 @@
"description": "Provision Autonomous Database",
"filename": "./../../db-provision/db-provision.md"
},
- {
- "title": "Lab 2: Connect to the Database with SQL Worksheet",
- "description": "Connect to the Database with SQL Worksheet",
- "filename": "./../../db-connect-sqldev-web/db-connect-sqldev-web.md"
- },
{
- "title": "Lab 3: Familiarize with the SH Sample Schema",
+ "title": "Lab 2: Familiarize with the SH Sample Schema using SQL Worksheet",
"description": "Familiarize with the SH Sample Schema",
"filename": "./../../db-familiarize-sh/db-familiarize-sh.md"
},
{
- "title": "Lab 4: Query the SH Sample Schema",
+ "title": "Lab 3: Query the SH Sample Schema",
"description": "Query the SH Sample Schema",
"filename": "./../../db-query/db-query.md"
},
{
- "title": "Lab 5: Create a Schema",
+ "title": "Lab 4: Create a Schema",
"description": "Create a Schema",
"filename": "./../../db-create-schema/db-create-schema.md"
},
diff --git a/dms-online/introduction/introduction.md b/dms-online/introduction/introduction.md
index 6121c192b..a76101f96 100644
--- a/dms-online/introduction/introduction.md
+++ b/dms-online/introduction/introduction.md
@@ -1,6 +1,6 @@
# Introduction
-The labs in this workshop will walk you through all the steps to get started using Oracle Cloud Infrastructure (OCI) Database Migration (DMS). You will provision a Virtual Cloud Network (VCN), an Oracle Database 19c instance, and an Oracle Autonomous Database (ADB) instance and deploy a GoldenGate instance from marketplace to perform a database migration using DMS.
+The labs in this workshop will walk you through all the steps to get started using Oracle Cloud Infrastructure (OCI) Database Migration (DMS). You will provision a Virtual Cloud Network (VCN), an Oracle Database 19c instance, and an Oracle Autonomous Database (ADB) instance to perform a database migration using DMS.
With DMS we make it quick and easy for you to migrate databases from on-premises, Oracle or third-party cloud into Oracle databases on OCI.
@@ -17,18 +17,18 @@ DMS provides high performance, fully managed approach to migrating databases fro
* **Offline**: The Migration makes a point-in-time copy of the source to the target database. Any changes to the source database during migration are not copied, requiring any applications to stay offline for the duration of the migration.
* **Online**: The Migration makes a point-in-time copy and replicates all subsequent changes from the source to the target database. This allows applications to stay online during the migration and then be switched over from source to target database.
-In the current release of DMS we support Oracle databases located on-premises, in third-party clouds, or on OCI as the source and Oracle Autonomous Database serverless or dedicated as the target database. Below is a table of supported configurations;
+In the current release of DMS we support Oracle databases located on-premises, in third-party clouds, or on OCI as the source. The supported targets are in OCI. Below is a table of supported configurations:
| | |
|--------------------------|-------------------------|
-| Source Databases | Oracle DB 11g, 12c, 18c, 19c ,21c: on-premises, third-party cloud, OCI |
+| Source Databases | Oracle DB 11g, 12c, 18c, 19c, 21c: on-premises, third-party cloud, OCI. |
| Target Databases | ADB serverless and dedicated Co-managed Oracle Base Database (VM, BM) Exadata on Oracle Public Cloud. |
-| Supported Source Environments| Oracle Cloud Infrastructure co-managed databases or on-premises environments Amazon Web Services RDS Oracle Database (both offline and online migrations) Linux-x86-64, IBM AIX (both offline and online modes) Oracle Solaris (offline mode only)|
+| Supported Source Environments | Oracle Cloud Infrastructure co-managed databases or on-premises environments; Amazon Web Services RDS Oracle Database; Linux-x86-64, IBM AIX, Oracle Solaris |
| Migration Modes | Direct Access to Source (VPN or Fast Connect) Indirect Access to Source (Agent on Source Env) | |
| Initial Load (Offline Migration) | Logical Migration using Data Pump to Object Store Data Pump using SQLnet | |
-| Replication (Online Migration) | GoldenGate Marketplace |
+| Replication (Online Migration) | GoldenGate Integrated Service; GoldenGate Marketplace |
-The DMS service runs as a managed cloud service separate from the user's tenancy and resources. The service operates as a multitenant service in a DMS Service Tenancy and communicates with the user's resources using Private Endpoints (PEs). PEs are managed by DMS and are transparent to the user.
+The DMS service runs as a managed cloud service separate from the user's tenancy and resources. The service operates as a multitenant service in a DMS Service Tenancy and communicates with the user's resources using Private Endpoints (PEs). PEs are managed by DMS and are transparent to the user.
![dms topology](images/dms-simplified-topology-2.png =80%x*)
@@ -36,7 +36,7 @@ The DMS service runs as a managed cloud service separate from the user's tenancy
* **DMS Data Plane**: Managed by DMS Control Plane and transparent to the user. The GGS Data Plane manages ongoing migration jobs and communicates with the user's databases and GoldenGate instance using PEs. The DMS data plane does not store any customer data, as data flows through GoldenGate and Data Pump directly within the user's tenancy.
* **Migration**: A Migration contains metadata for migrating one database. It contains information about source, target, and migration methods and is the central object for users to run migrations. After creating a migration, a user can validate the correctness of the environment and then run the migration to perform the copy of database data and schema metadata from source to target.
* **Migration Job**: A Migration Job displays the state or a given Migration execution, either for validation or migration purposes. A job consists of a number of sequential phases, users can opt to wait after a given phase for user input to resume with the following phase.
-* **Registered Database**: A Registered Database represents information about a source or target database, such as connection and authentication credentials. DMS uses the OCI Vault to store credentials. A Registered Database is reusable across multiple Migrations.
+* **Database Connection**: A Database Connection represents information about a source or target database, such as connection details and authentication credentials. DMS uses the OCI Vault to store credentials. A Database Connection is reusable across multiple Migrations.
Estimated Lab Time: 180 minutes -- this estimate is for the entire workshop - it is the sum of the estimates provided for each of the labs included in the workshop.
@@ -49,8 +49,7 @@ In this lab, you will:
* Create a Vault
* Create Databases
* Create an Object Storage Bucket
-* Deploy a GoldenGate marketplace instance
-* Create Registered Databases
+* Create Database Connections
* Create, Validate, and Run a Migration
### Prerequisites
diff --git a/dms-online/prepare-source-and-target-databases/prepare-source-and-target-databases.md b/dms-online/prepare-source-and-target-databases/prepare-source-and-target-databases.md
index 45fcb4980..4923f2f70 100644
--- a/dms-online/prepare-source-and-target-databases/prepare-source-and-target-databases.md
+++ b/dms-online/prepare-source-and-target-databases/prepare-source-and-target-databases.md
@@ -23,10 +23,12 @@ In this lab, you will:
1. Verify that you are user 'opc' in your instance.
-2. Switch from 'opc' user to user 'oracle'.
+2. Switch from the 'opc' user to the 'oracle' user and create a new directory in the user volume. This directory will be used for temporary storage of database export
+files:
```
sudo su - oracle
+ mkdir /u01/app/oracle/dumpdir
```
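+
+    Optionally, you can confirm the directory was created before continuing (a quick sanity check, not part of the original lab steps):
+
+    ```
+    ls -ld /u01/app/oracle/dumpdir
+    ```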
@@ -115,7 +117,7 @@ In this lab, you will:
## Task 2: Prepare SSL Certificates and Grant ACL Privileges
-For your non-ADB source connectivity, you must perform the following steps:
+For your source database connectivity, you must perform the following steps:
1. Create a new directory:
```
@@ -127,7 +129,7 @@ For your non-ADB source connectivity, you must perform the following steps:
2. Download a pre created SSL wallet using the following command:
```
- curl -o walletSSL.zip https://objectstorage.us-ashburn-1.oraclecloud.com/p/jrzh3heRr9SzuC7HtQ5Tno5Qs-Yvj0ZX22WNnoZ9FhTpgn9I9-iQQE7-L1JuIFJZ/n/idgd2rlycmdl/b/SSL_Wallet/o/walletSSL.zip
+ curl -o walletSSL.zip https://objectstorage.us-phoenix-1.oraclecloud.com/p/FSBC_LRRpLxcSuSM6yRjO9u1TDuDy8wuiawEIl8Q_xPYFmvap_tPFdtm_c6TskV_/n/axsdric7bk0y/b/SSL-Wallet-For-No-SSH-Migrations-Setup/o/walletSSL.zip
```
@@ -148,14 +150,14 @@ For your non-ADB source connectivity, you must perform the following steps:
5. Save this path location, you will need it during the migration creation, once there populate the SSL Wallet Path with it:
- i.e: /u01/app/oracle/dumpdir/wallet/opt/oracle/dcs/commonstore/wallets/newssl
+ i.e: /u01/app/oracle/dumpdir/wallet
6. The user performing the export or import requires the necessary network ACL to be granted to access the network from the source and target database host. For this guide, run the following commands as SYS if the export or import user is SYSTEM. Since your database is multitenant, the following actions need to be performed in CDB$ROOT. Replace clouduser and sslwalletdir accordingly:
```
define clouduser='system';/*user performing export at source or import at target*/
-define sslwalletdir='/u01/app/oracle/dumpdir/wallet/opt/oracle/dcs/commonstore/wallets/newssl';/* OCI wallet path*/
+define sslwalletdir='/u01/app/oracle/dumpdir/wallet'; /* OCI wallet path*/
BEGIN
dbms_network_acl_admin.append_host_ace(host => '*', lower_port => 443, upper_port => 443, ace => xs$ace_type(privilege_list => xs$name_list(
'http', 'http_proxy'), principal_name => upper('&clouduser'), principal_type => xs_acl.ptype_db));
@@ -256,7 +258,7 @@ You should see a similar output to the following:
```
-4. After connecting to your container database create the user 'HR01'. Write down or save the password as you will need it later.
+4. After connecting to your pluggable database create the user 'HR01'. Write down or save the password as you will need it later.
```
CREATE USER HR01 IDENTIFIED BY ;
@@ -342,7 +344,7 @@ You should see a similar output to the following:
```
Your source DB now has a user HR01 with a table EMPL that has 1000 rows.
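+
+As an optional sanity check (assuming the inserts above completed), you can verify the row count from the same session; it should report 1000 rows:
+
+```
+SELECT COUNT(*) FROM HR01.EMPL;
+```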
-5. This table is to demonstrate the Cloud Pre Migration advisor (CPAT) functionality during Validation on Lab 8.
+5. This table demonstrates the Cloud Premigration Advisor Tool (CPAT) functionality during the **Validate and Run Migration** lab.
```
CREATE TABLE image_table2 ( id NUMBER, image ORDImage ) LOB(image.source.localData) STORE AS SECUREFILE;
@@ -386,55 +388,31 @@ To perform the migration, DMS will require several passwords, for simplicity, le
## Task 6: Enable ggadmin user on target database
-The next steps will connect to the target ADB instance and enable the standard ggadmin user. You can skip these steps if the user is already enabled.
-The connection will be thru the Oracle GoldenGate instance using sqlplus.
+At this point it is assumed that you are connected to your source database. The next steps will connect to the target ADB instance and enable the standard ggadmin user. You can skip these steps if the user is already enabled.
-Make sure the Autonomous Database regional wallet has been placed in /u02/deployments/Marketplace/etc/adb. If not, you can download the zip file from OCI Console and unzip it there.
-Modify sqlnet.ora so it correctly has the wallet location (needed if connecting with sqlplus):
+Make sure that your Autonomous Database mTLS authentication option is set to 'Not required'. You can check this on the Autonomous Database details page.
-1. Enter the following commands:
+Go to the Database connection / Connection settings section, select TLS from the TLS authentication list of values, and then copy the connection string for one of the TNS names.
- ```
-
- cat sqlnet.ora
- WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/u02/deployments/Marketplace/etc/adb"))) SSL_SERVER_DN_MATCH=yes
+Connect to the target using SQL Plus:
-
- ```
-2. You need to set the following Export variables:
- ```
-
- EXPORT ORACLE_HOME="/U01/APP/OGG/LIB/INSTANTCLIENT"
-
- ```
- ```
-
- EXPORT LD_LIBRARY_PATH="$ORACLE_HOME"
-
- ```
- ```
-
- EXPORT PATH="$ORACLE_HOME:$PATH"
-
- ```
- ```
-
- EXPORT TNS_ADMIN="/U02/DEPLOYMENTS/MARKETPLACE/ETC/ADB"
-
- ```
- ```
+1. Enter the following commands:
+
+ ```
- $ORACLE_HOME/SQLPLUS ADMIN/ @ ADW_name
+ sqlplus admin/ @ ATP connection string
```
+
2. In SQL Plus enter the following commands:
- ```
+ ```
alter user ggadmin identified by account unlock;
```
+
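+    Optionally, you can confirm the unlock took effect (this check assumes your admin user can query DBA_USERS):
+
+    ```
+    SELECT username, account_status FROM dba_users WHERE username = 'GGADMIN';
+    ```
+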
3. Exit SQL.
```
diff --git a/dms-online/register-and-migrate-terraform/create-registered-databases-terraform.md b/dms-online/register-and-migrate-terraform/create-registered-databases-terraform.md
index 5c80d796a..35730a465 100644
--- a/dms-online/register-and-migrate-terraform/create-registered-databases-terraform.md
+++ b/dms-online/register-and-migrate-terraform/create-registered-databases-terraform.md
@@ -2,7 +2,7 @@
## Introduction
-This lab walks you through the steps to register a database for use with DMS. Registered database resources enable networking and connectivity for the source and target databases
+This lab walks you through the steps to create a database connection to use with DMS. Database connection resources enable networking and connectivity for the source and target databases.
Estimated Lab Time: 20 minutes
@@ -12,9 +12,9 @@ Watch the video below for a quick walk-through of the lab.
### Objectives
In this lab, you will:
-* Create Registered Database for Source CDB
-* Create Registered Database for Source PDB
-* Create Registered Database for Target ADB
+* Create a Database Connection for Source CDB
+* Create a Database Connection for Source PDB
+* Create a Database Connection for Target ADB
* Create a Migration
### Prerequisites
@@ -26,37 +26,20 @@ In this lab, you will:
* Source DB PDB Service Name available in Terraform output
* Database Administrator Password available in Terraform output
-## Task 1: Download generated private key from Object Storage
-
-In this task you need to download a private key file to your local machine to be used to register databases in this lab. Please be advised that this private key is different from any keys you have provided to LiveLabs, it has been generated specifically for you to access the database and GoldenGate environments provided by the lab.
-
-1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Storage > Object Storage & Archive Storage > Buckets**
-
-![Screenshot of Object Storage navigation](images/buckets-navigation.png =90%x*)
-
-2. If you see an error message or are not yet in the compartment assigned to you by LiveLabs, please change to the correct compartment in the left hand compartment menu. The compartment will be **(root) > Livelabs > LL#####-COMPARTMENT**, with ##### being your user number
-
-3. Select the bucket named **DMSStorage-#####** with ##### being the number of your user.
-![Screenshot of buckets list](images/buckets-list.png =90%x*)
-
-4. In the Objects list of bucket **DMSStorage-#####**, there is a file named **privatekey.txt**. Click on the right-hand context menu on the row and select **Download**. You can locate the file in the download folder of your browser.
-![Screenshot of private key file download](images/buckets-download.png =90%x*)
-
-## Task 2: Create Registered Database for Source CDB
+## Task 1: Create Database Connection for Source CDB
For this task you need the following info from previous steps:
* Source DB Public IP
* Source DB CDB Service Name
* Database Administrator Password
-* Private Key File
-1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration > Registered Databases**
+1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![Screenshot of Registered Databases navigation](images/registered-db.png =90%x*)
+ ![Screenshot of Database Connections navigation](images/db-connection.png =90%x*)
-2. Press **Register Database**
+2. Press **Create connection**
- ![Screenshot of click register db](images/click-register-db.png =90%x*)
+ ![Screenshot of click create connection](images/create-connection.png =90%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourceCDB**
@@ -72,35 +55,33 @@ For this task you need the following info from previous steps:
4. Press **Next**
- ![Screenshot of register DB details and click next](images/register-db-next.png =50%x*)
+ ![Screenshot of database details and click next](images/database-details-cdb.png =50%x*)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **system**
- - Database Administrator Password: Select **Admin Password** value from Terraform output
- - SSH Database Server Hostname: Select **DBCS Public IP** value from Terraform output
- - SSH Private Key: Select private key file saved earlier
- - SSH Username: **opc**
- - SSH Sudo Location: **/usr/bin/sudo**
+ - Initial load database username: **system**
+ - Initial load database password: Select **Admin Password** value from Terraform output
+ - Select **Use different credentials for replication**
+ - Replication database username: **c##ggadmin**
+ - Replication database password: Select **Admin Password** value from Terraform output
-6. Press **Register**
+6. Press **Create**
- ![Screenshot of confirm register DB](images/register-db-confirm.png =50%x*)
+ ![Screenshot of confirm create connection](images/connection-details-cdb.png =50%x*)
-## Task 3: Create Registered Database for Source PDB
+## Task 2: Create Database Connection for Source PDB
For this task you need the following info from previous steps:
* Source DB Public IP
* Source DB PDB Service Name
* Database Administrator Password
-* Private Key File
-1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration > Registered Databases**
+1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![Screenshot of Registered Databases](images/registered-db.png =90%x*)
+ ![Screenshot of Database Connections navigation](images/db-connection.png =90%x*)
-2. Press **Register Database**
+2. Press **Create connection**
- ![Screenshot of click register db](images/click-register-db.png =90%x*)
+ ![Screenshot of click create connection](images/create-connection.png =90%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourcePDB**
@@ -117,32 +98,31 @@ For this task you need the following info from previous steps:
4. Press **Next**
- ![Screenshot of register db](images/register-db-next-second.png =50%x*)
+ ![Screenshot of database details and click next](images/database-details-pdb.png =50%x*)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **system**
- - Database Administrator Password: Select **Admin Password** value from Terraform output
- - SSH Database Server Hostname: Select **DBCS Public IP** value from Terraform output
- - SSH Private Key: Select private key file saved earlier
- - SSH Username: **opc**
- - SSH Sudo Location: **/usr/bin/sudo**
+ - Initial load database username: **system**
+ - Initial load database password: Select **Admin Password** value from Terraform output
+ - Select **Use different credentials for replication**
+ - Replication database username: **ggadmin**
+ - Replication database password: Select **Admin Password** value from Terraform output
-6. Press **Register**
+6. Press **Create**
- ![Screenshot of confirm register DB](images/register-db-confirm.png =50%x*)
+ ![Screenshot of confirm create connection](images/connection-details-pdb.png =50%x*)
-## Task 4: Create Registered Database for Target ADB
+## Task 3: Create Database Connection for Target ADB
For this task you need the following info from previous steps:
* Administrator Password
-1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration > Registered Databases**
+1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![Screenshot of Registered Databases](images/registered-db.png =90%x*)
+ ![Screenshot of Database Connections navigation](images/db-connection.png =90%x*)
-2. Press **Register Database**
+2. Press **Create connection**
- ![Screenshot of click register db](images/click-register-db.png =90%x*)
+ ![Screenshot of click create connection](images/create-connection.png =90%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **TargetADB**
@@ -153,24 +133,27 @@ For this task you need the following info from previous steps:
4. Press **Next**
- ![Screenshot of press next after entering details](images/register-adb-1.png =50%x*)
+ ![Screenshot of press next after entering details](images/db-connection-adb.png =50%x*)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **admin**
- - Database Administrator Password: Select **Admin Password** value from Terraform output
+ - Initial load database username: **admin**
+ - Initial load database password: Select **Admin Password** value from Terraform output
+ - Select **Use different credentials for replication**
+ - Replication database username: **ggadmin**
+ - Replication database password: Select **Admin Password** value from Terraform output
-6. Press **Register**
+6. Press **Create**
- ![Screenshot of confirm db registration](images/confirm-db-registration.png =50%x*)
+ ![Screenshot of confirm db connection](images/confirm-db-connection-adb.png =50%x*)
- Please wait for all Database Registration resources to display as **Active** before proceeding to the next task.
+ Please wait for all Database Connection resources to display as **Active** before proceeding to the next task.
-## Task 5: Create Migration
+## Task 4: Create Migration
- 1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration > Database Migration > Migrations**
+ 1. In the OCI Console Menu ![](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![Screenshot of migration navigation](images/migrations-navigation.png =90%x*)
+ ![Screenshot of Migrations navigation](images/migration-nav.png =90%x*)
2. Press **Create Migration**
@@ -195,38 +178,18 @@ For this task you need the following info from previous steps:
6. On the page **Migration Options**, fill in the following entries, otherwise leave defaults:
- In **Initial Load**, select **Datapump via Object Storage**
- - Object Storage Bucket: **DMSStorage-#####**
- Export Directory Object:
- Name: **dumpdir**
- Path: **/u01/app/oracle/dumpdir**
-
- ![Screenshot for migration options](images/test-migration.png =50%x*)
-
-
- 7. Check **Use Online Replication**
- - GoldenGate Hub URL: **OGG Hub URL IP** value from Terraform output
- - GoldenGate Administrator Username: **oggadmin**
- - GoldenGate Administrator Password: **Admin Password** value from Terraform output
-
- ![Online replication check](images/online-goldengate.png =50%x*)
-
- - Source database:
- - GoldenGate deployment name: **Marketplace**
- - Database Username: **ggadmin**
- - Database Password: **Admin Password** value from Terraform output
- - Container Database Username: **c##ggadmin**
- - Container Database Password: **Admin Password** value from Terraform output
+ - Source Database file system SSL wallet path: **/u01/app/oracle/myserverwallet**
+ - Object Storage Bucket: **DMSStorage-#####**
+ - Select **Use online replication**
- ![Source database details](images/online-source-database.png =50%x*)
-
- - Target database:
- - GoldenGate Deployment Name: **Marketplace**
- - Database Username: **ggadmin**
- - Database Password: **Admin Password** value from Terraform output
+
+ ![Screenshot for migration options](images/test-migration.png =50%x*)
-
- ![Target database details](images/online-target-database-ggocid.png =50%x*)
+
- Press Create to initiate the Migration creation
@@ -241,4 +204,4 @@ You may now [proceed to the next lab](#next).
## Acknowledgments
* **Author** - Alex Kotopoulis, Director, Product Management
* **Contributors** - Kiana McDaniel, Hanna Rakhsha, Killian Lynch, Solution Engineers, Austin Specialist Hub
-* **Last Updated By/Date** - Jorge Martinez, Product Manager, July 2022
+* **Last Updated By/Date** - Jorge Martinez, Product Manager, October 2023
diff --git a/dms-online/register-and-migrate-terraform/images/add-details.png b/dms-online/register-and-migrate-terraform/images/add-details.png
index 9e327ee9d..0a7c3dbc8 100644
Binary files a/dms-online/register-and-migrate-terraform/images/add-details.png and b/dms-online/register-and-migrate-terraform/images/add-details.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/confirm-db-connection-adb.png b/dms-online/register-and-migrate-terraform/images/confirm-db-connection-adb.png
new file mode 100644
index 000000000..4b6cd667b
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/confirm-db-connection-adb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/connection-details-cdb.png b/dms-online/register-and-migrate-terraform/images/connection-details-cdb.png
new file mode 100644
index 000000000..415ced408
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/connection-details-cdb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/connection-details-pdb.png b/dms-online/register-and-migrate-terraform/images/connection-details-pdb.png
new file mode 100644
index 000000000..513c74d7c
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/connection-details-pdb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/create-connection.png b/dms-online/register-and-migrate-terraform/images/create-connection.png
new file mode 100644
index 000000000..440a10d2f
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/create-connection.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/database-details-cdb.png b/dms-online/register-and-migrate-terraform/images/database-details-cdb.png
new file mode 100644
index 000000000..33426660a
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/database-details-cdb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/database-details-pdb.png b/dms-online/register-and-migrate-terraform/images/database-details-pdb.png
new file mode 100644
index 000000000..bbfd83e09
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/database-details-pdb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/db-connection-adb.png b/dms-online/register-and-migrate-terraform/images/db-connection-adb.png
new file mode 100644
index 000000000..698fadbda
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/db-connection-adb.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/db-connection.png b/dms-online/register-and-migrate-terraform/images/db-connection.png
new file mode 100644
index 000000000..a69c0acb9
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/db-connection.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/migration-nav.png b/dms-online/register-and-migrate-terraform/images/migration-nav.png
new file mode 100644
index 000000000..bc2dc62a8
Binary files /dev/null and b/dms-online/register-and-migrate-terraform/images/migration-nav.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/press-create-migration.png b/dms-online/register-and-migrate-terraform/images/press-create-migration.png
index c98391068..7fcf343d8 100644
Binary files a/dms-online/register-and-migrate-terraform/images/press-create-migration.png and b/dms-online/register-and-migrate-terraform/images/press-create-migration.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/select-databases.png b/dms-online/register-and-migrate-terraform/images/select-databases.png
index c44c84cf1..b26479c41 100644
Binary files a/dms-online/register-and-migrate-terraform/images/select-databases.png and b/dms-online/register-and-migrate-terraform/images/select-databases.png differ
diff --git a/dms-online/register-and-migrate-terraform/images/test-migration.png b/dms-online/register-and-migrate-terraform/images/test-migration.png
index 17023ef01..89ba84573 100644
Binary files a/dms-online/register-and-migrate-terraform/images/test-migration.png and b/dms-online/register-and-migrate-terraform/images/test-migration.png differ
diff --git a/dms-online/register-and-migrate/create-registered-databases.md b/dms-online/register-and-migrate/create-registered-databases.md
index 6937cd37f..d01139e60 100644
--- a/dms-online/register-and-migrate/create-registered-databases.md
+++ b/dms-online/register-and-migrate/create-registered-databases.md
@@ -12,7 +12,7 @@ In this lab, you will:
* Create a database connection for Source CDB
* Create a database connection for Source PDB
* Create a database connection for Target ADB
-* Create a Migration
+* Create an online Migration
### Prerequisites
@@ -34,11 +34,11 @@ For this task you need the following info from previous steps:
1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/db-connection.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
2. Press **Create Connection**
- ![Screenshot of click register db](images/click-create-db.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourceCDB**
@@ -58,10 +58,13 @@ For this task you need the following info from previous steps:
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- Initial load database username: **system**
- Initial load database password: <*Admin password*>
+ - Check **Use different credentials for replication**
+ - Replication database username: **c##ggadmin**
+ - Replication database password: <*Admin password*>
6. Press **Create**
- ![Screenshot of confirm register DB](images/create-db-confirm.png =50%x*)
+ ![Screenshot of confirm register DB](images/create-db-confirm.png =40%x*)
## Task 2: Create Database Connection for Source PDB
@@ -72,11 +75,11 @@ For this task you need the following info from previous steps:
1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/db-connection.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
2. Press **Create connection**
- ![Screenshot of click register db](images/click-create-db.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourcePDB**
@@ -96,11 +99,14 @@ For this task you need the following info from previous steps:
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- Initial load database username: **system**
- Initial load database password: <*Admin password*>
+ - Check **Use different credentials for replication**
+ - Replication database username: **ggadmin**
+ - Replication database password: <*Admin password*>
6. Press **Create**
- ![Screenshot of confirm register DB](images/create-db-confirm.png =50%x*)
+ ![Screenshot of confirm register DB](images/create-db-confirm-pdb.png =40%x*)
## Task 3: Create Database Connection for Target ADB
@@ -109,11 +115,11 @@ For this task you need the following info from previous steps:
1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/db-connection.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
2. Press **Create connection**
- ![Screenshot of click register db](images/click-create-db.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **TargetATP**
@@ -129,6 +135,9 @@ For this task you need the following info from previous steps:
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- Initial load database username: **admin**
- Initial load database password: <*Admin password*>
+    - Check **Use different credentials for replication**
+ - Replication database username: **ggadmin**
+ - Replication database password: <*Admin password*>
6. Press **Create**
@@ -139,18 +148,18 @@ For this task you need the following info from previous steps:
1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![create migration navigation](images/migration-create.png =90%x*)
+ ![create migration navigation](images/migration-create.png =50%x*)
2. Press **Create Migration**
- ![Screenshot of press create migration](images/press-create-migration.png =90%x*)
+ ![Screenshot of press create migration](images/press-create-migration.png =50%x*)
3. On the page **Add Details**, fill in the following entries, otherwise leave defaults:
- Name: **TestMigration**
- Vault: **DMS_Vault**
- Encryption Key: **DMS_Key**
- ![Screenshot to add vault details](images/add-details.png =40%x*)
+ ![Screenshot to add vault details](images/add-details.png =50%x*)
4. Press **Next**
@@ -160,7 +169,7 @@ For this task you need the following info from previous steps:
- Container Database connection: **SourceCDB**
- Target Database: **TargetATP**
- ![Screenshot of source db selection](images/select-databases.png =40%x*)
+ ![Screenshot of source db selection](images/select-databases.png =50%x*)
6. On the page **Migration Options**, fill in the following entries, otherwise leave defaults:
- In **Initial Load**, select **Datapump via Object Storage**
@@ -168,40 +177,15 @@ For this task you need the following info from previous steps:
- Export Directory Object:
- Name: **dumpdir**
- Path: **/u01/app/oracle/dumpdir**
- - Source Database file system SSL wallet path:
- - **/u01/app/oracle/dumpdir/wallet/opt/oracle/dcs/commonstore/wallets/newssl**
-
- ![Screenshot for migration options](images/test-migration-1.png =60%x*)
-
-
- 7. Check **Use Online Replication**
- - GoldenGate Hub URL: **https://(goldengate public IP)**
- - GoldenGate Administrator Username: **oggadmin**
- - GoldenGate Administrator Password: **(As previously selected)**
+    - Source Database file system SSL wallet path (the required certificates were downloaded manually in a previous lab):
+ - **/u01/app/oracle/dumpdir/wallet**
+
+ - Check **Use Online Replication**
- ![Online replication check](images/online-goldengate.png =50%x*)
-
- - Source database:
- - GoldenGate deployment name: **Marketplace**
- - Database Username: **ggadmin**
- - Database Password: **(As previously selected)**
- - Container Database Username: **c##ggadmin**
- - Container Database Password: **(As previously selected)**
-
- ![Source database details](images/online-source-database.png =50%x*)
+    - Press **Create** to initiate the Migration creation
- - Target database:
- - GoldenGate Deployment Name: **Marketplace**
- - Database Username: **ggadmin**
- - Database Password: **(As previously selected)**
- - Press Show Advanced Options
- - Press Replication tab
- - GoldenGate Instance OCID: **(OCID as copied from GoldenGate compute instance)** (This field is optional; if OCID is given, validation will check for GoldenGate space requirements)
-
-
- ![Target database details](images/online-target-database-ggocid.png =50%x*)
-
- - Press Create to initiate the Migration creation
+ ![Screenshot for migration options](images/test-migration-1.png =50%x*)
+
You may now [proceed to the next lab](#next).
diff --git a/dms-online/register-and-migrate/images/confirm-target-connection.png b/dms-online/register-and-migrate/images/confirm-target-connection.png
index 7e4cc4b33..c7949df1a 100644
Binary files a/dms-online/register-and-migrate/images/confirm-target-connection.png and b/dms-online/register-and-migrate/images/confirm-target-connection.png differ
diff --git a/dms-online/register-and-migrate/images/create-db-confirm-pdb.png b/dms-online/register-and-migrate/images/create-db-confirm-pdb.png
new file mode 100644
index 000000000..0b9870a6c
Binary files /dev/null and b/dms-online/register-and-migrate/images/create-db-confirm-pdb.png differ
diff --git a/dms-online/register-and-migrate/images/create-db-confirm.png b/dms-online/register-and-migrate/images/create-db-confirm.png
index e870d2999..8f5257356 100644
Binary files a/dms-online/register-and-migrate/images/create-db-confirm.png and b/dms-online/register-and-migrate/images/create-db-confirm.png differ
diff --git a/dms-online/register-and-migrate/images/test-migration-1.png b/dms-online/register-and-migrate/images/test-migration-1.png
index 8ed3f8a4e..ff5ad3808 100644
Binary files a/dms-online/register-and-migrate/images/test-migration-1.png and b/dms-online/register-and-migrate/images/test-migration-1.png differ
diff --git a/dms-online/validate-and-run/images/cleanup-completed.png b/dms-online/validate-and-run/images/cleanup-completed.png
index f267483a8..326d2e4aa 100644
Binary files a/dms-online/validate-and-run/images/cleanup-completed.png and b/dms-online/validate-and-run/images/cleanup-completed.png differ
diff --git a/dms-online/validate-and-run/images/click-phases.png b/dms-online/validate-and-run/images/click-phases.png
index 4a1da7df0..f086b104c 100644
Binary files a/dms-online/validate-and-run/images/click-phases.png and b/dms-online/validate-and-run/images/click-phases.png differ
diff --git a/dms-online/validate-and-run/images/monitor-lag-waiting.png b/dms-online/validate-and-run/images/monitor-lag-waiting.png
index 36008095b..98c33e90d 100644
Binary files a/dms-online/validate-and-run/images/monitor-lag-waiting.png and b/dms-online/validate-and-run/images/monitor-lag-waiting.png differ
diff --git a/dms-online/validate-and-run/images/monitor-replication-lag.png b/dms-online/validate-and-run/images/monitor-replication-lag.png
index 4543559c7..db26e9e1e 100644
Binary files a/dms-online/validate-and-run/images/monitor-replication-lag.png and b/dms-online/validate-and-run/images/monitor-replication-lag.png differ
diff --git a/dms-online/validate-and-run/images/press-validate.png b/dms-online/validate-and-run/images/press-validate.png
index 54d4f6e6e..60d41dfcd 100644
Binary files a/dms-online/validate-and-run/images/press-validate.png and b/dms-online/validate-and-run/images/press-validate.png differ
diff --git a/dms-online/validate-and-run/images/resume-job-switchover.png b/dms-online/validate-and-run/images/resume-job-switchover.png
index f0fd60757..08bf6ec76 100644
Binary files a/dms-online/validate-and-run/images/resume-job-switchover.png and b/dms-online/validate-and-run/images/resume-job-switchover.png differ
diff --git a/dms-online/validate-and-run/images/select-testmigration.png b/dms-online/validate-and-run/images/select-testmigration.png
index eea56ede5..ced28c32a 100644
Binary files a/dms-online/validate-and-run/images/select-testmigration.png and b/dms-online/validate-and-run/images/select-testmigration.png differ
diff --git a/dms-online/validate-and-run/images/succeeded.png b/dms-online/validate-and-run/images/succeeded.png
index e2e548e5b..8c5b7b065 100644
Binary files a/dms-online/validate-and-run/images/succeeded.png and b/dms-online/validate-and-run/images/succeeded.png differ
diff --git a/dms-online/validate-and-run/validate-migration.md b/dms-online/validate-and-run/validate-migration.md
index 868c6b2ab..447d3b741 100644
--- a/dms-online/validate-and-run/validate-migration.md
+++ b/dms-online/validate-and-run/validate-migration.md
@@ -22,17 +22,17 @@ In this lab, you will:
1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![create migration navigation](images/migration-create.png =90%x*)
+ ![create migration navigation](images/migration-create.png =50%x*)
2. Select **TestMigration**
- ![Screenshot of select testmigration](images/select-testmigration.png =90%x*)
+ ![Screenshot of select testmigration](images/select-testmigration.png =50%x*)
3. If the Migration is still being created, wait until the Lifecycle State is Active
4. Press **Validate** button
- ![Screenshot of press validate](images/press-validate.png =90%x*)
+ ![Screenshot of press validate](images/press-validate.png =50%x*)
5. Click on **Jobs** in left-hand **Resources** list
@@ -42,7 +42,7 @@ In this lab, you will:
7. Click on **Phases** in the left-hand **Resources** list
- ![Screnshot of click on phases](images/click-phases.png =20%x*)
+    ![Screenshot of click on phases](images/click-phases.png =17%x*)
8. Phases will be shown, and status will be updated as phases are completed. It can take 2 minutes before the first phase is shown.
@@ -66,15 +66,15 @@ In this lab, you will:
1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![create migration navigation](images/migration-create.png =90%x*)
+ ![create migration navigation](images/migration-create.png =50%x*)
2. Select **TestMigration**
- ![Screenshot of select testmigration](images/select-testmigration.png =90%x*)
+ ![Screenshot of select testmigration](images/select-testmigration.png =50%x*)
3. Press **Start** to begin the Migration. The Start Migration dialog is shown. Select the default phase: **Monitor replication lag**. This will cause the replication to run continuously until the Migration is resumed.
- ![Screenshot of start migration](images/monitor-replication-lag.png =90%x*)
+ ![Screenshot of start migration](images/monitor-replication-lag.png =50%x*)
4. Click on **Jobs** in the left-hand **Resources** list
@@ -86,9 +86,10 @@ In this lab, you will:
8. Wait until the **Monitor replication lag** phase completes.
- ![Screenshot of completed phases](images/monitor-lag-waiting.png =90%x*)
+ ![Screenshot of completed phases](images/monitor-lag-waiting.png =50%x*)
9. Now data replication is in progress. **If you want to test the replication, please continue; otherwise you can jump to step 11**.
+
Go back to your source database and execute the following script:
```
@@ -124,29 +125,16 @@ In this lab, you will:
```
This will insert 1007 records into the source database, simulating new transactions that GoldenGate will identify and replicate to the target database.
- 10. Lets review how this is identified in GoldenGate. Log in to the Oracle GoldenGate Service Manager homepage using the GoldenGate Hub Public ip : **https://__ogg\_public\_ip__** (replace the __ogg\_public\_ip__ value with the value saved from previous steps). The browser will show warnings that the page is insecure because it uses a self-signed certificate. Ignore those warnings and proceed. Oracle GoldenGate Service Manager opens. Click on port 9011 to log in to the Source – Administration Server. Use the same credential as Service Manager.
-
- ![Screenshot of Oracle GoldenGate Services Manager Login Menu](./images/gg-migration-manager.png " ")
-
- 11. Use the same credentials as in Service Manager. Click on the available extract and navigate to **Statistics** tab:
-
- ![Screenshot of Oracle GoldenGate Services Manager Extracts](./images/extracts.png " ")
-
- 12. Observe the 1007 inserts we performed on the source database in the previous step:
- ![Screenshot of Oracle GoldenGate table statistics Extract](./images/table-statistics.png " ")
+ 10. Connect to your target ADB and verify that the new records appear in the EMPL table.
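As a quick sketch of that check (the EMPL table comes from the data-load script run earlier in the workshop; adjust the schema prefix to wherever the rows were imported), a count query from the ADB SQL worksheet before and after the insert shows the 1007-row delta once replication catches up:

```
-- Count the replicated rows; after the 1007-row insert on the source,
-- the total should grow by 1007 once GoldenGate applies the changes.
SELECT COUNT(*) AS total_rows
  FROM empl;
```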
- 13. Navigate back to Overview Tab and click on the existing replicat and navigate to **Statistics** tab:
- ![Screenshot of Oracle GoldenGate click on replicat](./images/click-on-target-replicats.png " ")
- 14. Observe the 1007 inserts we performed on the source database in the previous step, and how they were replicated to the target:
- ![Screenshot of Oracle GoldenGate table statistics Extract](./images/target-statistics.png " ")
- 15. This is the point where a migration user would stop the source application so that no more transactions are applied to the source DB. You can now press **Resume** on the job to complete replication. In the Resume Job dialog, chose the **Switchover App** phase and press **Resume**. The Switchover App phase will gracefully stop replication and signal the target application to initiate transactions to the target DB.
+ 11. This is the point where a migration user would stop the source application so that no more transactions are applied to the source DB. You can now press **Resume** on the job to complete replication. In the Resume Job dialog, choose the **Switchover** phase and press **Resume**. The Switchover phase will gracefully stop replication and signal the target application to initiate transactions to the target DB.
![Screenshot of resume job switchover](./images/resume-job-switchover.png " ")
-16. After Job resumes and waits after Switchover App phase, press Resume. Select the last phase Cleanup and press Resume:
-![Screenshot of resume job cleanup](./images/resume-job-cleanup.png " ")
+12. After the job resumes and pauses at the Switchover phase, press **Resume**. Select the last phase, **Cleanup**, and press **Resume**:
+![Screenshot of resume job cleanup](./images/resume-job-cleanup.png =50%x*)
-17. The migration runs the final cleanup phases and shows as Succeeded when finished:
-![Screenshot of resume job cleanup completed](./images/cleanup-completed.png " ")
+13. The migration runs the final cleanup phases and shows as Succeeded when finished:
+![Screenshot of resume job cleanup completed](./images/cleanup-completed.png =50%x*)
![Screenshot of succeeded Migration](./images/succeeded.png " ")
## Learn More
diff --git a/dms-online/workshops/freetier/manifest.json b/dms-online/workshops/freetier/manifest.json
index e77090be7..f17809811 100644
--- a/dms-online/workshops/freetier/manifest.json
+++ b/dms-online/workshops/freetier/manifest.json
@@ -34,20 +34,16 @@
"filename": "../../create-target-database/create-target-database.md"
},
- {
- "title": "Lab 5: Configure GoldenGate hub ",
- "filename": "../../goldengate-hub/goldengate-hub.md"
- },
- {
- "title": "Lab 6: Prepare source and target databases ",
+ {
+ "title": "Lab 5: Prepare source and target databases ",
"filename": "../../prepare-source-and-target-databases/prepare-source-and-target-databases.md"
},
{
- "title": "Lab 7: Register and Setup Migration",
+ "title": "Lab 6: Register and Setup Migration",
"filename": "../../register-and-migrate/create-registered-databases.md"
},
{
- "title": "Lab 8: Validate and Run Migration",
+ "title": "Lab 7: Validate and Run Migration",
"filename": "../../validate-and-run/validate-migration.md"
},
{
diff --git a/dms-online/workshops/livelabs/manifest.json b/dms-online/workshops/livelabs/manifest.json
index 8ad41ac43..877a81649 100644
--- a/dms-online/workshops/livelabs/manifest.json
+++ b/dms-online/workshops/livelabs/manifest.json
@@ -33,20 +33,17 @@
"title": "Lab 4: Create Target Database",
"filename": "../../create-target-database/create-target-database.md"
},
+
{
- "title": "Lab 5: Configure GoldenGate hub ",
- "filename": "../../goldengate-hub/goldengate-hub.md"
- },
- {
- "title": "Lab 6: Prepare source and target databases ",
+ "title": "Lab 5: Prepare source and target databases ",
"filename": "../../prepare-source-and-target-databases/prepare-source-and-target-databases.md"
},
{
- "title": "Lab 7: Database Connection and Setup Migration",
+ "title": "Lab 6: Database Connection and Setup Migration",
"filename": "../../register-and-migrate/create-registered-databases.md"
},
{
- "title": "Lab 8: Validate and Run Migration",
+ "title": "Lab 7: Validate and Run Migration",
"filename": "../../validate-and-run/validate-migration.md"
},
{
diff --git a/dms/create-source-database/create-source-database.md b/dms/create-source-database/create-source-database.md
index 7feadcbc7..c54cf9fa0 100644
--- a/dms/create-source-database/create-source-database.md
+++ b/dms/create-source-database/create-source-database.md
@@ -100,7 +100,81 @@ The following task is *optional* if a source database is already present.
![Note the Public IP Address and Private IP Address ](images/source-db-ip-addresses.png)
-## Task 3: Adding Data to the Database
+## Task 3: Prepare SSL Certificates and Grant ACL Privileges
+
+To enable source database connectivity, you must perform the following steps:
+
+1. Open an SSH terminal to the database instance. The instructions use the Unix-style ssh command:
+
+```
+ ssh -i <private_key_file> opc@<database_public_ip>
+```
+
+2. Switch from the 'opc' user to the 'oracle' user and create a new directory in the user volume; this directory will be used to store the SSL certificates:
+```
+ sudo su - oracle
+ mkdir /u01/app/oracle/dumpdir/wallet
+```
+3. Download a pre-created SSL wallet using the following command:
+```
+ curl -o walletSSL.zip https://objectstorage.us-phoenix-1.oraclecloud.com/p/FSBC_LRRpLxcSuSM6yRjO9u1TDuDy8wuiawEIl8Q_xPYFmvap_tPFdtm_c6TskV_/n/axsdric7bk0y/b/SSL-Wallet-For-No-SSH-Migrations-Setup/o/walletSSL.zip
+```
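Before unzipping, a quick integrity test can save a retry later. A minimal sketch, assuming the `unzip` tool is available on the host (the `verify_zip` helper name is ours, not part of the lab):

```shell
# Hypothetical helper: test a downloaded archive's integrity without extracting it.
# 'unzip -t' reads and checks every member; it exits non-zero for a corrupt or missing file.
verify_zip() {
  unzip -t "$1" > /dev/null 2>&1
}

# On the database host you would run, for example:
#   verify_zip walletSSL.zip && unzip walletSSL.zip
```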
+4. Unzip the files:
+```
+ unzip walletSSL.zip
+```
+5. Make sure these files are present in your desired directory path:
+
+    1. ewallet.p12.lck
+ 2. cwallet.sso.lck
+ 3. ewallet.p12
+ 4. cwallet.sso
+ 5. addedCertificates.txt
+
+
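A quick way to confirm the files are in place is a small shell check. This is a sketch, not part of the lab script; the `check_wallet` helper name is illustrative, and the path is the directory created above:

```shell
# Illustrative helper: report which expected wallet files exist in a directory
check_wallet() {
  dir="$1"
  for f in ewallet.p12 cwallet.sso addedCertificates.txt; do
    # print OK for each file found, MISSING otherwise
    [ -f "$dir/$f" ] && echo "OK: $f" || echo "MISSING: $f"
  done
}

# Check the wallet directory created in the previous step
check_wallet /u01/app/oracle/dumpdir/wallet
```

Any `MISSING` line means the unzip step did not land the files where the migration expects them.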
+6. Save this path location; you will need it during the migration creation, where you will populate the SSL Wallet Path field with it:
+
+ For example: /u01/app/oracle/dumpdir/wallet
+
+7. The user performing the export or import requires network ACL privileges to access the network from the source and target database hosts. For this guide, run the following commands as SYS, assuming SYSTEM is the export or import user. Since your database is multitenant, these actions must be performed in CDB$ROOT. Replace clouduser and sslwalletdir accordingly:
+
+```
+
+define clouduser='system';        /* user performing export at source or import at target */
+define sslwalletdir='/u01/app/oracle/dumpdir/wallet';  /* SSL wallet path */
+BEGIN
+  dbms_network_acl_admin.append_host_ace(
+    host => '*', lower_port => 443, upper_port => 443,
+    ace  => xs$ace_type(privilege_list => xs$name_list('http', 'http_proxy'),
+                        principal_name => upper('&clouduser'),
+                        principal_type => xs_acl.ptype_db));
+
+  dbms_network_acl_admin.append_wallet_ace(
+    wallet_path => 'file:&sslwalletdir',
+    ace => xs$ace_type(privilege_list => xs$name_list('use_client_certificates', 'use_passwords'),
+                       principal_name => upper('&clouduser'),
+                       principal_type => xs_acl.ptype_db));
+END;
+/
+
+```
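One way to run the block above is to save it to a script and execute it as SYS in CDB$ROOT. The sketch below assumes a hypothetical script path (`/tmp/grant_acl.sql`); it is not part of the lab files:

```shell
# Sketch: write the ACL grants to a script, then run it as SYS in CDB$ROOT.
# /tmp/grant_acl.sql is an illustrative path, not part of the lab.
cat > /tmp/grant_acl.sql <<'EOF'
ALTER SESSION SET CONTAINER = CDB$ROOT;
define clouduser='system'
define sslwalletdir='/u01/app/oracle/dumpdir/wallet'
BEGIN
  dbms_network_acl_admin.append_host_ace(
    host => '*', lower_port => 443, upper_port => 443,
    ace  => xs$ace_type(privilege_list => xs$name_list('http', 'http_proxy'),
                        principal_name => upper('&clouduser'),
                        principal_type => xs_acl.ptype_db));
  dbms_network_acl_admin.append_wallet_ace(
    wallet_path => 'file:&sslwalletdir',
    ace => xs$ace_type(privilege_list => xs$name_list('use_client_certificates', 'use_passwords'),
                       principal_name => upper('&clouduser'),
                       principal_type => xs_acl.ptype_db));
END;
/
EOF
# On the database node, as the oracle user:
# sqlplus / as sysdba @/tmp/grant_acl.sql
```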
+8. Once the connect privilege is granted, connect as the relevant user (for example, SYSTEM) and verify that the privilege was granted using the following query:
+```
+
+ SELECT host, lower_port, upper_port, privilege, status
+ FROM user_network_acl_privileges;
+
+
+```
+
+You should see a similar output to the following:
+
+
+![grants output](./images/grants-output.png)
+
+## Task 4: Adding Data to the Database
1. Open an SSH terminal to the database instance. The instructions are for the Unix-style ssh command:
diff --git a/dms/create-source-database/images/grants-output.png b/dms/create-source-database/images/grants-output.png
new file mode 100644
index 000000000..099bca8eb
Binary files /dev/null and b/dms/create-source-database/images/grants-output.png differ
diff --git a/dms/introduction/introduction.md b/dms/introduction/introduction.md
index 6ab37a5ff..7e9b2c72c 100644
--- a/dms/introduction/introduction.md
+++ b/dms/introduction/introduction.md
@@ -20,20 +20,21 @@ In the current release of DMS we support Oracle databases located on-premises, i
| | |
|--------------------------|-------------------------|
| Source Databases | Oracle DB 11g, 12c, 18c, 19c, 21c: on-premises, third-party cloud, OCI |
-| Target Databases | ADB serverless and dedicated Co-managed Oracle Base Database (VM, BM) Exadata on Oracle Public Cloud. |
+| Target Databases | ADB Serverless and Dedicated; Co-managed Oracle Base Database (VM, BM); Exadata on Oracle Public Cloud |
+| Supported Source Environments | Oracle Cloud Infrastructure co-managed databases or on-premises environments; Amazon Web Services RDS Oracle Database; Linux x86-64, IBM AIX, Oracle Solaris |
| Migration Modes | Direct Access to Source (VPN or FastConnect); Indirect Access to Source (Agent on Source Env) |
| Initial Load (Offline Migration) | Logical Migration using Data Pump to Object Store; Data Pump using SQL*Net |
-| Replication (Online Migration) | GoldenGate Marketplace |
+| Replication (Online Migration) | GoldenGate Integrated Service; GoldenGate Marketplace |
The DMS service runs as a managed cloud service separate from the user's tenancy and resources. The service operates as a multitenant service in a DMS Service Tenancy and communicates with the user's resources using Private Endpoints (PEs). PEs are managed by DMS and are transparent to the user.
![DMS topology](images/dms-simplified-topology-2.png =80%x*)
-* **DMS Control Plane**: Used by DMS end user to manage Migration and Registered Database objects. The control plane is exposed through the DMS Console UI as well as the REST API.
+* **DMS Control Plane**: Used by DMS end user to manage Migration and Database Connection objects. The control plane is exposed through the DMS Console UI as well as the REST API.
* **DMS Data Plane**: Managed by DMS Control Plane and transparent to the user. The GGS Data Plane manages ongoing migration jobs and communicates with the user's databases and GoldenGate instance using PEs. The DMS data plane does not store any customer data, as data flows through GoldenGate and Data Pump directly within the user's tenancy.
* **Migration**: A Migration contains metadata for migrating one database. It contains information about source, target, and migration methods and is the central object for users to run migrations. After creating a migration, a user can validate the correctness of the environment and then run the migration to perform the copy of database data and schema metadata from source to target.
* **Migration Job**: A Migration Job displays the state of a given Migration execution, either for validation or migration purposes. A job consists of a number of sequential phases; users can opt to wait after a given phase for user input before resuming with the following phase.
-* **Registered Database**: A Registered Database represents information about a source or target database, such as connection and authentication credentials. DMS uses the OCI Vault to store credentials. A Registered Database is reusable across multiple Migrations.
+* **Database Connection**: A Database Connection represents information about a source or target database, such as connection and authentication credentials. DMS uses the OCI Vault to store credentials. A Database Connection is reusable across multiple Migrations.
Estimated Lab Time: 180 minutes -- this estimate is for the entire workshop - it is the sum of the estimates provided for each of the labs included in the workshop.
@@ -46,7 +47,7 @@ In this lab, you will:
* Create a Vault
* Create Databases
* Create an Object Storage Bucket
-* Create Registered Databases
+* Create Database Connections
* Create, Validate, and Run a Migration
### Prerequisites
diff --git a/dms/register-and-migrate/create-registered-databases.md b/dms/register-and-migrate/create-registered-databases.md
index cfa603402..5130f2b27 100644
--- a/dms/register-and-migrate/create-registered-databases.md
+++ b/dms/register-and-migrate/create-registered-databases.md
@@ -1,17 +1,17 @@
-# Create Registered Databases
+# Create Database Connections
## Introduction
-This lab walks you through the steps to register a database for use with DMS. Registered database resources enable networking and connectivity for the source and target databases
+This lab walks you through the steps to create a database connection to use with DMS. Database connection resources enable networking and connectivity for the source and target databases.
Estimated Lab Time: 20 minutes
### Objectives
In this lab, you will:
-* Create Registered Database for Source CDB
-* Create Registered Database for Source PDB
-* Create Registered Database for Target ADB
+* Create a database connection for Source CDB
+* Create a database connection for Source PDB
+* Create a database connection for Target ADB
* Create a Migration
### Prerequisites
@@ -25,20 +25,20 @@ In this lab, you will:
*Note: If you have a **Free Trial** account, when your Free Trial expires your account will be converted to an **Always Free** account. You will not be able to conduct Free Tier workshops unless the Always Free environment is available. **[Click here for the Free Tier FAQ page.](https://www.oracle.com/cloud/free/faq.html)***
-## Task 1: Create Registered Database for Source CDB
+## Task 1: Create a Database Connection for Source CDB
For this task you need the following info from previous steps:
* Source DB Private IP
* Source DB CDB Service Name
* Database Administrator Password
-1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Registered Databases**
+1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/registered-db.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
-2. Press **Register Database**
+2. Press **Create Connection**
- ![click Register Database](images/1-2.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourceCDB**
@@ -49,38 +49,36 @@ For this task you need the following info from previous steps:
- Database: **sourcedb**
- Connect String: Change existing string by replacing the qualified hostname with the **private IP** of the database node, for example:
- **10.0.0.3**:1521/sourcedb_iad158.sub12062328210.vcndmsla.oraclevcn.com
- - Subnet: Pick the Subnet that the DB is located in
+ - Subnet: Pick the Subnet that the DB is in
4. Press **Next**
- ![enter database details](images/1-4.png =50%x*)
+ ![Screenshot of register DB details and click next](images/create-db-next.png =50%x*)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **system**
- - Database Administrator Password: <*Admin password*>
- - SSH Database Server Hostname: <*DB Node Private IP Address*>
- - SSH Private Key: Select private key file
- - SSH Username: **opc**
- - SSH Sudo Location: **/usr/bin/sudo**
+ - Initial load database username: **system**
+ - Initial load database password: <*Admin password*>
+ - Don't check **Use different credentials for replication**
-6. Press **Register**
+
+6. Press **Create**
- ![enter connection details](images/1-6.png =50%x*)
+ ![Screenshot confirming DB connection creation](images/create-db-confirm-initial-load.png =40%x*)
-## Task 2: Create Registered Database for Source PDB
+## Task 2: Create Database Connection for Source PDB
For this task you need the following info from previous steps:
* Source DB Private IP
* Source DB PDB Service Name
* Database Administrator Password
-1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Registered Databases**
+1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/registered-db.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
-2. Press **Register Database**
+2. Press **Create Connection**
- ![click Register Database](images/1-2.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **SourcePDB**
@@ -95,32 +93,30 @@ For this task you need the following info from previous steps:
4. Press **Next**
- ![database details for PDB](images/2-4.png =50%x*)
+ ![Screenshot of register db](images/create-db-next-second.png =50%x*)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **system**
- - Database Administrator Password: <*Admin password*>
- - SSH Database Server Hostname: <*DB Node Private IP Address*>
- - SSH Private Key: Select **private** key file
- - SSH Username: **opc**
- - SSH Sudo Location: **/usr/bin/sudo**
+ - Initial load database username: **system**
+ - Initial load database password: <*Admin password*>
+ - Don't check **Use different credentials for replication**
+
-6. Press **Register**
+6. Press **Create**
- ![connection details press register](images/1-6.png =50%x*)
+ ![Screenshot of confirm register DB](images/create-db-confirm-initial-load.png =40%x*)
-## Task 3: Create Registered Database for Target ADB
+## Task 3: Create Database Connection for Target ADB
For this task you need the following info from previous steps:
* Database Administrator Password
-1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), **Migration & Disaster Recovery > Database Migration > Registered Databases**
+1. In the OCI Console Menu ![menu hamburger](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Database Connections**
- ![registered database navigation](images/registered-db.png =90%x*)
+ ![registered database navigation](images/db-connection.png =50%x*)
-2. Press **Register Database**
+2. Press **Create Connection**
- ![click Register Database](images/1-2.png =90%x*)
+ ![Screenshot of click register db](images/click-create-db.png =50%x*)
3. On the page Database Details, fill in the following entries, otherwise leave defaults:
- Name: **TargetATP**
@@ -131,43 +127,44 @@ For this task you need the following info from previous steps:
4. Press **Next**
- ![ATP database details](images/3-4.png =50%x*)
+ ![Screenshot of press next after entering details](images/target-press-next.png)
5. On the page Connection Details, fill in the following entries, otherwise leave defaults:
- - Database Administrator Username: **admin**
- - Database Administrator Password: <*Admin password*>
+ - Initial load database username: **admin**
+ - Initial load database password: <*Admin password*>
+ - Don't check **Use different credentials for replication**
-6. Press **Register**
+6. Press **Create**
- ![connection details ATP](images/3-6.png =50%x*)
+ ![Screenshot of confirm db registration](images/confirm-target-connection-initial-load.png)
## Task 4: Create Migration
1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![create migration navigation](images/migration-create.png =90%x*)
+ ![create migration navigation](images/migration-create.png =50%x*)
2. Press **Create Migration**
- ![press create migration](images/2.png =90%x*)
+ ![Screenshot of press create migration](images/press-create-migration.png =50%x*)
3. On the page **Add Details**, fill in the following entries, otherwise leave defaults:
- Name: **TestMigration**
- Vault: **DMS_Vault**
- Encryption Key: **DMS_Key**
- ![create migration details](images/add-details.png =40%x*)
+ ![Screenshot to add vault details](images/add-details.png)
4. Press **Next**
5. On the page **Select Databases**, fill in the following entries, otherwise leave defaults:
- Source Database: **SourcePDB**
- *Check* Database is pluggable database (PDB)
- - Registered Container Database: **SourceCDB**
+ - Container Database connection: **SourceCDB**
- Target Database: **TargetATP**
- ![select databases](images/select-databases.png =40%x*)
+ ![Screenshot of source db selection](images/select-databases.png)
6. On the page **Migration Options**, fill in the following entries, otherwise leave defaults:
- In **Initial Load**, select **Datapump via Object Storage**
@@ -175,12 +172,14 @@ For this task you need the following info from previous steps:
- Export Directory Object:
- Name: **dumpdir**
- Path: **/u01/app/oracle/dumpdir**
- - *DO NOT Check* Use Online Replication
-
- ![complete migration creation](images/Test-migration.png =40%x*)
-
-
- 7. Press **Create**
+ - Source Database file system SSL wallet path (you manually downloaded the required certificates in a previous lab):
+ - **/u01/app/oracle/dumpdir/wallet**
+
+ - Don't check **Use Online Replication**
+
+ - Press **Create** to initiate the Migration creation
+
+ ![Screenshot for migration options](images/test-migration-1-offline.png)
You may now [proceed to the next lab](#next).
diff --git a/dms/register-and-migrate/images/add-details.png b/dms/register-and-migrate/images/add-details.png
index 9e327ee9d..bce6e20e0 100644
Binary files a/dms/register-and-migrate/images/add-details.png and b/dms/register-and-migrate/images/add-details.png differ
diff --git a/dms/register-and-migrate/images/click-create-db.png b/dms/register-and-migrate/images/click-create-db.png
new file mode 100644
index 000000000..28c811501
Binary files /dev/null and b/dms/register-and-migrate/images/click-create-db.png differ
diff --git a/dms/register-and-migrate/images/confirm-target-connection-initial-load.png b/dms/register-and-migrate/images/confirm-target-connection-initial-load.png
new file mode 100644
index 000000000..e3ae36b7d
Binary files /dev/null and b/dms/register-and-migrate/images/confirm-target-connection-initial-load.png differ
diff --git a/dms/register-and-migrate/images/confirm-target-connection.png b/dms/register-and-migrate/images/confirm-target-connection.png
new file mode 100644
index 000000000..c7949df1a
Binary files /dev/null and b/dms/register-and-migrate/images/confirm-target-connection.png differ
diff --git a/dms/register-and-migrate/images/create-db-confirm-initial-load.png b/dms/register-and-migrate/images/create-db-confirm-initial-load.png
new file mode 100644
index 000000000..cf670c471
Binary files /dev/null and b/dms/register-and-migrate/images/create-db-confirm-initial-load.png differ
diff --git a/dms/register-and-migrate/images/create-db-next-second.png b/dms/register-and-migrate/images/create-db-next-second.png
new file mode 100644
index 000000000..058d87c32
Binary files /dev/null and b/dms/register-and-migrate/images/create-db-next-second.png differ
diff --git a/dms/register-and-migrate/images/create-db-next.png b/dms/register-and-migrate/images/create-db-next.png
new file mode 100644
index 000000000..3c3b392f6
Binary files /dev/null and b/dms/register-and-migrate/images/create-db-next.png differ
diff --git a/dms/register-and-migrate/images/db-connection.png b/dms/register-and-migrate/images/db-connection.png
new file mode 100644
index 000000000..9f1b4b523
Binary files /dev/null and b/dms/register-and-migrate/images/db-connection.png differ
diff --git a/dms/register-and-migrate/images/press-create-migration.png b/dms/register-and-migrate/images/press-create-migration.png
new file mode 100644
index 000000000..fd1135066
Binary files /dev/null and b/dms/register-and-migrate/images/press-create-migration.png differ
diff --git a/dms/register-and-migrate/images/select-databases.png b/dms/register-and-migrate/images/select-databases.png
index 9de1e6c1d..25363ce6b 100644
Binary files a/dms/register-and-migrate/images/select-databases.png and b/dms/register-and-migrate/images/select-databases.png differ
diff --git a/dms/register-and-migrate/images/target-press-next.png b/dms/register-and-migrate/images/target-press-next.png
new file mode 100644
index 000000000..3c8232282
Binary files /dev/null and b/dms/register-and-migrate/images/target-press-next.png differ
diff --git a/dms/register-and-migrate/images/test-migration-1-offline.png b/dms/register-and-migrate/images/test-migration-1-offline.png
new file mode 100644
index 000000000..0c16845fe
Binary files /dev/null and b/dms/register-and-migrate/images/test-migration-1-offline.png differ
diff --git a/dms/validate-and-run/images/click-phases.png b/dms/validate-and-run/images/click-phases.png
new file mode 100644
index 000000000..f086b104c
Binary files /dev/null and b/dms/validate-and-run/images/click-phases.png differ
diff --git a/dms/validate-and-run/images/press-validate.png b/dms/validate-and-run/images/press-validate.png
new file mode 100644
index 000000000..60d41dfcd
Binary files /dev/null and b/dms/validate-and-run/images/press-validate.png differ
diff --git a/dms/validate-and-run/images/select-testmigration.png b/dms/validate-and-run/images/select-testmigration.png
new file mode 100644
index 000000000..ced28c32a
Binary files /dev/null and b/dms/validate-and-run/images/select-testmigration.png differ
diff --git a/dms/validate-and-run/validate-migration.md b/dms/validate-and-run/validate-migration.md
index 3c2347044..e18c8be3a 100644
--- a/dms/validate-and-run/validate-migration.md
+++ b/dms/validate-and-run/validate-migration.md
@@ -27,13 +27,13 @@ In this lab, you will:
2. Select **TestMigration**
- ![click on testMigration](images/2.png =90%x*)
+ ![click on testMigration](images/select-testmigration.png)
3. If Migration is still being created, wait until Lifecycle State is Active
4. Press **Validate** button
- ![press validate](images/3.png =90%x*)
+ ![press validate](images/press-validate.png)
5. Click on **Jobs** in left-hand **Resources** list
@@ -43,7 +43,7 @@ In this lab, you will:
7. Click on **Phases** in left-hand **Resources** list
- ![click phases menu](images/5.png =20%x*)
+ ![click phases menu](images/click-phases.png =17%x*)
8. Phases will be shown and status will be updated as phases are completed. It can take 2 minutes before the first phase is shown.
![phases are displayed](images/Pump.png =90%x*)
@@ -58,11 +58,11 @@ In this lab, you will:
1. In the OCI Console Menu ![hamburger icon](images/hamburger.png =22x22), go to **Migration & Disaster Recovery > Database Migration > Migrations**
- ![migrations navigation](images/migration-create.png =90%x*)
+ ![create migration navigation](images/migration-create.png =50%x*)
2. Select **TestMigration**
- ![click on testmigration](images/2.png =90%x*)
+ ![Screenshot of select testmigration](images/select-testmigration.png =50%x*)
3. Press **Start** to begin the Migration. Please note, if a dialog box appears, press **Start** in the dialog box to begin the migration.
@@ -70,7 +70,7 @@ In this lab, you will:
4. Click on **Jobs** in left-hand **Resources** list
- 5. Click on most recent Evaluation Job
+ 5. Click on most recent Migration Job
6. Click on **Phases** in left-hand **Resources** list
diff --git a/heatwave-lakehouse/add-heatwave-cluster/add-heatwave-cluster.md b/heatwave-lakehouse/add-heatwave-cluster/add-heatwave-cluster.md
index 2c14c6cf5..f187fa619 100644
--- a/heatwave-lakehouse/add-heatwave-cluster/add-heatwave-cluster.md
+++ b/heatwave-lakehouse/add-heatwave-cluster/add-heatwave-cluster.md
@@ -66,20 +66,20 @@ In this lab, you will be guided through the following task:
*ERROR: Schema `mysql_customer_orders` already contains a table named customers*
- - c. Make sure the **mysql\_customer\_orders** schema was loaded
+ - c. Change to SQL mode
```bash
- show databases;
+ \sql
```
- ![Database Schema List](./images/list-schemas-after.png "list schemas second view")
-
- - d. Change to SQL mode
+ - d. Make sure the **mysql\_customer\_orders** schema was loaded
```bash
- \sql
+ show databases;
```
+ ![Database Schema List](./images/list-schemas-after.png "list schemas second view")
+
5. View the mysql\_customer\_orders total records per table:
```bash
diff --git a/heatwave-lakehouse/create-heatwave-vcn-db/create-heatwave-vcn-db.md b/heatwave-lakehouse/create-heatwave-vcn-db/create-heatwave-vcn-db.md
index d24c9ceb6..1fda78e96 100644
--- a/heatwave-lakehouse/create-heatwave-vcn-db/create-heatwave-vcn-db.md
+++ b/heatwave-lakehouse/create-heatwave-vcn-db/create-heatwave-vcn-db.md
@@ -261,14 +261,17 @@ In this lab, you will be guided through the following tasks:
```
![HeatWave add host](./images/mysql-host.png "mysql host ")
+14. Go to the Configuration tab. Click **Select a MySQL version** and choose the latest MySQL version for the DB system.
-14. Select the Data Import tab.
+ ![Select mysql version](./images/mysql-configuration-version.png "Select mysql version")
-15. Use the Image below to identify your OCI Region.
+15. Select the Data Import tab.
+
+16. Use the Image below to identify your OCI Region.
![HeatWave Find Region](./images/regionSelector.png "region Selector")
-16. Click on your localized geographic area
+17. Click on your localized geographic area
## North America (NA)
@@ -309,27 +312,27 @@ In this lab, you will be guided through the following tasks:
```
+18. If your OCI Region is not listed in step 17, don't worry; you will be able to load the DB data in Lab 4, Task 1. Please skip to step 20.
+18. If your OCI Region is not listed in step 16, don't worry, You will be able to load the DB Data in Lab 4 Task 1. Please skip to step 19.
-18. The Data Import Link entry should look like this:
+19. The Data Import Link entry should look like this:
![HeatWave PAR Import](./images/mysql-data-import.png "mysql data import ")
-19. Review **Create MySQL DB System** Screen
+20. Review **Create MySQL DB System** Screen
![HeatWave create button](./images/mysql-create-button.png "mysql create dbbutton")
Click the '**Create**' button
-20. The New MySQL DB System will be ready to use after a few minutes
+21. The New MySQL DB System will be ready to use after a few minutes
The state will be shown as 'Creating' during the creation
![HeatWave create state](./images/mysql-heatwave-creating.png "mysql heatwave creating ")
-21. The state 'Active' indicates that the DB System is ready for use
+22. The state 'Active' indicates that the DB System is ready for use
![HeatWave create complete](./images/mysql-heatwave-active.png "mysql heatwave active ")
-22. On **heatwave-db** Page,select the **Connections** tab, check and save the Endpoint (Private IP Address). Later, you will need this value to connect to the Heatwave DB using the MySQL Shell client tool.
+23. On the **heatwave-db** page, select the **Connections** tab, then check and save the Endpoint (Private IP Address). Later, you will need this value to connect to the HeatWave DB using the MySQL Shell client tool.
![HeatWave create complete connection](./images/mysql-heatwave-connection-tab.png "mysql heatwave connection ")
You may now **proceed to the next lab**
diff --git a/heatwave-lakehouse/create-heatwave-vcn-db/images/mysql-configuration-version.png b/heatwave-lakehouse/create-heatwave-vcn-db/images/mysql-configuration-version.png
new file mode 100644
index 000000000..6d83a44bc
Binary files /dev/null and b/heatwave-lakehouse/create-heatwave-vcn-db/images/mysql-configuration-version.png differ
diff --git a/heatwave-lakehouse/create-lakehouse-files/create-lakehouse-files.md b/heatwave-lakehouse/create-lakehouse-files/create-lakehouse-files.md
index 0200890bc..890cb0f2f 100644
--- a/heatwave-lakehouse/create-lakehouse-files/create-lakehouse-files.md
+++ b/heatwave-lakehouse/create-lakehouse-files/create-lakehouse-files.md
@@ -14,7 +14,7 @@ A set of files have been created for you to use in this workshop. You will creat
- An Oracle Trial or Paid Cloud Account
- Some Experience with MySQL Shell
-- Completed Lab 3
+- Completed Lab 5
## Task 1: Download and unzip Sample files
@@ -41,19 +41,19 @@ A set of files have been created for you to use in this workshop. You will creat
3. Download sample files
```bash
- wget https://objectstorage.us-ashburn-1.oraclecloud.com/p/nnsIBVX1qztFmyAuwYIsZT2p7Z-tWBcuP9xqPCdND5LzRDIyBHYqv_8a26Z38Kqq/n/mysqlpm/b/plf_mysql_customer_orders/o/lakehouse/lakehouse-order.zip
+ wget https://objectstorage.us-ashburn-1.oraclecloud.com/p/11vOOD1Z73v4baInYk3QlKOOZWb1BMo4gIcogWrO0jS4GQ29yFaQxwW9Jl6ufOFm/n/mysqlpm/b/mysql_customer_orders/o/lakehouse/lakehouse-orders-v3.zip
```
-4. Unzip lakehouse-order.zip file which will generate folder datafiles with 4 files
+4. Unzip the lakehouse-orders-v3.zip file, which will generate the folder data with 4 files
```bash
- unzip lakehouse-order.zip
+ unzip lakehouse-orders-v3.zip
```
-5. Go into the lakehouse/datafiles folder and list all of the files
+5. Go into the lakehouse/data folder and list all of the files
```bash
- cd ~/lakehouse/datafiles
+ cd ~/lakehouse/data
```
```bash
@@ -93,10 +93,10 @@ A set of files have been created for you to use in this workshop. You will creat
## Task 3: Add files into the Bucket using the saved PAR URL
-1. Go into the lakehouse/datafiles folder and list all of the files
+1. Go into the lakehouse/data folder and list all of the files
```bash
- cd ~/lakehouse/datafiles
+ cd ~/lakehouse/data
```
```bash
diff --git a/heatwave-lakehouse/create-lakehouse-files/images/datafiles-list.png b/heatwave-lakehouse/create-lakehouse-files/images/datafiles-list.png
index 972e43ca0..fa2fe2b99 100644
Binary files a/heatwave-lakehouse/create-lakehouse-files/images/datafiles-list.png and b/heatwave-lakehouse/create-lakehouse-files/images/datafiles-list.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/create-delivery-order.png b/heatwave-lakehouse/load-csv-data/images/create-delivery-order.png
index c15da55a7..40cd633b8 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/create-delivery-order.png and b/heatwave-lakehouse/load-csv-data/images/create-delivery-order.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/create-delivery-table.png b/heatwave-lakehouse/load-csv-data/images/create-delivery-table.png
index 68174b602..3db4082fe 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/create-delivery-table.png and b/heatwave-lakehouse/load-csv-data/images/create-delivery-table.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/create-table-no-fieldname.png b/heatwave-lakehouse/load-csv-data/images/create-table-no-fieldname.png
index 3b294f1db..f410fe0f2 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/create-table-no-fieldname.png and b/heatwave-lakehouse/load-csv-data/images/create-table-no-fieldname.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/load-delivery-table.png b/heatwave-lakehouse/load-csv-data/images/load-delivery-table.png
index 561d143ef..6acd44d69 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/load-delivery-table.png and b/heatwave-lakehouse/load-csv-data/images/load-delivery-table.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/load-script-dryrun.png b/heatwave-lakehouse/load-csv-data/images/load-script-dryrun.png
index c5adc3607..180c55005 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/load-script-dryrun.png and b/heatwave-lakehouse/load-csv-data/images/load-script-dryrun.png differ
diff --git a/heatwave-lakehouse/load-csv-data/images/set-table-example.png b/heatwave-lakehouse/load-csv-data/images/set-table-example.png
index 662085d2b..3a3f4ab82 100644
Binary files a/heatwave-lakehouse/load-csv-data/images/set-table-example.png and b/heatwave-lakehouse/load-csv-data/images/set-table-example.png differ
diff --git a/heatwave-lakehouse/load-csv-data/load-csv-data.md b/heatwave-lakehouse/load-csv-data/load-csv-data.md
index c992d4483..7688200dc 100644
--- a/heatwave-lakehouse/load-csv-data/load-csv-data.md
+++ b/heatwave-lakehouse/load-csv-data/load-csv-data.md
@@ -26,27 +26,28 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
## Task 1: Create the PAR Link for the "delivery_order" files
-1. To create a PAR URL
- - Go to menu **Storage —> Buckets**
- ![Bucket menu](./images/storage-bucket-menu.png "storage bucket menu")
+1. Create a PAR URL for all of the **order folder** objects with a prefix
- - Select **lakehouse-files —> order** folder.
-2. Select the first file —> **delivery-orders-1.csv** and click the three vertical dots.
-3. Click on **Create Pre-Authenticated Request**
+ - a. From your OCI console, navigate to your lakehouse-files bucket.
+ - b. Select the **order** folder and click the three vertical dots.
- ![delivery-orders-1.csv 3 dots](./images/storage-create-par-orders.png "storage create par orders")
+ ![Select folder](./images/storage-delivery-orders-folder.png "storage delivery order folder")
+
+ - c. Click on ‘Create Pre-Authenticated Request’.
+ - d. Select the ‘Objects with prefix’ option under ‘Pre-Authenticated Request Target’.
+ - e. Leave the ‘Access Type’ option as-is: ‘Permit object reads on those with the specified prefix’.
+ - f. Click to select the ‘Enable Object Listing’ checkbox.
+ - g. Click the ‘Create Pre-Authenticated Request’ button.
-4. The **Object** option will be pre-selected.
-5. Keep **Permit object reads** selected
-6. Kep the other options for **Access Type** unchanged.
-7. Click the **Create Pre-Authenticated Request** button.
+ ![Create Folder PAR](./images/storage-delivery-orders-folder-page.png "storage delivery order folder page")
- ![Create PAR](./images/storage-create-par-orders-page.png "storage create par orders page")
+ - h. Click the ‘Copy’ icon to copy the PAR URL.
+ - i. Save the generated PAR URL; you will need it later.
+ - j. You can test the URL by pasting it into your browser. It should return output like this:
-8. Click the **Copy** icon to copy the PAR URL.
- ![Copy PAR](./images/storage-create-par-orders-page-copy.png "storage create par orders page copy")
+ ![List folder file](./images/storage-delivery-orders-folder-list.png "storage delivery order folder list")
-9. Save the generated PAR URL; you will need it in the next task
+2. Save the generated PAR URL; you will need it in the next task
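Besides pasting the PAR URL in a browser, you can sanity-check the listing programmatically. A minimal Python sketch, assuming the listing response has the usual `{"objects": [{"name": ...}]}` shape (the file names below are illustrative, not your real bucket contents):

```python
import json

# Illustrative listing payload, shaped like the JSON an Object Storage
# PAR with "Enable Object Listing" returns (real names will differ).
listing = json.loads("""
{"objects": [
  {"name": "order/delivery-orders-1.csv"},
  {"name": "order/delivery-orders-2.csv"},
  {"name": "order/delivery-orders-3.csv"}
]}
""")

# Keep only the CSV objects under the "order/" prefix.
csv_names = [o["name"] for o in listing["objects"]
             if o["name"].startswith("order/") and o["name"].endswith(".csv")]
print(csv_names)
```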
## Task 2: Connect to your MySQL HeatWave system using Cloud Shell
@@ -92,7 +93,7 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
## Task 3: Run Autoload to infer the schema and estimate capacity required for the DELIVERY table in the Object Store
-1. Part of the DELIVERY information for orders is contained in the delivery-orders-1.csv file in object store for which we have created a PAR URL in the earlier task. In a later task, we will load the other files for the DELIVER_ORDERS table into MySQL HeatWave. Enter the following commands one by one and hit Enter.
+1. The DELIVERY information for orders is contained in the delivery-orders CSV files in the Object Store, for which we created a PAR URL in the earlier task. Enter the following commands one by one and hit Enter.
2. This sets the schema we will load table data into. Don’t worry if this schema has not been created. Autopilot will generate the commands for you to create this schema if it doesn’t exist.
@@ -107,14 +108,16 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
"db_name": "mysql_customer_orders",
"tables": [{
"table_name": "delivery_orders",
- "dialect":
- {
- "format": "csv",
- "field_delimiter": "\\t",
- "record_delimiter": "\\n"
- },
- "file": [{"par": "(PAR URL)"}]
- }] }]';
+ "dialect": {
+ "format": "csv",
+ "field_delimiter": "\\t",
+ "record_delimiter": "\\r\\n",
+ "has_header": true,
+ "is_strict_mode": false},
+ "file": [{"par": "(PAR URL)"}]
+ }
+ ]}
+ ]';
```
- It should look like the following example (Be sure to include the PAR Link inside at of quotes("")):
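As a quick sanity check, the value assigned to `@dl_tables` above is plain JSON. A minimal Python sketch (with a dummy PAR URL in place of yours) confirms it parses and carries the dialect options Autoload expects:

```python
import json

# Same structure as the SET @dl_tables command above; the PAR URL is a
# placeholder. The doubled backslashes become \t and \r\n after parsing.
dl_tables = """[{
  "db_name": "mysql_customer_orders",
  "tables": [{
    "table_name": "delivery_orders",
    "dialect": {
      "format": "csv",
      "field_delimiter": "\\t",
      "record_delimiter": "\\r\\n",
      "has_header": true,
      "is_strict_mode": false},
    "file": [{"par": "https://example.com/placeholder-par"}]
  }]
}]"""

dialect = json.loads(dl_tables)[0]["tables"][0]["dialect"]
print(dialect["format"], repr(dialect["field_delimiter"]))
```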
@@ -124,7 +127,7 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
4. This command populates all the options needed by Autoload:
```bash
- SET @options = JSON_OBJECT('mode', 'dryrun', 'policy', 'disable_unsupported_columns', 'external_tables', CAST(@dl_tables AS JSON));
+ SET @options = JSON_OBJECT('mode', 'dryrun', 'policy', 'disable_unsupported_columns', 'external_tables', CAST(@dl_tables AS JSON));
```
5. Run this Autoload command:
@@ -147,7 +150,7 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
![Dryrun script](./images/load-script-dryrun.png "load script dryrun")
-8. The execution result conatins the SQL statements needed to create the table and then load this table data from the Object Store into HeatWave.
+8. The execution result contains the SQL statements needed to create the table and then load this table data from the Object Store into HeatWave.
![Create Table](./images/create-delivery-order.png "create delivery order")
@@ -155,23 +158,9 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
![autopilot create table with no field name](./images/create-table-no-fieldname.png "autopilot create table with no field name")
-10. Modify the **CREATE TABLE** command to replace the generic column names, such as **col\_1**, with descriptive column names. Use the following values:
-
- - `col_1 : orders_delivery`
- - `col_2 : order_id`
- - `col_3 : customer_id`
- - `col_4 : order_status`
- - `col_5 : store_id`
- - `col_6 : delivery_vendor_id`
- - `col_7 : estimated_time_hours`
-
-11. Your modified **CREATE TABLE** command should look like the following example:
-
- ![autopilot create table with field name](./images/create-table-fieldname.png "autopilot create table with field name")
-
-12. Execute the modified **CREATE TABLE** command to create the delivery_orders table.
+10. Execute the **CREATE TABLE** command to create the delivery_orders table.
-13. The create command and result should look lie this
+11. The create command and result should look like this:
![Delivery Table create](./images/create-delivery-table.png "create delivery table")
@@ -191,25 +180,21 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
ALTER TABLE `mysql_customer_orders`.`delivery_orders` SECONDARY_LOAD;
```
-3. Once Autoload completes,point to the schema
-
- ```bash
- use mysql_customer_orders
- ```
-
-4. Check the number of rows loaded into the table.
+3. Check the number of rows loaded into the table.
```bash
select count(*) from delivery_orders;
```
-5. View a sample of the data in the table.
+ The DELIVERY table has 34 million rows.
+
+4. View a sample of the data in the table.
```bash
select * from delivery_orders limit 5;
```
- a. Join the delivery_orders table with other table in the schema
+5. Join the delivery_orders table with another table in the schema:
```bash
select o.* ,d.* from orders o
@@ -217,76 +202,10 @@ We will now load the DELIVERY_ORDERS table from the Object Store. This is a larg
where o.order_id = 93751524;
```
-6. Your output for steps 2 thru 5 should look like this:
-
+6. Your output for steps 2 through 5 should look like this:
+
![Add data to table](./images/load-delivery-table.png "load delivery table")
-7. Your DELIVERY table is now ready to be used in queries with other tables. In the next lab, we will see how to load additional data for the DELIVERY table from the Object Store using different options.
-
-## Task 5: Load all data for DELIVERY table from Object Store
-
-The DELIVERY table contains data loaded from one file so far. If new data arrives as more files, we can load those files too. The first option is by specifying a list of the files in the table definition. The second option is by specifying a prefix and have all files with that prefix be source files for the DELIVERY table. The third option is by specifying the entire folder in the Object Store to be the source file for the DELIVERY table.
-
-We will use the second option which Loads the data by specifying a PAR URL for all objects with a prefix.
-
-1. First unload the DELIVERY table from HeatWave:
-
- ```bash
- ALTER TABLE delivery_orders SECONDARY_UNLOAD;
- ```
-
-2. Create a PAR URL for all objects with a prefix
-
- - a. From your OCI console, navigate to your lakehouse-files bucket in OCI.
- - b. Select the folder —> order and click the three vertical dots.
-
- ![Select folder](./images/storage-delivery-orders-folder.png "storage delivery order folder")
-
- - c. Click on ‘Create Pre-Authenticated Request’
- - d. Click to select the ‘Objects with prefix’ option under ‘PreAuthentcated Request Target’.
- - e. Leave the ‘Access Type’ option as-is: ‘Permit object reads on those with the specified prefix’.
- - g. Click to select the ‘Enable Object Listing’ checkbox.
- - h. Click the ‘Create Pre-Authenticated Request’ button.
-
- ![Create Folder PAR](./images/storage-delivery-orders-folder-page.png "storage delivery order folder page")
-
- - i. Click the ‘Copy’ icon to copy the PAR URL.
- - j. Save the generated PAR URL; you will need it later.
- - k. You can test the URL out by pasting it in your browser. It should return output like this:
-
- ![List folder file](./images/storage-delivery-orders-folder-list.png "storage delivery order folder list")
-
-3. Since we have already created the table, we will not run Autopilot again. Instead we will simply go ahead and change the table definition to point it to this new PAR URL as the table source.
-
-4. Copy this command and replace the **(PAR URL)** with the one you saved earlier. It will be the source for the DELIVERY table:
-
- ```bash
- ALTER TABLE `mysql_customer_orders`.`delivery_orders` ENGINE_ATTRIBUTE='{"file": [{"par": "(PAR URL)"}], "dialect": {"format": "csv", "field_delimiter": "\\t", "record_delimiter": "\\n"}}';
- ```
-
-5. Your command should look like the following example. Now Execute your modified command
-
- ![autopilot alter table](./images/alter-table.png "autopilot alter table")
- **Output**
-
- ![Add data to Table](./images/load-all-delivery-table.png "load all delivery table")
-
-6. Load data into the DELIVERY table:
-
- ```bash
- alter table delivery_orders secondary_load;
- ```
-
-7. View the number of rows in the DELIVERY table:
-
- ```bash
- select count(*) from delivery_orders;
- ```
-
- The DELIVERY table now has 34 million rows.
-8. Output of steps 6 and 7
-
-![Add data to tabel](./images/load-final-delivery-table.png "load final delivery table")
+7. Your DELIVERY table is now ready to be used in queries with other tables.
You may now **proceed to the next lab**
diff --git a/heatwave-lakehouse/workshops/freetier/manifest.json b/heatwave-lakehouse/workshops/freetier/manifest.json
index 8599d118d..af5149511 100644
--- a/heatwave-lakehouse/workshops/freetier/manifest.json
+++ b/heatwave-lakehouse/workshops/freetier/manifest.json
@@ -15,7 +15,7 @@
},
{
- "title": "Lab 1: Create Compartment, VCN and MySQL HeatWave DB System while loading DB Data",
+ "title": "Lab 1: Create Compartment, VCN and MySQL HeatWave DB System",
"filename": "../../create-heatwave-vcn-db/create-heatwave-vcn-db.md"
},
diff --git a/heatwave-lakehouse/workshops/ocw23-freetier/manifest.json b/heatwave-lakehouse/workshops/ocw23-freetier/manifest.json
index 2e08185af..060ad5125 100644
--- a/heatwave-lakehouse/workshops/ocw23-freetier/manifest.json
+++ b/heatwave-lakehouse/workshops/ocw23-freetier/manifest.json
@@ -15,7 +15,7 @@
},
{
- "title": "Lab 1: Create Compartment, VCN and MySQL HeatWave DB System while loading DB Data",
+            "title": "Lab 1: Create Compartment, VCN and MySQL HeatWave DB System",
"filename": "../../create-heatwave-vcn-db/create-heatwave-vcn-db.md"
},
diff --git a/heatwave-movie-stream/add-data-mysql/add-data-mysql.md b/heatwave-movie-stream/add-data-mysql/add-data-mysql.md
new file mode 100644
index 000000000..064d0a301
--- /dev/null
+++ b/heatwave-movie-stream/add-data-mysql/add-data-mysql.md
@@ -0,0 +1,262 @@
+# Add MovieLens data to MySQL HeatWave
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+In this lab you will be guided through importing the data from the SQL files generated with Python in the previous lab. To do that, we will create the movies schema and the tables item, user, and data0, where the imported data will reside.
+
+Click the following link for an overview of the MovieLens100k dataset files:
+
+- [README file for the MovieLens dataset](https://files.grouplens.org/datasets/movielens/ml-100k-README.txt).
+
+_Estimated Time:_ 15 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Creating the primary database and tables for the Movies: 'item', 'user' and 'data'
+- Sourcing the data into the newly created tables with MySQL Shell
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with Linux and Python
+- Completed Lab 4
+
+## Task 1: Add movies data to HeatWave
+
+1. Go to Cloud shell to SSH into the new Compute Instance
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. Connect to MySQL Shell. At the command line, use the following command:
+
+ ```bash
+ mysqlsh -uadmin -p -h 10.0.1... --sql
+ ```
+
+ ![MySQL Shell Connect](./images/mysql-shell-login.png " mysql shell login")
+
+3. List the schemas in your HeatWave instance
+
+ ```bash
+ show databases;
+ ```
+
+    ![List Database Schemas](./images/list-schemas-first.png "list schemas first")
+
+4. Create the movie database
+
+ Enter the following command at the prompt
+
+ ```bash
+ CREATE SCHEMA movies;
+ ```
+
+5. Use the movie database
+
+ Enter the following command at the prompt
+
+ ```bash
+ USE movies;
+ ```
+
+6. Define the tables to store the MovieLens data.
+
+ a. Enter the following command at the prompt. **Click on Reveal code block**
+
+
+ **_Reveal code block_**
+ ```bash
+
+ CREATE TABLE `item` (
+ `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
+ `item_id` int DEFAULT NULL,
+ `title` varchar(100) DEFAULT NULL,
+ `release_year` varchar(10) DEFAULT NULL,
+ `release_date` varchar(20) DEFAULT NULL,
+ `URL` varchar(250) DEFAULT NULL,
+ `genre_Unknown` int DEFAULT NULL,
+ `genre_Action` int DEFAULT NULL,
+ `genre_Adventure` int DEFAULT NULL,
+ `genre_Animation` int DEFAULT NULL,
+ `genre_Children` int DEFAULT NULL,
+ `genre_Comedy` int DEFAULT NULL,
+ `genre_Crime` int DEFAULT NULL,
+ `genre_Documentary` int DEFAULT NULL,
+ `genre_Drama` int DEFAULT NULL,
+ `genre_Fantasy` int DEFAULT NULL,
+ `genre_Filmnoir` int DEFAULT NULL,
+ `genre_Horror` int DEFAULT NULL,
+ `genre_Musical` int DEFAULT NULL,
+ `genre_Mystery` int DEFAULT NULL,
+ `genre_Romance` int DEFAULT NULL,
+ `genre_Scifi` int DEFAULT NULL,
+ `genre_Thriller` int DEFAULT NULL,
+ `genre_War` int DEFAULT NULL,
+ `genre_Western` int DEFAULT NULL,
+ PRIMARY KEY (`my_row_id`)
+ );
+
+ CREATE TABLE `user` ( `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
+ `user_id` int DEFAULT NULL,
+ `user_age` int DEFAULT NULL,
+ `user_gender` varchar(20) DEFAULT NULL,
+ `user_occupation` varchar(30) DEFAULT NULL,
+ `user_zipcode` varchar(30) DEFAULT NULL,
+ PRIMARY KEY (`my_row_id`)
+ );
+
+ CREATE TABLE `data0` (
+ `user_id` varchar(5) DEFAULT NULL,
+ `item_id` varchar(7) DEFAULT NULL,
+ `rating` int DEFAULT NULL
+ );
+
+
+ ```
+
+
+ b. Hit **ENTER** to execute the last command
+
+ ![create primary tables ](./images/primary-tables-create.png "primary-tables-create ")
+7. Source the SQL files into your tables with MySQL Shell
+
+ a. Make sure you are in the movie database
+
+ ```bash
+ USE movies;
+ ```
+
+ b. Source the files into their tables
+
+ Make sure to replace the path of the file with your actual path if it is not the same.
+ Enter the following command at the prompt
+
+ ```bash
+
+ SOURCE /home/opc/ml-100k/item.sql
+
+ SOURCE /home/opc/ml-100k/user.sql
+
+ SOURCE /home/opc/ml-100k/data.sql
+
+ ```
+ c. Hit **ENTER** to execute the last command
+
+ **This operation might take a couple of minutes**
+
+    You will see the following result
+
+ ![source sql files output](./images/source-sql-data-output.png "source-sql-files-output ")
+
+8. Check the number of rows for every created table
+
+ a. Enter the following command to ensure the data was inserted correctly
+
+ ```bash
+
+ SELECT COUNT(*) FROM item;
+
+ SELECT COUNT(*) FROM user;
+
+ SELECT COUNT(*) FROM data0;
+
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+ c. You should see the following resulting counts
+
+ ![row counts primary tables](./images/row-counts-primary-tables.png "row-counts-primary-tables ")
+
+9. Create two more data tables to be used by HeatWave AutoML
+
+    a. Enter the following commands at the prompt
+
+ ```bash
+ CREATE TABLE movies.data1 as select * from movies.data0;
+ INSERT INTO data1 (user_id, item_id, rating)
+ VALUES
+ (20, 23, 4),
+ (20, 5, 3),
+ (20, 546, 5),
+ (20, 920, 2),
+ (20, 63, 1),
+ (20, 755, 5),
+ (20, 885, 3),
+ (20, 91, 2),
+ (21, 768, 4),
+ (21, 119, 1),
+ (21, 168, 3),
+ (21, 434, 5),
+ (21, 247, 2),
+ (21, 1131, 2),
+ (21, 1002, 4);
+
+ CREATE TABLE movies.data2 as select * from movies.data1;
+ INSERT INTO data2 (user_id, item_id, rating)
+ VALUES
+ (20, 1432, 2),
+ (20, 543, 4),
+ (20, 1189, 1),
+ (21, 1653, 1),
+ (21, 814, 1),
+ (21, 1536, 1),
+ (150, 1293, 1),
+ (150, 2, 1),
+ (150, 7, 5),
+ (150, 160, 2),
+ (150, 34, 3),
+ (150, 333, 2),
+ (150, 555, 4),
+ (150, 777, 1),
+ (150, 888, 5);
+
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+10. Compare the number of rows in the data tables.
+
+ a. Enter the following command to compare the number of rows
+
+ ```bash
+
+ SELECT COUNT(*) FROM data0;
+
+ SELECT COUNT(*) FROM data1;
+
+ SELECT COUNT(*) FROM data2;
+
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+ c. You should see the following resulting counts
+
+ ![row counts data tables](./images/row-counts-data-tables.png "row-counts-data-tables ")
+
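The counts fit together by simple arithmetic: the MovieLens 100k data file holds 100,000 ratings, and each of the two INSERT statements above adds 15 rows. A small Python sketch, assuming the full u.data file was converted to data.sql in the previous lab:

```python
base = 100_000         # ratings in the MovieLens 100k data set
added_per_insert = 15  # rows in each INSERT ... VALUES list above

data0 = base
data1 = data0 + added_per_insert
data2 = data1 + added_per_insert
print(data0, data1, data2)
```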
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation ](https://docs.cloud.oracle.com/en-us/iaas/MySQL-database)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/heatwave-machine-learning.html)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
+
+- **Dataset** - F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets:
+History and Context. ACM Transactions on Interactive Intelligent
+Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages.
+DOI=http://dx.doi.org/10.1145/2827872
\ No newline at end of file
diff --git a/heatwave-movie-stream/add-data-mysql/images/bucket-detail.png b/heatwave-movie-stream/add-data-mysql/images/bucket-detail.png
new file mode 100644
index 000000000..1982e4aa4
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/bucket-detail.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/cloud-shell-connect.png b/heatwave-movie-stream/add-data-mysql/images/cloud-shell-connect.png
new file mode 100644
index 000000000..33d2758bf
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/cloud-shell-connect.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/cloud-storage-bucket.png b/heatwave-movie-stream/add-data-mysql/images/cloud-storage-bucket.png
new file mode 100644
index 000000000..721eae725
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/cloud-storage-bucket.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/data0-table-description.png b/heatwave-movie-stream/add-data-mysql/images/data0-table-description.png
new file mode 100644
index 000000000..de6a99de1
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/data0-table-description.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/db-list.png b/heatwave-movie-stream/add-data-mysql/images/db-list.png
new file mode 100644
index 000000000..4a3cd1444
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/db-list.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/lakehouse-create-table.png b/heatwave-movie-stream/add-data-mysql/images/lakehouse-create-table.png
new file mode 100644
index 000000000..828644050
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/lakehouse-create-table.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/list-schemas-after.png b/heatwave-movie-stream/add-data-mysql/images/list-schemas-after.png
new file mode 100644
index 000000000..897501565
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/list-schemas-after.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/list-schemas-first.png b/heatwave-movie-stream/add-data-mysql/images/list-schemas-first.png
new file mode 100644
index 000000000..0a720a9c8
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/list-schemas-first.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-lakehouse-enable.png b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-lakehouse-enable.png
new file mode 100644
index 000000000..156a91bfc
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-lakehouse-enable.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-load.png b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-load.png
new file mode 100644
index 000000000..f0b4e657f
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-load.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/mysql-shell-login.png b/heatwave-movie-stream/add-data-mysql/images/mysql-shell-login.png
new file mode 100644
index 000000000..86f70724c
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/mysql-shell-login.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/navigation-mysql-with-instance.png b/heatwave-movie-stream/add-data-mysql/images/navigation-mysql-with-instance.png
new file mode 100644
index 000000000..5fd2221ec
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/navigation-mysql-with-instance.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/primary-tables-create.png b/heatwave-movie-stream/add-data-mysql/images/primary-tables-create.png
new file mode 100644
index 000000000..30d1361ab
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/primary-tables-create.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/row-counts-data-tables.png b/heatwave-movie-stream/add-data-mysql/images/row-counts-data-tables.png
new file mode 100644
index 000000000..00ff90858
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/row-counts-data-tables.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/row-counts-primary-tables.png b/heatwave-movie-stream/add-data-mysql/images/row-counts-primary-tables.png
new file mode 100644
index 000000000..217a79df8
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/row-counts-primary-tables.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/source-sql-data-output.png b/heatwave-movie-stream/add-data-mysql/images/source-sql-data-output.png
new file mode 100644
index 000000000..4f12b948e
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/source-sql-data-output.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-bucket-menu.png b/heatwave-movie-stream/add-data-mysql/images/storage-bucket-menu.png
new file mode 100644
index 000000000..6de7222e6
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-bucket-menu.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page-copy.png b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page-copy.png
new file mode 100644
index 000000000..5be29d919
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page-copy.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page.png b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page.png
new file mode 100644
index 000000000..a43143c8e
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders-page.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders.png b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders.png
new file mode 100644
index 000000000..8796ff7e2
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-create-par-orders.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-list.png b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-list.png
new file mode 100644
index 000000000..5821f7a90
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-list.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-page.png b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-page.png
new file mode 100644
index 000000000..dace05ca6
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder-page.png differ
diff --git a/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder.png b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder.png
new file mode 100644
index 000000000..902ae95fd
Binary files /dev/null and b/heatwave-movie-stream/add-data-mysql/images/storage-delivery-orders-folder.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/apex-heatwave.md b/heatwave-movie-stream/apex-heatwave/apex-heatwave.md
new file mode 100644
index 000000000..427f71850
--- /dev/null
+++ b/heatwave-movie-stream/apex-heatwave/apex-heatwave.md
@@ -0,0 +1,331 @@
+# Create a Low Code Application with Oracle APEX and REST SERVICES for MySQL
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+The Oracle Database Development Tools team launched the Database Tools service in OCI, providing an in-browser way to create connections to the MySQL Database Service in OCI.
+
+Using APEX, developers can quickly develop and deploy compelling apps that solve real problems and provide immediate value. You don't need to be an expert in a vast array of technologies to deliver sophisticated solutions. Focus on solving the problem and let APEX take care of the rest. [https://apex.oracle.com/en/platform/why-oracle-apex/](https://apex.oracle.com/en/platform/why-oracle-apex/)
+
+**Tasks Support Guides**
+- [https://medium.com/oracledevs/get-insight-on-mysql-data-using-apex](https://medium.com/oracledevs/get-insight-on-mysql-data-using-apex-22-1-7fe613c76ca5)
+- [https://peterobrien.blog/2022/06/15/](https://peterobrien.blog/2022/06/15/)
+- [https://peterobrien.blog/2022/06/15/how-to-use-the-oracle-database-tools-service-to-provide-data-to-apex/](https://peterobrien.blog/2022/06/15/how-to-use-the-oracle-database-tools-service-to-provide-data-to-apex/)
+
+_Estimated Time:_ 30 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following task:
+
+- Setup Identity and Security tools and services
+- Configure a Private Connection
+- Create and configure an APEX Instance
+- Configure APEX Rest Service
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with OCI Console
+- Some Experience with Oracle Autonomous and Oracle APEX
+- Completed Lab 8
+
+## Task 1 Setup Identity & Security tools in OCI to Create a Secret
+
+1. From the OCI Menu, navigate to **Identity & Security** and click **Vault**
+
+ ![Identity & Security Vault](./images/OCI-menu-vault.png "OCI-menu-vault ")
+
+2. Create a Vault
+
+ a. Click **Create Vault**
+
+ ![Create Vault](./images/create-vault.png "create-vault ")
+
+ b. Select the movies compartment
+
+ c. Give the vault a name
+
+ ```bash
+ HW-DB
+ ```
+
+ d. Click **Create Vault**
+
+3. Create a Master Encryption Key
+
+ a. Click on the newly created Vault
+
+ b. Click **Create Key**
+
+ ![Create Master Encryption Key](./images/vault-menu-create-key.png "vault-menu-create-key ")
+
+ c. Select the movies compartment
+
+ d. Give the key a name
+
+ ```bash
+ HW-DB
+ ```
+
+ e. Leave the rest configurations in default values
+
+ ![Create Key Details](./images/create-key-details.png "create-key-details ")
+
+ f. Click **Create Key**
+
+4. Create a Secret
+
+ a. Click on **Secrets** to navigate to the secrets panel
+
+ ![Navigate to Secrets Panel](./images/navigate-secret-panel.png =60%x* "navigate-secret-panel ")
+
+ b. Click **Create Secret**
+
+ ![Create Secrets Panel](./images/create-secret-panel.png "create-secret-panel ")
+
+ c. Select the movies compartment
+
+ d. Give the secret a name
+
+ ```bash
+ HW-DB
+ ```
+
+ e. Select the created Encryption Key
+
+ f. In **Secret Contents**, write the password for the admin user created for your MySQL HeatWave DB System
+
+ ![Create Secrets details](./images/create-secret-details.png "create-secret-details ")
+
+ g. Leave the rest configurations in default values
+
+ h. Click **Create Secret**
+
+## Task 2 Configure a Private Connection
+
+1. From the OCI Menu, navigate to **Developer Services** and click **Connections**
+
+ ![Developer Services Connections](./images/oci-developer-services-menu-connections.png "oci-developer-services-menu-connections ")
+
+2. Create a Private Endpoint
+
+ a. Navigate to Private Endpoints and click **Create private endpoint**
+
+ ![Create Private Endpoint Panel](./images/create-private-endpoint.png "create-private-endpoint-panel ")
+
+ b. Give the Endpoint a name
+
+ ```bash
+ HW-MovieHub-endpoint
+ ```
+
+ c. Select the movies compartment
+
+ d. Select **Enter network information**
+
+ e. Select the **private subnet** from the movies compartment
+
+ ![Create Private Endpoint Details](./images/create-private-endpoint-details.png "create-private-endpoint-details ")
+
+ f. Click **Create**
+
+3. Create a Connection
+
+ a. Navigate to Connections and click **Create connection**
+
+ ![Create Connection Panel](./images/create-connection-panel.png "create-connection-panel ")
+
+ b. Give the Endpoint a name
+
+ ```bash
+ HW-MovieHub-Connection
+ ```
+
+ c. Select the movies compartment
+
+ d. Select **Select database** option
+
+ e. Select **MySQL Database** for Database cloud service
+
+ f. Introduce the MySQL DB System created administrator user
+
+ g. Select the created secret that contains the matching mysql password
+
+ ![Create Connection Details](./images/create-connection-details.png "create-connection-details ")
+
+ h. Click **Next** and **Create**
+
+## Task 3 Run SQL Worksheet
+
+1. From the OCI Menu, navigate to **Developer Services** and click **SQL Worksheet**
+
+ ![Developer Services SQL Worksheet](./images/OCI-developer-services-sql-worksheets.png "OCI-developer-services-sql-worksheets ")
+
+2. Select the movies compartment and the created **HW-MovieHub-Connection**
+
+3. You can run SQL queries in the SQL Worksheet.
+
+ a. List the schemas
+
+ ```bash
+ SHOW SCHEMAS;
+ ```
+
+4. Get the MySQL Connection Endpoint URL
+
+    The OCI services connect to the MySQL DB System through the created Connection. The Connection endpoint follows a URL pattern:
+
+ **Note** The pattern is `https://sql.dbtools.< region >.oci.oraclecloud.com/20201005/ords/< connection ocid >/_/sql`
+
+ **Example**
+
+    This URL can also be obtained from the network logs in the developer console of a web browser.
+
+ a. Open the Developer Console from your web browser. This can be done by right clicking on the page and clicking **inspect/inspect element**
+
+ ![inspect developer console](./images/inspect-developer-console.png =70%x* "inspect-developer-console ")
+
+    b. In the developer **console** tab, look for **dbtools-sqldev__LogEvent**. There you can click on the object to see its details
+
+ ![inspect connection object url](./images/inspect-url-connection-endpoint.png "inspect-url-connection-endpoint ")
+
+ c. The Endpoint URL will be visible. Right Click to Copy the Link
+
+ ![inspect copy object url](./images/inspect-copy-url.png "inspect-copy-url ")
+
+ d. Notice the pattern of the URL
+
+ e. Save the Endpoint URL for later
+
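The pattern in the note above is easy to assemble programmatically. A minimal Python sketch (the region and connection OCID below are placeholders for your own values):

```python
# Placeholder values; substitute your own region and connection OCID.
region = "us-ashburn-1"
connection_ocid = "ocid1.databasetoolsconnection.oc1..exampleuniqueid"

# URL pattern from the note above:
# https://sql.dbtools.<region>.oci.oraclecloud.com/20201005/ords/<connection ocid>/_/sql
endpoint = (
    f"https://sql.dbtools.{region}.oci.oraclecloud.com"
    f"/20201005/ords/{connection_ocid}/_/sql"
)
print(endpoint)
```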
+## Task 4 Create API Keys
+
+1. From the OCI Menu, navigate to **Identity & Security** and click **Domains**
+
+ ![oci identity security domains](./images/oci-identity-security-domains.png "oci-identity-security-domains ")
+
+2. Click on **Default** domain and navigate to **Users**
+
+ ![domains default user](./images/domains-default-user.png "domains-default-user ")
+
+3. Click on your current user
+
+ ![Create API Key](./images/user-panel-create-apikey.png "user-panel-create-apikey ")
+
+4. Click **Add API Key**
+
+5. Save the generated API Key Pair
+
+ ![Add API Key](./images/add-api-key.png "add-api-key ")
+
+6. Save the content of the Configuration file preview by **copying** it. Then click **Close**
+
+ ![Configuration file preview](./images/api-config-fingerprint.png "api-config-fingerprint ")
+
+    **Note the values for your user, tenancy, region, and fingerprint**
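+
+    The configuration file preview follows the standard OCI CLI config format. A sketch with placeholder values (replace every value with your own, and point `key_file` at the private key you saved):
+
+    ```bash
+    [DEFAULT]
+    user=ocid1.user.oc1..exampleuniqueid
+    fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
+    tenancy=ocid1.tenancy.oc1..exampleuniqueid
+    region=us-ashburn-1
+    key_file=<path to your private key file>
+    ```
+
+    The user OCID, tenancy OCID, and fingerprint from this file are the same values you will enter as APEX Web Credentials in Task 6.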
+
+## Task 5: Create and Configure an APEX Instance
+
+1. Create and launch APEX
+
+    a. Start the APEX deployment and continue through the setup screens
+    ![start apex deploy](./images/start-apex-deploy.png "start apex deploy ")
+    ![continue apex deploy](./images/continue-apex-deploy.png "continue apex deploy ")
+    b. Choose the movies compartment and set the APEX password
+    ![set apex password](./images/set-password-apex-deploy.png "set apex password")
+    ![completed apex deploy](./images/completed-apex-deploy.png "completed apex deploy")
+
+2. Create Workspace
+
+    a. Log in to APEX and create a new workspace
+    ![login apexd](./images/login-apexd.png "login apexd ")
+
+    ![create apex workspace](./images/create-apex-workspace.png "create apex workspace" )
+    b. Name the APEX workspace
+
+ ```bash
+ heatwave-movies
+ ```
+
+ c. Set an Admin user and password for the workspace
+ ![name apex workspace](./images/name-apex-workspace.png "name apex workspace")
+ d. Log out from APEX
+ ![apex logout](./images/apex-logout.png "apex logout")
+
+
+## Task 6: Create APEX Credentials
+
+1. Create Web Credentials
+
+    a. Log in to the APEX workspace
+ ![Log in APEX workspace](./images/log-in-apex-workspace.png "log-in-apex-workspace ")
+ b. Navigate to the Workspace Utilities from the App Builder Menu
+ ![Workspace Utilities](./images/apex-menu-workspace-utilities.png "apex-menu-workspace-utilities ")
+ c. Click on **Web Credentials**
+ ![workspace utilities web credential](./images/workspace-utilities-web-credentials.png "workspace-utilities-web-credentials ")
+
+ d. You can obtain the OCID values and fingerprint from the **Configuration File Preview** generated with the API Key or retrieve them from the OCI Console. Open the Private Key file in a text editor to copy the content.
+
+ | Attributes | Value |
+ | --------| -------:|
+ | Name | mysqlheatwave |
+ | Static ID | mysqlheatwave |
+    | Authentication Type | Oracle Cloud Infrastructure (OCI) |
+ | OCI User ID | **< YourUserOCID >** |
+ | OCI Private Key | **< ContentOfYourSavedPrivateKey >** |
+ | OCI Tenancy ID | **< YourTenancyOCID >** |
+ | OCI Public Key Fingerprint | **< YourPublicKeyFingerprint >** |
+ | Valid for URLs | **< EndpointURL >** |
+ {: title="Web Credentials \| Attributes"}
+
+    e. Click **Create** to create the new web credentials
+ ![apex web credentials ](./images/apex-web-credentials.png "apex web credentials")
+
+## Task 7: Create an APEX REST Service
+
+1. Navigate to the Workspace Utilities from the App Builder Menu
+ ![Workspace Utilities](./images/apex-menu-workspace-utilities.png "apex-menu-workspace-utilities ")
+
+2. Click on **REST Enabled SQL Services**
+ ![workspace utilities rest services](./images/workspace-utilities-rest-services.png "workspace-utilities-rest-services ")
+
+3. Click **Create**
+
+ ![Create rest services](./images/rest-service-create.png "rest-service-create ")
+
+4. Give the REST service a name
+
+ ```bash
+ MovieHub-moviesdb
+ ```
+
+5. For the Endpoint URL, enter the endpoint URL **without** the trailing "**`/_/sql`**" segment. Notice the help message.
+
+6. Click **Next**
+
+7. Select the previously created credentials. Click **Create**
+
+ ![rest services credentials](./images/rest-service-credentials.png "rest-service-credentials ")
+
+8. If the previous steps were performed correctly, you should see a successful connection. Select the default database **movies**
+
+ ![rest services successful connection](./images/rest-service-success.png "rest-service-success ")
+
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- How to use the Oracle Database Tools Service to provide MySQL data to APEX - [APEX and the MySQL Database Service](https://asktom.oracle.com/pls/apex/asktom.search?oh=18245)
+
+- [Oracle Autonomous Database Serverless Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/index.html#Oracle%C2%AE-Cloud)
+
+- [Using Web Services with Oracle APEX Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/apex-web-services.html#GUID-DA24C605-384D-4448-B73C-D00C02F5060E)
+
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
diff --git a/heatwave-movie-stream/apex-heatwave/images/OCI-developer-services-sql-worksheets.png b/heatwave-movie-stream/apex-heatwave/images/OCI-developer-services-sql-worksheets.png
new file mode 100644
index 000000000..641329fdd
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/OCI-developer-services-sql-worksheets.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/OCI-menu-vault.png b/heatwave-movie-stream/apex-heatwave/images/OCI-menu-vault.png
new file mode 100644
index 000000000..c0673b87a
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/OCI-menu-vault.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/add-api-key.png b/heatwave-movie-stream/apex-heatwave/images/add-api-key.png
new file mode 100644
index 000000000..ef8e8767a
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/add-api-key.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/apex-logout.png b/heatwave-movie-stream/apex-heatwave/images/apex-logout.png
new file mode 100644
index 000000000..cef03ca54
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/apex-logout.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/apex-menu-workspace-utilities.png b/heatwave-movie-stream/apex-heatwave/images/apex-menu-workspace-utilities.png
new file mode 100644
index 000000000..27f3198c7
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/apex-menu-workspace-utilities.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/apex-rest.png b/heatwave-movie-stream/apex-heatwave/images/apex-rest.png
new file mode 100644
index 000000000..ddd4872ee
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/apex-rest.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/apex-web-credentials.png b/heatwave-movie-stream/apex-heatwave/images/apex-web-credentials.png
new file mode 100644
index 000000000..538d2ce8a
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/apex-web-credentials.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/api-config-fingerprint.png b/heatwave-movie-stream/apex-heatwave/images/api-config-fingerprint.png
new file mode 100644
index 000000000..d75926c2e
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/api-config-fingerprint.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/completed-apex-deploy.png b/heatwave-movie-stream/apex-heatwave/images/completed-apex-deploy.png
new file mode 100644
index 000000000..74ab56640
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/completed-apex-deploy.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/continue-apex-deploy.png b/heatwave-movie-stream/apex-heatwave/images/continue-apex-deploy.png
new file mode 100644
index 000000000..9ad2950f9
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/continue-apex-deploy.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-apex-workspace.png b/heatwave-movie-stream/apex-heatwave/images/create-apex-workspace.png
new file mode 100644
index 000000000..ce4ae1b2a
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-apex-workspace.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-connection-details.png b/heatwave-movie-stream/apex-heatwave/images/create-connection-details.png
new file mode 100644
index 000000000..b89aa4229
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-connection-details.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-connection-panel.png b/heatwave-movie-stream/apex-heatwave/images/create-connection-panel.png
new file mode 100644
index 000000000..9fa69a91b
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-connection-panel.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-key-details.png b/heatwave-movie-stream/apex-heatwave/images/create-key-details.png
new file mode 100644
index 000000000..a268fc4ac
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-key-details.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint-details.png b/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint-details.png
new file mode 100644
index 000000000..e65fcfa97
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint-details.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint.png b/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint.png
new file mode 100644
index 000000000..c971f5f58
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-private-endpoint.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-secret-details.png b/heatwave-movie-stream/apex-heatwave/images/create-secret-details.png
new file mode 100644
index 000000000..7f4927fa8
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-secret-details.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-secret-panel.png b/heatwave-movie-stream/apex-heatwave/images/create-secret-panel.png
new file mode 100644
index 000000000..7e1ebf5e7
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-secret-panel.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/create-vault.png b/heatwave-movie-stream/apex-heatwave/images/create-vault.png
new file mode 100644
index 000000000..533ca111b
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/create-vault.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/domains-default-user.png b/heatwave-movie-stream/apex-heatwave/images/domains-default-user.png
new file mode 100644
index 000000000..ad1fda8a7
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/domains-default-user.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/get-mysql-endpoint.png b/heatwave-movie-stream/apex-heatwave/images/get-mysql-endpoint.png
new file mode 100644
index 000000000..c57e0d17f
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/get-mysql-endpoint.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/inspect-copy-url.png b/heatwave-movie-stream/apex-heatwave/images/inspect-copy-url.png
new file mode 100644
index 000000000..5d0e373a2
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/inspect-copy-url.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/inspect-developer-console.png b/heatwave-movie-stream/apex-heatwave/images/inspect-developer-console.png
new file mode 100644
index 000000000..aad8eef7e
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/inspect-developer-console.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/inspect-url-connection-endpoint.png b/heatwave-movie-stream/apex-heatwave/images/inspect-url-connection-endpoint.png
new file mode 100644
index 000000000..bbc81b89e
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/inspect-url-connection-endpoint.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/log-in-apex-workspace.png b/heatwave-movie-stream/apex-heatwave/images/log-in-apex-workspace.png
new file mode 100644
index 000000000..3ef3fa5ed
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/log-in-apex-workspace.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/login-apexd.png b/heatwave-movie-stream/apex-heatwave/images/login-apexd.png
new file mode 100644
index 000000000..98a824081
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/login-apexd.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/apex-heatwave/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/name-apex-workspace.png b/heatwave-movie-stream/apex-heatwave/images/name-apex-workspace.png
new file mode 100644
index 000000000..08560dbff
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/name-apex-workspace.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/navigate-secret-panel.png b/heatwave-movie-stream/apex-heatwave/images/navigate-secret-panel.png
new file mode 100644
index 000000000..3c26a2a37
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/navigate-secret-panel.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/oci-developer-services-menu-connections.png b/heatwave-movie-stream/apex-heatwave/images/oci-developer-services-menu-connections.png
new file mode 100644
index 000000000..9fc3b62ff
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/oci-developer-services-menu-connections.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/oci-identity-security-domains.png b/heatwave-movie-stream/apex-heatwave/images/oci-identity-security-domains.png
new file mode 100644
index 000000000..879ac6a4a
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/oci-identity-security-domains.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/rest-service-create.png b/heatwave-movie-stream/apex-heatwave/images/rest-service-create.png
new file mode 100644
index 000000000..b51933a43
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/rest-service-create.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/rest-service-credentials.png b/heatwave-movie-stream/apex-heatwave/images/rest-service-credentials.png
new file mode 100644
index 000000000..d99a6c645
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/rest-service-credentials.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/rest-service-success.png b/heatwave-movie-stream/apex-heatwave/images/rest-service-success.png
new file mode 100644
index 000000000..7bb742434
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/rest-service-success.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/set-password-apex-deploy.png b/heatwave-movie-stream/apex-heatwave/images/set-password-apex-deploy.png
new file mode 100644
index 000000000..b600fa29e
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/set-password-apex-deploy.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/sql-worksheet-page.png b/heatwave-movie-stream/apex-heatwave/images/sql-worksheet-page.png
new file mode 100644
index 000000000..9098eb980
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/sql-worksheet-page.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/start-apex-deploy.png b/heatwave-movie-stream/apex-heatwave/images/start-apex-deploy.png
new file mode 100644
index 000000000..6d69ff6eb
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/start-apex-deploy.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/user-panel-create-apikey.png b/heatwave-movie-stream/apex-heatwave/images/user-panel-create-apikey.png
new file mode 100644
index 000000000..49b91f5c3
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/user-panel-create-apikey.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/vault-menu-create-key.png b/heatwave-movie-stream/apex-heatwave/images/vault-menu-create-key.png
new file mode 100644
index 000000000..3445f7779
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/vault-menu-create-key.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-rest-services.png b/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-rest-services.png
new file mode 100644
index 000000000..dabcb95c1
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-rest-services.png differ
diff --git a/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-web-credentials.png b/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-web-credentials.png
new file mode 100644
index 000000000..dad6c7201
Binary files /dev/null and b/heatwave-movie-stream/apex-heatwave/images/workspace-utilities-web-credentials.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/app-configure-apex.md b/heatwave-movie-stream/app-configure-apex/app-configure-apex.md
new file mode 100644
index 000000000..22128c78a
--- /dev/null
+++ b/heatwave-movie-stream/app-configure-apex/app-configure-apex.md
@@ -0,0 +1,179 @@
+# Set Up the APEX Application and Workspace
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+MySQL HeatWave can easily be used for development tasks with existing Oracle services, such as Oracle APEX. In this lab, you will import the MovieHub sample application into your APEX workspace and configure it.
+
+
+_Estimated Lab Time:_ 10 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Download and import the sample application
+- Configure the newly imported application
+- Add users to the app
+- Configure the APEX Workspace
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Some Experience with Oracle Autonomous and Oracle APEX
+- Must Complete Lab 8
+- Must Complete Lab 9
+
+## Task 1: Download the sample application - MovieHub
+
+1. Download the MovieHub application template:
+
+    Click this link to **download the file** [MovieHub.zip](https://objectstorage.us-phoenix-1.oraclecloud.com/p/p_59Wj-TSoegriKLewLSQEC3T7IBIkCllrs5ztNZ5TDvtbLGSu2RR4pH6u8oQ8J6/n/idazzjlcjqzj/b/bucket-images/o/MovieHub_V2.zip) to your local machine
+
+## Task 2: Import the sample application - MovieHub
+
+1. Connect to your APEX workspace:
+
+    a. Log in to your APEX workspace
+
+    b. Go to App Builder
+
+ ![Connect to APEX , menu](./images/apex-workpace-menu.png "apex-workpace-menu ")
+
+ ![APEX App Builder](./images/apex-app-builder.png "apex-app-builder ")
+
+2. Import the MovieHub file
+
+ a. Click on Import
+
+ ![APEX Import](./images/apex-import-moviehub.png "apex-import-moviehub ")
+
+    b. Select the downloaded file **MovieHub.zip**. Click **Next** twice
+
+ c. Click **Install Application**
+
+ ![APEX Import Install](./images/apex-import-install-moviehub.png "apex-import-install-moviehub ")
+
+    d. Click **Edit Application** after the installation completes
+
+ ![MovieHub App Installed](./images/apex-app-installed.png "apex-app-installed ")
+
+## Task 3: Modify the REST Enabled SQL Endpoint for the App
+
+The imported app includes a broken REST Enabled SQL endpoint carried over from the export source, which must be updated
+
+1. Navigate to **REST Enabled SQL**
+
+ a. Navigate to the Workspace Utilities from the App Builder Menu
+
+ ![Workspace Utilities](./images/apex-menu-workspace-utilities.png "apex-menu-workspace-utilities ")
+
+ b. Click on **REST Enabled SQL Services**
+
+ ![workspace utilities rest services](./images/workspace-utilities-rest-services.png "workspace-utilities-rest-services ")
+
+    c. Select the imported endpoint, "hw-endpoint-rest"
+
+ ![RESTful services endpoints](./images/restful-services-endpoints-menu.png "restful-services-endpoints-menu ")
+
+2. You can delete the **REST** resource or edit it
+
+    a. Update the endpoint with your current endpoint (the connection endpoint that was previously created)
+
+    b. Update the credentials and select your previously created credentials
+
+    c. Make sure the correct default database is selected
+
+ ![Edit RESTful resource](./images/restful-resource-edit.png "restful-resource-edit ")
+
+## Task 4: Add Users to the App
+
+As this is an imported app, your current workspace user will not have administration access to it
+
+1. Register an Administrator account
+
+ a. Navigate to Shared Components
+
+ b. Go to Application Access Control
+
+    c. Click on Add User Role Assignment. Create a user 'ADMIN' and assign the **administrator role** to it. This administrator account will be referred to as the '**admin account**'
+
+ ![Add User Role Assignment for APEX user](./images/apex-add-role-assignment.png "apex-add-role-assignment ")
+
+2. Create a 'Public' Role Assignment to simulate the difference in application usage between an administrative account and a non-administrative user account.
+
+ a. Navigate to Shared Components
+
+ b. Go to Application Access Control
+
+    c. Click on Add User Role Assignment. Create a user role assignment with the **Contributor role** and **Reader role**. This non-administrative account will be referred to as the '**public account**'
+
+ ![Create User Role Assignment for APEX user](./images/apex-create-role-assignment-public.png "apex-create-role-assignment-public ")
+
+    d. You should now have two role assignments, as shown
+
+ ![Role Assignment list](./images/apex-role-assignments-list.png "apex-role-assignments-list ")
+
+3. Create a 'Public' account in the Administration - Users And Groups configuration
+
+    a. In the APEX workspace, click on the Administration tab
+
+ b. Navigate to **Manage Users and Groups**
+
+ ![Administration tab list](./images/administration-tab-list.png =60%x* "administration-tab-list ")
+
+ c. Click **Create User** with **username** 'public'
+
+ ![Create Public User Workspace](./images/public-create-user.png =80%x* "public-create-user ")
+
+ d. Add an email address
+
+ e. Set a password
+
+ f. Assign all group assignments to the user
+
+ ![Create Public User Workspace 2](./images/public-create-user2.png =80%x* "public-create-user2 ")
+
+ g. Click **Create User**
+
+## Task 5 (BONUS): Increase the Web Service Request Limit
+
+When using Web Services with Oracle Autonomous Database, there is a limit of 50,000 outbound web service requests per APEX workspace in a rolling 24-hour period. If the limit of outbound web service calls is reached, the following SQL exception is raised on the subsequent request and the request is blocked:
+ORA-20001: You have exceeded the maximum number of web service requests per workspace. Please contact your administrator.
+
+You may want to increase this limit if it is being reached.
+
+1. Navigate to your Autonomous Database in OCI Console
+
+2. Click on the dropdown menu Database Actions
+
+3. Select SQL
+
+ ![Autonomous actions menu](./images/autonomous-actions-menu-sql.png "autonomous-actions-menu-sql ")
+
+4. Run the query to increase the **MAX\_WEBSERVICE\_REQUESTS** limit
+
+    ```sql
+ BEGIN
+ APEX_INSTANCE_ADMIN.SET_PARAMETER('MAX_WEBSERVICE_REQUESTS', '250000');
+ COMMIT;
+ END;
+ /
+ ```
+
+ ![Increase Web services request limit Autonomous actions menu](./images/autonomous-sql-increase-limit.png "autonomous-sql-increase-limit ")
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Autonomous Database Serverless Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/index.html#Oracle%C2%AE-Cloud)
+- [Using Web Services with Oracle APEX Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/apex-web-services.html#GUID-DA24C605-384D-4448-B73C-D00C02F5060E)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
\ No newline at end of file
diff --git a/heatwave-movie-stream/app-configure-apex/images/administration-tab-list.png b/heatwave-movie-stream/app-configure-apex/images/administration-tab-list.png
new file mode 100644
index 000000000..ff7dd9e97
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/administration-tab-list.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/airport_web.png b/heatwave-movie-stream/app-configure-apex/images/airport_web.png
new file mode 100644
index 000000000..fbf68e06d
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/airport_web.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/airportdb-list.png b/heatwave-movie-stream/app-configure-apex/images/airportdb-list.png
new file mode 100644
index 000000000..3a5c136e1
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/airportdb-list.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-add-role-assignment.png b/heatwave-movie-stream/app-configure-apex/images/apex-add-role-assignment.png
new file mode 100644
index 000000000..387ebfddf
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-add-role-assignment.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-app-builder.png b/heatwave-movie-stream/app-configure-apex/images/apex-app-builder.png
new file mode 100644
index 000000000..b29bf7ace
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-app-builder.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-app-installed.png b/heatwave-movie-stream/app-configure-apex/images/apex-app-installed.png
new file mode 100644
index 000000000..5dde4f272
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-app-installed.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-create-role-assignment-public.png b/heatwave-movie-stream/app-configure-apex/images/apex-create-role-assignment-public.png
new file mode 100644
index 000000000..b72302793
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-create-role-assignment-public.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-import-install-moviehub.png b/heatwave-movie-stream/app-configure-apex/images/apex-import-install-moviehub.png
new file mode 100644
index 000000000..4e0de94f1
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-import-install-moviehub.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-import-moviehub.png b/heatwave-movie-stream/app-configure-apex/images/apex-import-moviehub.png
new file mode 100644
index 000000000..5ffade18f
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-import-moviehub.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-menu-workspace-utilities.png b/heatwave-movie-stream/app-configure-apex/images/apex-menu-workspace-utilities.png
new file mode 100644
index 000000000..27f3198c7
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-menu-workspace-utilities.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-role-assignments-list.png b/heatwave-movie-stream/app-configure-apex/images/apex-role-assignments-list.png
new file mode 100644
index 000000000..033867ad9
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-role-assignments-list.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-users-dashboard-list.png b/heatwave-movie-stream/app-configure-apex/images/apex-users-dashboard-list.png
new file mode 100644
index 000000000..8bc43920e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-users-dashboard-list.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/apex-workpace-menu.png b/heatwave-movie-stream/app-configure-apex/images/apex-workpace-menu.png
new file mode 100644
index 000000000..056c371f0
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/apex-workpace-menu.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/architecture-oac-heatwave.png b/heatwave-movie-stream/app-configure-apex/images/architecture-oac-heatwave.png
new file mode 100644
index 000000000..6bd011f25
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/architecture-oac-heatwave.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/autonomous-actions-menu-sql.png b/heatwave-movie-stream/app-configure-apex/images/autonomous-actions-menu-sql.png
new file mode 100644
index 000000000..a65548180
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/autonomous-actions-menu-sql.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/autonomous-sql-increase-limit.png b/heatwave-movie-stream/app-configure-apex/images/autonomous-sql-increase-limit.png
new file mode 100644
index 000000000..5c6c1d17e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/autonomous-sql-increase-limit.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open-large.png b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open-large.png
new file mode 100644
index 000000000..f257c72f1
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open-large.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open.png b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open.png
new file mode 100644
index 000000000..39bbdb27b
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-open.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/cloud-shell-setup.png b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-setup.png
new file mode 100644
index 000000000..02fb91563
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/cloud-shell-setup.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/cloud-shell.png b/heatwave-movie-stream/app-configure-apex/images/cloud-shell.png
new file mode 100644
index 000000000..07660235e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/cloud-shell.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/cloudshell-main.png b/heatwave-movie-stream/app-configure-apex/images/cloudshell-main.png
new file mode 100644
index 000000000..ffd53d616
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/cloudshell-main.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-active.png b/heatwave-movie-stream/app-configure-apex/images/compute-active.png
new file mode 100644
index 000000000..91b67ccb1
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-active.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-add-ssh-key.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-add-ssh-key.png
new file mode 100644
index 000000000..dcf34fd6f
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-add-ssh-key.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-boot-volume.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-boot-volume.png
new file mode 100644
index 000000000..1203f029e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-boot-volume.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-change-shape.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-change-shape.png
new file mode 100644
index 000000000..f0370b00b
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-change-shape.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-image.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-image.png
new file mode 100644
index 000000000..94e323a25
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-image.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-networking-select.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-networking-select.png
new file mode 100644
index 000000000..673dbac2e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-networking-select.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-networking.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-networking.png
new file mode 100644
index 000000000..96441f4f8
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-networking.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-security.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-security.png
new file mode 100644
index 000000000..1ad488ff6
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-security.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-create-select-shape.png b/heatwave-movie-stream/app-configure-apex/images/compute-create-select-shape.png
new file mode 100644
index 000000000..0aff3e9da
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-create-select-shape.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-menu-create-instance.png b/heatwave-movie-stream/app-configure-apex/images/compute-menu-create-instance.png
new file mode 100644
index 000000000..533124029
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-menu-create-instance.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/compute-provisioning.png b/heatwave-movie-stream/app-configure-apex/images/compute-provisioning.png
new file mode 100644
index 000000000..8f7988049
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/compute-provisioning.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/connect-first-signin.png b/heatwave-movie-stream/app-configure-apex/images/connect-first-signin.png
new file mode 100644
index 000000000..4af5b71ca
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/connect-first-signin.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/dbchart-copied.png b/heatwave-movie-stream/app-configure-apex/images/dbchart-copied.png
new file mode 100644
index 000000000..a6633eaf4
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/dbchart-copied.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/dbchart-open.png b/heatwave-movie-stream/app-configure-apex/images/dbchart-open.png
new file mode 100644
index 000000000..9546eeffb
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/dbchart-open.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/dbchart-select-all.png b/heatwave-movie-stream/app-configure-apex/images/dbchart-select-all.png
new file mode 100644
index 000000000..5c3bc4b5d
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/dbchart-select-all.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/heatwave-load-shell.png b/heatwave-movie-stream/app-configure-apex/images/heatwave-load-shell.png
new file mode 100644
index 000000000..33d2758bf
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-build-out.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-build-out.png
new file mode 100644
index 000000000..3b7930d89
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-build-out.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-data-execute.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-data-execute.png
new file mode 100644
index 000000000..2690219a0
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-data-execute.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-data.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-data.png
new file mode 100644
index 000000000..536e1dccb
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-data.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-out.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-out.png
new file mode 100644
index 000000000..680834593
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-out.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-table-out.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-table-out.png
new file mode 100644
index 000000000..612cfd79d
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-predict-table-out.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-ml-score-model-out.png b/heatwave-movie-stream/app-configure-apex/images/iris-ml-score-model-out.png
new file mode 100644
index 000000000..592ac043e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-ml-score-model-out.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/iris-web-php.png b/heatwave-movie-stream/app-configure-apex/images/iris-web-php.png
new file mode 100644
index 000000000..afde7c7c9
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/iris-web-php.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mydbchart-out.png b/heatwave-movie-stream/app-configure-apex/images/mydbchart-out.png
new file mode 100644
index 000000000..86815d3db
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mydbchart-out.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-create-in-progress.png b/heatwave-movie-stream/app-configure-apex/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..f5fada590
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-detail-active.png b/heatwave-movie-stream/app-configure-apex/images/mysql-detail-active.png
new file mode 100644
index 000000000..8b6f6bcdc
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-detail-ip.png b/heatwave-movie-stream/app-configure-apex/images/mysql-detail-ip.png
new file mode 100644
index 000000000..e22a147ca
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-detail-ip.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-endpoint-private-ip.png b/heatwave-movie-stream/app-configure-apex/images/mysql-endpoint-private-ip.png
new file mode 100644
index 000000000..d7a48c263
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-endpoint-private-ip.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo copy.jpg b/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo copy.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo copy.jpg differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-install-shell.png b/heatwave-movie-stream/app-configure-apex/images/mysql-install-shell.png
new file mode 100644
index 000000000..e9da02bab
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-install-shell.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-load-data.png b/heatwave-movie-stream/app-configure-apex/images/mysql-load-data.png
new file mode 100644
index 000000000..49a6b2e41
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-load-data.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/mysql-shell-first-connect.png b/heatwave-movie-stream/app-configure-apex/images/mysql-shell-first-connect.png
new file mode 100644
index 000000000..385608d9d
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/mysql-shell-first-connect.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/navigate-rest-enabled-sql.png b/heatwave-movie-stream/app-configure-apex/images/navigate-rest-enabled-sql.png
new file mode 100644
index 000000000..449e80666
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/navigate-rest-enabled-sql.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/navigation-compute-with-instance.png b/heatwave-movie-stream/app-configure-apex/images/navigation-compute-with-instance.png
new file mode 100644
index 000000000..1f584c8d2
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/navigation-compute-with-instance.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/navigation-compute.png b/heatwave-movie-stream/app-configure-apex/images/navigation-compute.png
new file mode 100644
index 000000000..2a68015f2
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/navigation-compute.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/navigation-mysql-with-instance.png b/heatwave-movie-stream/app-configure-apex/images/navigation-mysql-with-instance.png
new file mode 100644
index 000000000..654d9c7f7
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/navigation-mysql-with-instance.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key-compute-db.png b/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key-compute-db.png
new file mode 100644
index 000000000..7301c3770
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key-compute-db.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key.png b/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key.png
new file mode 100644
index 000000000..5631c3ee3
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/notepad-rsa-key.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/public-create-user.png b/heatwave-movie-stream/app-configure-apex/images/public-create-user.png
new file mode 100644
index 000000000..661378f1f
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/public-create-user.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/public-create-user2.png b/heatwave-movie-stream/app-configure-apex/images/public-create-user2.png
new file mode 100644
index 000000000..b7da07c3e
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/public-create-user2.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/restful-resource-edit.png b/heatwave-movie-stream/app-configure-apex/images/restful-resource-edit.png
new file mode 100644
index 000000000..2525e0626
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/restful-resource-edit.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/restful-services-endpoints-menu.png b/heatwave-movie-stream/app-configure-apex/images/restful-services-endpoints-menu.png
new file mode 100644
index 000000000..8ba2dd006
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/restful-services-endpoints-menu.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/shh-key-list.png b/heatwave-movie-stream/app-configure-apex/images/shh-key-list.png
new file mode 100644
index 000000000..7cab02ea6
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/shh-key-list.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/show-ml-data.png b/heatwave-movie-stream/app-configure-apex/images/show-ml-data.png
new file mode 100644
index 000000000..077b17826
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/show-ml-data.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/ssh-key-display-minimize.png b/heatwave-movie-stream/app-configure-apex/images/ssh-key-display-minimize.png
new file mode 100644
index 000000000..0ce418b59
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/ssh-key-display-minimize.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/ssh-key-display.png b/heatwave-movie-stream/app-configure-apex/images/ssh-key-display.png
new file mode 100644
index 000000000..33a54b68b
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/ssh-key-display.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/ssh-key-show.png b/heatwave-movie-stream/app-configure-apex/images/ssh-key-show.png
new file mode 100644
index 000000000..722879314
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/ssh-key-show.png differ
diff --git a/heatwave-movie-stream/app-configure-apex/images/workspace-utilities-rest-services.png b/heatwave-movie-stream/app-configure-apex/images/workspace-utilities-rest-services.png
new file mode 100644
index 000000000..dabcb95c1
Binary files /dev/null and b/heatwave-movie-stream/app-configure-apex/images/workspace-utilities-rest-services.png differ
diff --git a/heatwave-movie-stream/create-automl/create-automl.md b/heatwave-movie-stream/create-automl/create-automl.md
new file mode 100644
index 000000000..b9d4b4e73
--- /dev/null
+++ b/heatwave-movie-stream/create-automl/create-automl.md
@@ -0,0 +1,246 @@
+# Create and test HeatWave AutoML Recommender System
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+HeatWave ML makes it easy to use machine learning, whether you are a novice user or an experienced ML practitioner. You provide the data, and HeatWave AutoML analyzes the characteristics of the data and creates an optimized machine learning model that you can use to generate predictions and explanations. An ML model makes predictions by identifying patterns in your data and applying those patterns to unseen data. HeatWave ML explanations help you understand how predictions are made, such as which features of a dataset contribute most to a prediction.
+
+In this lab, you will work with the movies data, which is stored in the MySQL HeatWave database in the following schema and tables:
+
+**movies:** The schema containing training and test dataset tables.
+
+**data0:** The training dataset for the ML model movies\_model\_1. Includes feature columns (user\_id, item\_id, rating) where 'rating' is the target column.
+
+**data1:** The training dataset for the ML model movies\_model\_2. Includes feature columns (user\_id, item\_id, rating) where 'rating' is the target column.
+
+**data2:** The training dataset for the ML model movies\_model\_3. Includes feature columns (user\_id, item\_id, rating) where 'rating' is the target column.
+
+The three tables differ in the number of rows, to simulate how the trained model behaves after users "watch 15 and 30 more movies".
+
+_Estimated Time:_ 20 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Load Movies Data into HeatWave
+- Train ML models
+- Test the models with ML\_PREDICT\_ROW
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Completed Lab 5
+
+## Task 1: Connect with MySQL Shell
+
+1. Go to Cloud shell to SSH into the new Compute Instance
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. On the command line, connect to MySQL using the MySQL Shell client tool with the following command:
+
+ ```bash
+ mysqlsh -uadmin -p -h 10.... -P3306 --sql
+ ```
+
+ ![Connect](./images/heatwave-load-shell.png "heatwave-load-shell ")
+
+## Task 2: Review the data tables
+
+1. To review the data in your tables before training the model:
+
+ a. Enter the following command at the prompt
+
+ ```bash
+ USE movies;
+ ```
+
+ b. View the content of one of your Machine Learning tables (data0)
+
+ ```bash
+ DESC data0;
+ ```
+
+ ```bash
+ SELECT * FROM data0 LIMIT 5;
+ ```
+
+ ![data0 table description detail](./images/data0-table-description.png "data0-table-description ")
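+
+    c. Optionally (not part of the original lab steps), compare the row counts of the three tables to see the size difference described in the introduction:
+
+    ```bash
+    SELECT 'data0' AS table_name, COUNT(*) AS row_count FROM data0
+    UNION ALL SELECT 'data1', COUNT(*) FROM data1
+    UNION ALL SELECT 'data2', COUNT(*) FROM data2;
+    ```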
+
+## Task 3: Load the movie database to HeatWave Cluster
+
+1. Load the movie tables into the HeatWave cluster memory:
+
+ ```bash
+ CALL sys.heatwave_load(JSON_ARRAY('movies'), NULL);
+ ```
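+
+2. Optionally, confirm that the tables were loaded into the HeatWave Cluster. The following Performance Schema query is a sketch based on the standard HeatWave monitoring tables; adjust it if your release differs:
+
+    ```bash
+    SELECT ti.NAME, rt.LOAD_STATUS
+    FROM performance_schema.rpd_tables rt
+    JOIN performance_schema.rpd_table_id ti ON rt.ID = ti.ID
+    WHERE ti.SCHEMA_NAME = 'movies';
+    ```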
+
+## Task 4: Train one ML model with the original data table: data0
+
+1. Train the model using ML_TRAIN. Since this is a recommendation dataset, the recommendation task is specified to create a recommendation model:
+
+ a. Train the first recommendation model
+
+ ```bash
+ CALL sys.ML_TRAIN('movies.data0','rating',JSON_OBJECT('task','recommendation','items','item_id','users','user_id'), @movies_model_1);
+ ```
+
+ b. When the training operation finishes, the model handle is assigned to the @movies\_model\_1 session variable, and the model is stored in your model catalog. You can view the entry in your model catalog using the following query, where '**admin**' in ML\_SCHEMA\_admin.MODEL\_CATALOG is your MySQL account name:
+
+ ```bash
+ SELECT model_id, model_handle, train_table_name FROM ML_SCHEMA_admin.MODEL_CATALOG;
+ ```
+
+    c. The output should look like this, containing the list of your trained models:
+
+ ![model trained 1, model catalog](./images/model-trained-model-catalog-1.png "model-1-trained-model-catalog ")
+
+2. Load the model into HeatWave ML using the ML\_MODEL\_LOAD routine:
+
+    a. Reset the model handle variable:
+
+ ```bash
+ SET @movies_model_1=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1);
+ ```
+
+ b. A model must be loaded before you can use it. The model remains loaded until you unload it or the HeatWave Cluster is restarted.
+
+ ```bash
+    SELECT @movies_model_1;
+ ```
+
+ ```bash
+ CALL sys.ML_MODEL_LOAD(@movies_model_1, NULL);
+ ```
+
+3. Test the model to predict the TOP 3 items recommended for a given user.
+
+    a. Predict a ROW with movies\_model\_1: the top 3 recommended items for the user '600'
+
+ ```bash
+
+ SELECT sys.ML_PREDICT_ROW('{"user_id":"600"}',@movies_model_1,JSON_OBJECT('recommend','items','topk',3));
+ ```
+
+    b. The trained models will NOT be identical, so the resulting predictions are expected to differ from this example. The output should look like this:
+
+ ![ml model 1 predict row for user 600](./images/ml-model1-predict-row-user600.png "ml-model1-predict-row-user 600 ")
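+
+    c. ML\_PREDICT\_ROW returns a JSON document. If you want the output to be easier to read, you can wrap the call in the built-in JSON\_PRETTY function:
+
+    ```bash
+    SELECT JSON_PRETTY(sys.ML_PREDICT_ROW('{"user_id":"600"}',@movies_model_1,JSON_OBJECT('recommend','items','topk',3)));
+    ```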
+
+## Task 5: Train two more ML models with the remaining data tables: data1 and data2
+
+1. Train the ML models:
+
+ a. Train the two remaining models. Hit **ENTER** to execute the last command
+
+ ```bash
+
+ CALL sys.ML_TRAIN('movies.data1','rating',JSON_OBJECT('task','recommendation','items','item_id','users','user_id'), @movies_model_2);
+
+ CALL sys.ML_TRAIN('movies.data2','rating',JSON_OBJECT('task','recommendation','items','item_id','users','user_id'), @movies_model_3);
+ ```
+
+    b. Make sure the model handle variables are set correctly for every model. Hit **ENTER** to execute the last command
+
+ ```bash
+
+ SET @movies_model_1=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 2);
+
+ SET @movies_model_2=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 1);
+
+ SET @movies_model_3=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 0);
+
+ ```
+
+ c. You can view the MODEL\_CATALOG table again with the new models
+
+ ```bash
+ SELECT model_id, model_handle, train_table_name FROM ML_SCHEMA_admin.MODEL_CATALOG;
+ ```
+
+ Output should look like this:
+
+ ![models trained 3, model catalog](./images/models-trained-model-catalog-3.png "models-trained-model-catalog 3 ")
+
+    d. Compare the model\_handle values with the variable values. There must be a matching variable for every model
+
+ ```bash
+
+ SELECT @movies_model_1;
+ SELECT @movies_model_2;
+ SELECT @movies_model_3;
+ ```
+
+ e. Hit **ENTER** to execute the last command
+
+ f. Load every model in memory before using them
+
+ ```bash
+
+ CALL sys.ML_MODEL_LOAD(@movies_model_1, NULL);
+ CALL sys.ML_MODEL_LOAD(@movies_model_2, NULL);
+ CALL sys.ML_MODEL_LOAD(@movies_model_3, NULL);
+ ```
+
+ g. Hit **ENTER** to execute the last command
+
+## Task 6: Predict individual ROWS with the different trained models
+
+1. Test the model to predict the TOP 3 items recommended for a given user.
+
+ a. Predict a ROW with each of the 3 models. Top 3 recommended items for the user '600'
+
+ ```bash
+
+ SELECT sys.ML_PREDICT_ROW('{"user_id":"600"}',@movies_model_1,JSON_OBJECT('recommend','items','topk',3));
+ SELECT sys.ML_PREDICT_ROW('{"user_id":"600"}',@movies_model_2,JSON_OBJECT('recommend','items','topk',3));
+ SELECT sys.ML_PREDICT_ROW('{"user_id":"600"}',@movies_model_3,JSON_OBJECT('recommend','items','topk',3));
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+    c. The trained models will NOT be identical, so the resulting predictions are expected to differ from this example. The output should look like this:
+
+ ![ml models 3 predict row for user 600](./images/ml-models3-predict-row-user600.png "ml-models3-predict-row-user 600 ")
+
+2. Test the model to predict the TOP 8 recommended users for different given items.
+
+    a. Predict multiple ROWS with a single model: the top 8 recommended users for the items '100', '200', '300'
+
+ ```bash
+
+ SELECT sys.ML_PREDICT_ROW('{"item_id":"100"}',@movies_model_1,JSON_OBJECT('recommend','users','topk',8));
+ SELECT sys.ML_PREDICT_ROW('{"item_id":"200"}',@movies_model_1,JSON_OBJECT('recommend','users','topk',8));
+ SELECT sys.ML_PREDICT_ROW('{"item_id":"300"}',@movies_model_1,JSON_OBJECT('recommend','users','topk',8));
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+    c. The trained models will NOT be identical, so the resulting predictions are expected to differ from this example. The output should look like this:
+
+ ![ml model predict rows for different items](./images/ml-model-predict-row-items-users.png "ml-model-predict-row-items-users ")
+
+
+To avoid consuming too much space, it is good practice to unload a model when you are finished using it.
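+
+For example, the models trained in this lab can be unloaded with the ML\_MODEL\_UNLOAD routine, using the same handle variables as above:
+
+```bash
+CALL sys.ML_MODEL_UNLOAD(@movies_model_1);
+CALL sys.ML_MODEL_UNLOAD(@movies_model_2);
+CALL sys.ML_MODEL_UNLOAD(@movies_model_3);
+```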
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.oracle.com/en-us/iaas/mysql-database/index.html)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/mys-hwaml-machine-learning.html)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
diff --git a/heatwave-movie-stream/create-automl/images/12dbdelete.png b/heatwave-movie-stream/create-automl/images/12dbdelete.png
new file mode 100644
index 000000000..63e42df36
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/12dbdelete.png differ
diff --git a/heatwave-movie-stream/create-automl/images/12dbdetail.png b/heatwave-movie-stream/create-automl/images/12dbdetail.png
new file mode 100644
index 000000000..ca3769ec2
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/12dbdetail.png differ
diff --git a/heatwave-movie-stream/create-automl/images/12dbmoreactions.png b/heatwave-movie-stream/create-automl/images/12dbmoreactions.png
new file mode 100644
index 000000000..d2387ebe8
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/12dbmoreactions.png differ
diff --git a/heatwave-movie-stream/create-automl/images/12main.png b/heatwave-movie-stream/create-automl/images/12main.png
new file mode 100644
index 000000000..b4a1fb01e
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/12main.png differ
diff --git a/heatwave-movie-stream/create-automl/images/data0-table-description.png b/heatwave-movie-stream/create-automl/images/data0-table-description.png
new file mode 100644
index 000000000..de6a99de1
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/data0-table-description.png differ
diff --git a/heatwave-movie-stream/create-automl/images/heatwave-load-shell.png b/heatwave-movie-stream/create-automl/images/heatwave-load-shell.png
new file mode 100644
index 000000000..04d2ac581
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-build-out.png b/heatwave-movie-stream/create-automl/images/iris-ml-build-out.png
new file mode 100644
index 000000000..3b7930d89
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-build-out.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-data-execute.png b/heatwave-movie-stream/create-automl/images/iris-ml-data-execute.png
new file mode 100644
index 000000000..2690219a0
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-data-execute.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-data.png b/heatwave-movie-stream/create-automl/images/iris-ml-data.png
new file mode 100644
index 000000000..536e1dccb
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-data.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-predict-out.png b/heatwave-movie-stream/create-automl/images/iris-ml-predict-out.png
new file mode 100644
index 000000000..680834593
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-predict-out.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-predict-table-out.png b/heatwave-movie-stream/create-automl/images/iris-ml-predict-table-out.png
new file mode 100644
index 000000000..612cfd79d
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-predict-table-out.png differ
diff --git a/heatwave-movie-stream/create-automl/images/iris-ml-score-model-out.png b/heatwave-movie-stream/create-automl/images/iris-ml-score-model-out.png
new file mode 100644
index 000000000..592ac043e
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/iris-ml-score-model-out.png differ
diff --git a/heatwave-movie-stream/create-automl/images/ml-model-predict-row-items-users.png b/heatwave-movie-stream/create-automl/images/ml-model-predict-row-items-users.png
new file mode 100644
index 000000000..7109a56c6
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/ml-model-predict-row-items-users.png differ
diff --git a/heatwave-movie-stream/create-automl/images/ml-model1-predict-row-user600.png b/heatwave-movie-stream/create-automl/images/ml-model1-predict-row-user600.png
new file mode 100644
index 000000000..061b66111
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/ml-model1-predict-row-user600.png differ
diff --git a/heatwave-movie-stream/create-automl/images/ml-models3-predict-row-user600.png b/heatwave-movie-stream/create-automl/images/ml-models3-predict-row-user600.png
new file mode 100644
index 000000000..102605759
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/ml-models3-predict-row-user600.png differ
diff --git a/heatwave-movie-stream/create-automl/images/model-trained-model-catalog-1.png b/heatwave-movie-stream/create-automl/images/model-trained-model-catalog-1.png
new file mode 100644
index 000000000..2e3441ed4
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/model-trained-model-catalog-1.png differ
diff --git a/heatwave-movie-stream/create-automl/images/models-trained-model-catalog-3.png b/heatwave-movie-stream/create-automl/images/models-trained-model-catalog-3.png
new file mode 100644
index 000000000..127a95c05
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/models-trained-model-catalog-3.png differ
diff --git a/heatwave-movie-stream/create-automl/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/create-automl/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/create-automl/images/show-ml-data.png b/heatwave-movie-stream/create-automl/images/show-ml-data.png
new file mode 100644
index 000000000..077b17826
Binary files /dev/null and b/heatwave-movie-stream/create-automl/images/show-ml-data.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/create-bastion-with-python.md b/heatwave-movie-stream/create-bastion-with-python/create-bastion-with-python.md
new file mode 100644
index 000000000..021374cbf
--- /dev/null
+++ b/heatwave-movie-stream/create-bastion-with-python/create-bastion-with-python.md
@@ -0,0 +1,236 @@
+# Create Bastion Server for MySQL Data
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+When working in the cloud, there are often times when your servers and services are not exposed to the public internet. MySQL HeatWave on OCI is an example of a service that is only accessible through private networks. Since the service is fully managed, we keep it siloed away from the internet to help protect your data from potential attacks and vulnerabilities. It's a good practice to limit resource exposure as much as possible, but at some point you'll likely want to connect to those resources. That's where a compute instance acting as a bastion host enters the picture. A bastion host sits between the private resource and the endpoint that requires access to the private network, and acts as a "jump box" that lets you log in to the private resource through protocols like SSH. This bastion host requires a Virtual Cloud Network and a compute instance to connect with the MySQL DB System.
+
+You will also install Python with the Pandas module, and MySQL Shell, on this bastion compute instance. It will be used as a development server to download, transform, and import data into MySQL HeatWave. New applications built on other software stacks can also run on this bastion and connect to your MySQL HeatWave system.
+
+_Estimated Lab Time:_ 15 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Create SSH Key on OCI Cloud
+- Create Bastion Compute Instance
+- Install MySQL Shell on the Compute Instance
+- Connect to MySQL Database System
+- Install Python and Pandas Module
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Completed Lab 2, with the DB System in the **Active** state
+
+## Task 1: Create SSH Key on OCI Cloud Shell
+
+The Cloud Shell machine is a small virtual machine running a Bash shell which you access through the Oracle Cloud Console (Homepage). You will start the Cloud Shell and generate an SSH key to use for the Bastion session.
+
+1. To start the Oracle Cloud Shell, go to your Cloud console and click the Cloud Shell icon at the top right of the page. This opens the Cloud Shell in the browser; the first time you use it, it may take a few moments to provision.
+
+
+ ![cloud shell main](./images/cloud-shell.png "cloud shell main " )
+
+ ![cloud shell button](./images/cloud-shell-setup.png "cloud shell button " )
+
+ ![open cloud shell](./images/cloud-shell-open.png "open cloud shell" )
+
+ _Note: You can use the icons in the upper right corner of the Cloud Shell window to minimize, maximize, restart, and close your Cloud Shell session._
+
+2. Once the cloud shell has started, create the SSH Key using the following command:
+
+ ```bash
+ ssh-keygen -t rsa
+ ```
+
+ Press enter for each question.
+
+ Here is what it should look like.
+
+ ![ssh key](./images/ssh-key-show.png "ssh key show")
+
+3. The keys are stored in the ~/.ssh directory: the private key in ~/.ssh/id_rsa and the public key in ~/.ssh/id_rsa.pub.
+
+4. Examine the two files that you just created.
+
+ ```bash
+ cd .ssh
+ ```
+
+ ```bash
+ ls
+ ```
+
+    ![ssh key list](./images/shh-key-list.png "ssh key list")
+
+ Note in the output there are two files, a *private key:* `id_rsa` and a *public key:* `id_rsa.pub`. Keep the private key safe and don't share its content with anyone. The public key will be needed for various activities and can be uploaded to certain systems as well as copied and pasted to facilitate secure communications in the cloud.
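As a quick sanity check, the same key-generation can be run non-interactively and inspected. This is a hedged sketch against a throwaway path, /tmp/demo_key (a hypothetical location chosen so it does not touch your real ~/.ssh keys):

```shell
# remove any leftover demo key so ssh-keygen does not prompt to overwrite
rm -f /tmp/demo_key /tmp/demo_key.pub

# generate a throwaway RSA key pair without prompts (-N "" sets an empty passphrase)
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/demo_key -q

# list both files: the private key and its .pub counterpart
ls -l /tmp/demo_key /tmp/demo_key.pub

# print the fingerprint of the public key
ssh-keygen -lf /tmp/demo_key.pub
```

The `-lf` fingerprint is a convenient way to confirm which public key you pasted into the console later on.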
+
+## Task 2: Create Compute instance
+
+You will need a compute instance to connect to your new MySQL DB System.
+
+1. Before creating the compute instance, open a notepad (or any text editor)
+
+2. Do the following steps to copy the public SSH key to the notepad
+
+ Open the Cloud shell
+ ![open cloud shell large](./images/cloud-shell-open-large.png "open cloud shell large ")
+
+ Enter the following command
+
+ ```bash
+ cat ~/.ssh/id_rsa.pub
+ ```
+
+ ![ssh key display](./images/ssh-key-display.png "ssh key display ")
+
+3. Copy the id_rsa.pub content to the notepad
+
+ Your notepad should look like this
+ ![show ssh key](./images/notepad-rsa-key.png "show ssh key")
+
+4. Minimize cloud shell
+
+ ![minimize cloud shell](./images/ssh-key-display-minimize.png "minimize cloud shell")
+
+5. To launch a Linux Compute instance, go to
+ Navigation Menu
+ Compute
+ Instances
+ ![navigation compute](./images/navigation-compute.png "navigation compute")
+
+6. On Instances in **(movies)** Compartment, click **Create Instance**
+ ![compute menu create instance](./images/compute-menu-create-instance.png "compute menu create instance ")
+
+7. On Create Compute Instance
+
+ Enter Name
+
+ ```bash
+ HEATWAVE-Client
+ ```
+
+8. Make sure **(movies)** compartment is selected
+
+9. On Placement, keep the selected Availability Domain
+
+10. On Security, keep the default
+
+ - Shielded instance: Disabled
+ - Confidential computing:Disabled
+
+ ![compute create security](./images/compute-create-security.png "compute create security ")
+
+11. On Image, keep the selected image, Oracle Linux 8, and click Edit
+
+ ![compute create image](./images/compute-create-image.png "compute create image ")
+
+12. Click Change Shape
+
+ ![compute create change shape](./images/compute-create-change-shape.png "compute create change shape")
+
+13. Select Instance Shape: VM.Standard.E2.2
+
+ ![compute create select shape](./images/compute-create-select-shape.png "compute create select shape")
+
+14. On Networking, click Edit
+
+ ![compute create networking](./images/compute-create-networking.png "compute create networking ")
+
+15. Make sure **HEATWAVE-VCN** and **public subnet-HEATWAVE-VCN** are selected. Keep the Public IPv4 address default (**Assign..**)
+
+ ![compute create networking](./images/compute-create-networking-select.png "compute create networking ")
+
+16. On Add SSH keys, paste the public key from the notepad.
+
+ ![compute create add ssh key](./images/compute-create-add-ssh-key.png "compute create add ssh key ")
+
+17. Keep Boot Volume default and Click **Create** button to finish creating your Compute Instance.
+
+    ![compute create boot volume](./images/compute-create-boot-volume.png "compute create boot volume")
+
+18. The new Virtual Machine will be ready to use after a few minutes. The state is shown as 'Provisioning' during creation.
+ ![compute provisioning](./images/compute-provisioning.png "compute provisioning ")
+
+19. The state 'Running' indicates that the Virtual Machine is ready to use.
+
+ ![compute active](./images/compute-active.png "compute active")
+
+## Task 3: Connect to Bastion Compute and Install MySQL Shell
+
+1. Copy the public IP address of the active Compute Instance to your notepad
+
+ - Go to Navigation Menu
+ - Compute
+ - Instances
+ - Copy **Public IP**
+ ![navigation compute with instance](./images/navigation-compute-with-instance.png "navigation compute with instance ")
+
+2. Go to Cloud shell to SSH into the new Compute Instance
+
+ Enter the username **opc** and the Public **IP Address**.
+
+    Note: Use the **HEATWAVE-Client** Public IP Address you copied in Task 3, Step 1
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+ For the **Are you sure you want to continue connecting (yes/no)?**
+ - answer **yes**
+
+ ![connect signin](./images/connect-first-signin.png "connect signin ")
+
+3. You will need a MySQL client tool to connect to your new MySQL HeatWave System from the bastion.
+
+ Install MySQL Shell with the following command (enter y for each question)
+
+ **[opc@…]$**
+
+ ```bash
+ sudo yum install mysql-shell -y
+ ```
+
+ ![mysql shell install](./images/mysql-install-shell.png "mysql shell install ")
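With MySQL Shell installed, you will later connect using a URI of the form `user@host:port`. A minimal sketch of assembling that command, assuming a hypothetical admin user and a hypothetical private IP (substitute the values you saved to your notepad; nothing here actually connects):

```shell
# hypothetical values - replace with your DB System admin user and private IP
DB_USER=admin
DB_HOST=10.0.1.10

# print the MySQL Shell command you would run from this bastion
echo "mysqlsh ${DB_USER}@${DB_HOST}:3306 --sql"
```

Port 3306 is the classic MySQL protocol port opened in the VCN security list; MySQL Shell also supports the X Protocol on 33060.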
+
+## Task 4: Install Python and Pandas
+
+1. Install Python and Pandas
+
+ a. Install Python
+
+ ```bash
+ sudo yum install python3
+ ```
+
+ ![python3 install](./images/python3-install.png "python3 install ")
+ b. Install Pandas
+
+ ```bash
+ sudo pip3 install pandas
+ ```
+
+ ![python3 pandas install](./images/pandas-python-install.png "python3 pandas install ")
+ c. Test Python is working
+
+ ```bash
+ python3
+ ```
+
+    d. Exit Python with **exit()** or **Ctrl + D**
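With both installed, a quick smoke test confirms pandas can load data. This sketch uses a throwaway CSV at /tmp/movies_sample.csv (a hypothetical path and sample data, not part of the workshop dataset):

```shell
# create a tiny sample CSV
printf 'id,title\n1,Movie A\n2,Movie B\n' > /tmp/movies_sample.csv

# load it with pandas and print the row count (prints 2)
python3 -c "import pandas as pd; df = pd.read_csv('/tmp/movies_sample.csv'); print(len(df))"
```

If this prints 2, the bastion is ready for the download-and-transform steps in the later labs.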
+
+
+You may now **proceed to the next lab**
+
+## Acknowledgements
+
+- **Author** - Perside Foster, MySQL Principal Solution Engineering
+- **Contributors** - Mandy Pang, MySQL Principal Product Manager, Nick Mader, MySQL Global Channel Enablement & Strategy Manager, Cristian Aguilar, MySQL Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
\ No newline at end of file
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/airport_web.png b/heatwave-movie-stream/create-bastion-with-python/images/airport_web.png
new file mode 100644
index 000000000..fbf68e06d
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/airport_web.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/airportdb-list.png b/heatwave-movie-stream/create-bastion-with-python/images/airportdb-list.png
new file mode 100644
index 000000000..3a5c136e1
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/airportdb-list.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/architecture-oac-heatwave.png b/heatwave-movie-stream/create-bastion-with-python/images/architecture-oac-heatwave.png
new file mode 100644
index 000000000..6bd011f25
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/architecture-oac-heatwave.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open-large.png b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open-large.png
new file mode 100644
index 000000000..f257c72f1
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open-large.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open.png b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open.png
new file mode 100644
index 000000000..39bbdb27b
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-open.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-setup.png b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-setup.png
new file mode 100644
index 000000000..02fb91563
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell-setup.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell.png b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell.png
new file mode 100644
index 000000000..07660235e
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/cloud-shell.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/cloudshell-main.png b/heatwave-movie-stream/create-bastion-with-python/images/cloudshell-main.png
new file mode 100644
index 000000000..ffd53d616
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/cloudshell-main.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-active.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-active.png
new file mode 100644
index 000000000..91b67ccb1
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-active.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-add-ssh-key.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-add-ssh-key.png
new file mode 100644
index 000000000..dcf34fd6f
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-add-ssh-key.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-boot-volume.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-boot-volume.png
new file mode 100644
index 000000000..1203f029e
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-boot-volume.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-change-shape.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-change-shape.png
new file mode 100644
index 000000000..f0370b00b
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-change-shape.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-image.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-image.png
new file mode 100644
index 000000000..94e323a25
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-image.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking-select.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking-select.png
new file mode 100644
index 000000000..673dbac2e
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking-select.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking.png
new file mode 100644
index 000000000..96441f4f8
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-networking.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-security.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-security.png
new file mode 100644
index 000000000..a51d3b4f9
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-security.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-create-select-shape.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-select-shape.png
new file mode 100644
index 000000000..0aff3e9da
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-create-select-shape.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-menu-create-instance.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-menu-create-instance.png
new file mode 100644
index 000000000..dc44ffb5d
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-menu-create-instance.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/compute-provisioning.png b/heatwave-movie-stream/create-bastion-with-python/images/compute-provisioning.png
new file mode 100644
index 000000000..8f7988049
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/compute-provisioning.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/conn-php.png b/heatwave-movie-stream/create-bastion-with-python/images/conn-php.png
new file mode 100644
index 000000000..6f37744c0
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/conn-php.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/connect-first-signin.png b/heatwave-movie-stream/create-bastion-with-python/images/connect-first-signin.png
new file mode 100644
index 000000000..4af5b71ca
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/connect-first-signin.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/dbchart-copied.png b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-copied.png
new file mode 100644
index 000000000..a6633eaf4
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-copied.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/dbchart-open.png b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-open.png
new file mode 100644
index 000000000..9546eeffb
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-open.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/dbchart-select-all.png b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-select-all.png
new file mode 100644
index 000000000..5c3bc4b5d
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/dbchart-select-all.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/heatwave-load-shell.png b/heatwave-movie-stream/create-bastion-with-python/images/heatwave-load-shell.png
new file mode 100644
index 000000000..33d2758bf
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-build-out.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-build-out.png
new file mode 100644
index 000000000..3b7930d89
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-build-out.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data-execute.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data-execute.png
new file mode 100644
index 000000000..2690219a0
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data-execute.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data.png
new file mode 100644
index 000000000..536e1dccb
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-data.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-out.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-out.png
new file mode 100644
index 000000000..680834593
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-out.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-table-out.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-table-out.png
new file mode 100644
index 000000000..612cfd79d
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-predict-table-out.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-score-model-out.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-score-model-out.png
new file mode 100644
index 000000000..592ac043e
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-ml-score-model-out.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/iris-web-php.png b/heatwave-movie-stream/create-bastion-with-python/images/iris-web-php.png
new file mode 100644
index 000000000..afde7c7c9
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/iris-web-php.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/list-oltp-files.png b/heatwave-movie-stream/create-bastion-with-python/images/list-oltp-files.png
new file mode 100644
index 000000000..9b6d9ba70
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/list-oltp-files.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mydbchart-out.png b/heatwave-movie-stream/create-bastion-with-python/images/mydbchart-out.png
new file mode 100644
index 000000000..86815d3db
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mydbchart-out.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-create-in-progress.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..f5fada590
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-active.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-active.png
new file mode 100644
index 000000000..dab9bc249
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-endpoint.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-endpoint.png
new file mode 100644
index 000000000..1db6bebdb
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-endpoint.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-ip.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-ip.png
new file mode 100644
index 000000000..e22a147ca
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-detail-ip.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-endpoint-private-ip.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-endpoint-private-ip.png
new file mode 100644
index 000000000..d7a48c263
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-endpoint-private-ip.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo copy.jpg b/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo copy.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo copy.jpg differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-install-shell.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-install-shell.png
new file mode 100644
index 000000000..e9da02bab
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-install-shell.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-load-data.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-load-data.png
new file mode 100644
index 000000000..49a6b2e41
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-load-data.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/mysql-shell-first-connect.png b/heatwave-movie-stream/create-bastion-with-python/images/mysql-shell-first-connect.png
new file mode 100644
index 000000000..385608d9d
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/mysql-shell-first-connect.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute-with-instance.png b/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute-with-instance.png
new file mode 100644
index 000000000..1f584c8d2
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute-with-instance.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute.png b/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute.png
new file mode 100644
index 000000000..2a68015f2
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/navigation-compute.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/navigation-mysql-with-instance.png b/heatwave-movie-stream/create-bastion-with-python/images/navigation-mysql-with-instance.png
new file mode 100644
index 000000000..66212b8cf
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/navigation-mysql-with-instance.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key-compute-db.png b/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key-compute-db.png
new file mode 100644
index 000000000..7301c3770
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key-compute-db.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key.png b/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key.png
new file mode 100644
index 000000000..5631c3ee3
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/notepad-rsa-key.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/oltp_airport.png b/heatwave-movie-stream/create-bastion-with-python/images/oltp_airport.png
new file mode 100644
index 000000000..41e739dbc
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/oltp_airport.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/pandas-python-install.png b/heatwave-movie-stream/create-bastion-with-python/images/pandas-python-install.png
new file mode 100644
index 000000000..b0f365dbc
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/pandas-python-install.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/python3-install.png b/heatwave-movie-stream/create-bastion-with-python/images/python3-install.png
new file mode 100644
index 000000000..291da9c69
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/python3-install.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/shh-key-list.png b/heatwave-movie-stream/create-bastion-with-python/images/shh-key-list.png
new file mode 100644
index 000000000..7cab02ea6
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/shh-key-list.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/show-ml-data.png b/heatwave-movie-stream/create-bastion-with-python/images/show-ml-data.png
new file mode 100644
index 000000000..077b17826
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/show-ml-data.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display-minimize.png b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display-minimize.png
new file mode 100644
index 000000000..0ce418b59
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display-minimize.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display.png b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display.png
new file mode 100644
index 000000000..33a54b68b
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-display.png differ
diff --git a/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-show.png b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-show.png
new file mode 100644
index 000000000..722879314
Binary files /dev/null and b/heatwave-movie-stream/create-bastion-with-python/images/ssh-key-show.png differ
diff --git a/heatwave-movie-stream/create-db/create-db.md b/heatwave-movie-stream/create-db/create-db.md
new file mode 100644
index 000000000..a2cb55f36
--- /dev/null
+++ b/heatwave-movie-stream/create-db/create-db.md
@@ -0,0 +1,280 @@
+# Create MySQL HeatWave Database System
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+In this lab, you will create and configure a MySQL HeatWave DB System.
+
+_Estimated Time:_ 15 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Create Compartment
+- Create Virtual Cloud Network
+- Create MySQL HeatWave (DB System) Instance
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+
+## Task 1: Create Compartment
+
+1. Click the **Navigation Menu** in the upper left, navigate to **Identity & Security** and select **Compartments**.
+
+2. On the Compartments page, click **Create Compartment**.
+
+3. In the Create Compartment dialog box, complete the following fields:
+
+ Name:
+
+ ```bash
+ movies
+ ```
+
+ Description:
+
+ ```bash
+ Compartment for MovieHub APP powered by MySQL HeatWave Database Service
+ ```
+
+4. The **Parent Compartment** should be your root compartment. Click **Create Compartment**
+ ![VCN](./images/compartment-create.png "create the compartment")
+
+
+## Task 2: Create Virtual Cloud Network
+
+1. You should be signed in to Oracle Cloud!
+
+ Click **Navigation Menu**,
+
+ ![OCI Console Home Page](./images/homepage.png " home page")
+
+2. Click **Networking**, then **Virtual Cloud Networks**
+ ![menu vcn](./images/home-menu-networking-vcn.png "home menu networking vcn ")
+
+ Select the **movies** compartment
+
+3. Click **Start VCN Wizard**
+ ![vcn start wizard](./images/vcn-wizard-menu.png "vcn wizard menu")
+
+4. Select 'Create VCN with Internet Connectivity'
+
+ Click 'Start VCN Wizard'
+ ![vcn wizard start create](./images/vcn-wizard-start.png "start vcn wizard start")
+
+5. Create a VCN with Internet Connectivity
+
+ On Basic Information, complete the following fields:
+
+ VCN Name:
+
+ ```bash
+ HEATWAVE-VCN
+ ```
+
+ Compartment: Select **movies**
+
+ Your screen should look similar to the following
+ ![select compartment](./images/vcn-wizard-compartment.png "select compartment")
+
+6. Click 'Next' at the bottom of the screen
+
+7. Review Oracle Virtual Cloud Network (VCN), Subnets, and Gateways
+
+ Click 'Create' to create the VCN
+ ![create vcn](./images/vcn-wizard-create.png "create vcn")
+
+8. When the Virtual Cloud Network creation completes, click 'View Virtual Cloud Network' to display the created VCN
+ ![vcn creation completing](./images/vcn-wizard-view.png "vcn creation completing")
+
+## Task 3: Configure security list to allow MySQL incoming connections
+
+1. On HEATWAVE-VCN page under 'Subnets in **movies** Compartment', click '**Private Subnet-HEATWAVE-VCN**'
+ ![vcn subnet](./images/vcn-details-subnet.png "vcn details subnet")
+
+2. On Private Subnet-HEATWAVE-VCN page under 'Security Lists', click '**Security List for Private Subnet-HEATWAVE-VCN**'
+ ![vcn private security list](./images/vcn-private-security-list.png "vcn private security list")
+
+3. On Security List for Private Subnet-HEATWAVE-VCN page under 'Ingress Rules', click '**Add Ingress Rules**'
+ ![vcn private subnet](./images/vcn-private-security-list-ingress.png "vcn private security list ingress")
+
+4. On Add Ingress Rules page under Ingress Rule 1
+
+ a. Add an Ingress Rule with Source CIDR
+
+ ```bash
+ 0.0.0.0/0
+ ```
+
+ b. Destination Port Range
+
+ ```bash
+ 3306,33060
+ ```
+
+ c. Description
+
+ ```bash
+ MySQL Port Access
+ ```
+
+ d. Click 'Add Ingress Rule'
+    ![add ingress rule](./images/vcn-private-security-list-ingress-rules-mysql.png "vcn private security list ingress rules mysql")
+
+5. On Security List for Private Subnet-HEATWAVE-VCN page, the new Ingress Rules will be shown under the Ingress Rules List
+ ![show ingres rule](./images/vcn-private-security-list-ingress-display.png "vcn private security list ingress display")
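Note that the 0.0.0.0/0 source CIDR used above matches every IPv4 address, so any client that can route to the subnet may attempt a connection on ports 3306 and 33060; a narrower CIDR would restrict that. A quick check with the Python standard library (using an arbitrary example address) illustrates the difference:

```shell
# 0.0.0.0/0 contains every IPv4 address; 10.0.0.0/16 does not contain this one
# prints True, then False
python3 -c "
import ipaddress
print(ipaddress.ip_address('203.0.113.7') in ipaddress.ip_network('0.0.0.0/0'))
print(ipaddress.ip_address('203.0.113.7') in ipaddress.ip_network('10.0.0.0/16'))
"
```

For a lab environment the broad rule is convenient; in production you would typically scope the source CIDR to the bastion subnet.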
+
+## Task 4: Configure security list to allow HTTP incoming connections
+
+1. Navigation Menu > Networking > Virtual Cloud Networks
+
+2. Open HEATWAVE-VCN
+
+3. Click public subnet-HEATWAVE-VCN
+
+4. Click Default Security List for HEATWAVE-VCN
+
+5. On the Add Ingress Rules page, under Ingress Rule 1
+
+ Add an Ingress Rule with Source CIDR
+
+ ```bash
+ 0.0.0.0/0
+ ```
+
+ Destination Port Range
+
+ ```bash
+ 80,443
+ ```
+
+ Description
+
+ ```bash
+ Allow HTTP connections
+ ```
+
+6. Click 'Add Ingress Rule'
+
+ ![Add HTTP Ingress Rule](./images/vcn-ttp-add-ingress.png "Add HTTP Ingress Rule")
+
+7. On Security List for Default Security List for HEATWAVE-VCN page, the new Ingress Rules will be shown under the Ingress Rules List
+
+ ![View VCN Completed HTTP Ingress rules](./images/vcn-ttp-ingress-completed.png "View VCN Completed HTTP Ingress rules")
+
+## Task 5: Create MySQL Database for HeatWave (DB System) instance
+
+1. Click on Navigation Menu
+ Databases
+ MySQL
+ ![home menu mysq](./images/home-menu-database-mysql.png "home menu mysql")
+
+2. Click 'Create DB System'
+ ![mysql create button](./images/mysql-menu.png " mysql create button")
+
+3. Complete the Create MySQL DB System dialog by filling in the fields in each section:
+
+ - Provide DB System information
+ - Setup the DB system
+ - Create Administrator credentials
+ - Configure Networking
+ - Configure placement
+ - Configure hardware
+ - Exclude Backups
+ - Set up Advanced Options
+
+4. For the DB System option, select **Development or Testing**
+
+ ![heatwave db option](./images/mysql-create-option-develpment.png "heatwave db option")
+
+5. Provide basic information for the DB System:
+
+ a. Select Compartment **movies**
+
+ b. Enter Name
+
+ ```bash
+ HW-MovieHub
+ ```
+
+ c. Enter Description
+
+ ```bash
+ MySQL HeatWave Database Instance
+ ```
+
+ d. Select **Standalone** and enable **Configure MySQL HeatWave**
+ ![heatwave db info setup](./images/mysql-create-info-setup.png "heatwave db info setup ")
+
+6. Create Administrator Credentials
+
+ **Enter Username** (write username to notepad for later use)
+
+ **Enter Password** (write password to notepad for later use)
+
+    **Confirm Password** (must match the password entered above)
+
+ ![heatwave db admin](./images/mysql-create-admin.png "heatwave db admin ")
+
+7. On Configure networking, keep the default values
+
+ a. Virtual Cloud Network: **HEATWAVE-VCN**
+
+ b. Subnet: **Private Subnet-HEATWAVE-VCN (Regional)**
+
+ c. On Configure placement under 'Availability Domain'
+
+ Select AD-1 ... Do not check 'Choose a Fault Domain' for this DB System.
+
+ ![heatwave db network ad](./images/mysql-create-network-ad.png "heatwave db network ad ")
+
+8. On Configure hardware
+ - a. Click the **Change shape** button to select the **MySQL.HeatWave.VM.Standard** shape.
+ - b. For Data Storage Size (GB) Set value to: **1024**
+
+ ![heatwave db hardware](./images/mysql-create-db-hardware.png "heatwave db hardware ")
+
+9. On Configure Backups, disable 'Enable Automatic Backup'
+
+ ![heatwave db backup](./images/mysql-create-backup.png " heatwave db backup")
+
+10. Click on Show Advanced Options
+
+11. Go to the Connections tab and, in the Hostname field, enter the following (same as the DB System name):
+
+ ```bash
+ HW-MovieHub
+ ```
+
+ ![heatwave db advanced](./images/mysql-create-advanced.png "heatwave db advanced ")
+
+12. Review **Create MySQL DB System** Screen
+
+ ![heatwave db create](./images/mysql-create.png "heatwave db create ")
+
+ Click the '**Create**' button
+
+13. The New MySQL DB System will be ready to use after a few minutes
+
+    The state will be shown as 'Creating' during the creation
+    ![show creation state](./images/mysql-create-in-progress.png "show creation state")
+
+14. The state 'Active' indicates that the DB System is ready for use
+
+ ![show active state](./images/mysql-detail-active.png "show active state")
+
+15. On the HW-MovieHub page, select the **Connections** tab and save the MySQL Endpoint (Private IP address) to notepad for later use.
+
+ ![heatwave endpoint](./images/mysql-detail-endpoint.png "heatwave endpoint")
+
+You may now **proceed to the next lab**
+
+## Acknowledgements
+
+- **Author** - Perside Foster, MySQL Principal Solution Engineering
+- **Contributors** - Mandy Pang, MySQL Principal Product Manager, Nick Mader, MySQL Global Channel Enablement & Strategy Manager
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
\ No newline at end of file
diff --git a/heatwave-movie-stream/create-db/images/compartment-create.png b/heatwave-movie-stream/create-db/images/compartment-create.png
new file mode 100644
index 000000000..d6d3e0e33
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/compartment-create.png differ
diff --git a/heatwave-movie-stream/create-db/images/home-menu-database-mysql.png b/heatwave-movie-stream/create-db/images/home-menu-database-mysql.png
new file mode 100644
index 000000000..56a4cbf99
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/home-menu-database-mysql.png differ
diff --git a/heatwave-movie-stream/create-db/images/home-menu-networking-vcn.png b/heatwave-movie-stream/create-db/images/home-menu-networking-vcn.png
new file mode 100644
index 000000000..fd0f97a7d
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/home-menu-networking-vcn.png differ
diff --git a/heatwave-movie-stream/create-db/images/homepage.png b/heatwave-movie-stream/create-db/images/homepage.png
new file mode 100644
index 000000000..ed605fe19
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/homepage.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-admin.png b/heatwave-movie-stream/create-db/images/mysql-create-admin.png
new file mode 100644
index 000000000..7c02e570e
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-admin.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-advanced.png b/heatwave-movie-stream/create-db/images/mysql-create-advanced.png
new file mode 100644
index 000000000..1c65cd823
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-advanced.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-backup.png b/heatwave-movie-stream/create-db/images/mysql-create-backup.png
new file mode 100644
index 000000000..369948421
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-backup.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-db-hardware.png b/heatwave-movie-stream/create-db/images/mysql-create-db-hardware.png
new file mode 100644
index 000000000..7709b02cf
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-db-hardware.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-in-progress.png b/heatwave-movie-stream/create-db/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..5ba73c1f9
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-info-setup.png b/heatwave-movie-stream/create-db/images/mysql-create-info-setup.png
new file mode 100644
index 000000000..80e48395f
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-info-setup.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-network-ad.png b/heatwave-movie-stream/create-db/images/mysql-create-network-ad.png
new file mode 100644
index 000000000..22a65ec14
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-network-ad.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create-option-develpment.png b/heatwave-movie-stream/create-db/images/mysql-create-option-develpment.png
new file mode 100644
index 000000000..d361a4c66
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create-option-develpment.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-create.png b/heatwave-movie-stream/create-db/images/mysql-create.png
new file mode 100644
index 000000000..749fa52bf
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-create.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-detail-active.png b/heatwave-movie-stream/create-db/images/mysql-detail-active.png
new file mode 100644
index 000000000..92fee9ea3
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-detail-endpoint.png b/heatwave-movie-stream/create-db/images/mysql-detail-endpoint.png
new file mode 100644
index 000000000..4533dd0fe
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-detail-endpoint.png differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/create-db/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/create-db/images/mysql-menu.png b/heatwave-movie-stream/create-db/images/mysql-menu.png
new file mode 100644
index 000000000..1da2bd1d8
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/mysql-menu.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-details-subnet.png b/heatwave-movie-stream/create-db/images/vcn-details-subnet.png
new file mode 100644
index 000000000..969912bac
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-details-subnet.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-menu-compartmen-turbo.png b/heatwave-movie-stream/create-db/images/vcn-menu-compartmen-turbo.png
new file mode 100644
index 000000000..6ca34cd34
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-menu-compartmen-turbo.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-display.png b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-display.png
new file mode 100644
index 000000000..d5b3fef83
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-display.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-rules-mysql.png b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-rules-mysql.png
new file mode 100644
index 000000000..069ff7833
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress-rules-mysql.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress.png b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress.png
new file mode 100644
index 000000000..0eae8ff0e
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-private-security-list-ingress.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-private-security-list.png b/heatwave-movie-stream/create-db/images/vcn-private-security-list.png
new file mode 100644
index 000000000..c07a28de4
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-private-security-list.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-ttp-add-ingress.png b/heatwave-movie-stream/create-db/images/vcn-ttp-add-ingress.png
new file mode 100644
index 000000000..e8b331ed2
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-ttp-add-ingress.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-ttp-ingress-completed.png b/heatwave-movie-stream/create-db/images/vcn-ttp-ingress-completed.png
new file mode 100644
index 000000000..ae4400bc4
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-ttp-ingress-completed.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-wizard-compartment.png b/heatwave-movie-stream/create-db/images/vcn-wizard-compartment.png
new file mode 100644
index 000000000..fe7ce15c6
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-wizard-compartment.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-wizard-create.png b/heatwave-movie-stream/create-db/images/vcn-wizard-create.png
new file mode 100644
index 000000000..276ddd660
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-wizard-create.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-wizard-menu.png b/heatwave-movie-stream/create-db/images/vcn-wizard-menu.png
new file mode 100644
index 000000000..f738965d3
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-wizard-menu.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-wizard-start.png b/heatwave-movie-stream/create-db/images/vcn-wizard-start.png
new file mode 100644
index 000000000..3f451c74f
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-wizard-start.png differ
diff --git a/heatwave-movie-stream/create-db/images/vcn-wizard-view.png b/heatwave-movie-stream/create-db/images/vcn-wizard-view.png
new file mode 100644
index 000000000..1014cbc93
Binary files /dev/null and b/heatwave-movie-stream/create-db/images/vcn-wizard-view.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/create-movie-tables.md b/heatwave-movie-stream/create-movie-tables/create-movie-tables.md
new file mode 100644
index 000000000..9ac645016
--- /dev/null
+++ b/heatwave-movie-stream/create-movie-tables/create-movie-tables.md
@@ -0,0 +1,332 @@
+# Create the base Movies Database Tables for the Movie App
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+In this lab, you will create the additional tables needed to train the MySQL HeatWave AutoML models, along with the tables needed to generate predictions from those models with the **ML\_PREDICT\_TABLE** AutoML function. These tables also allow Oracle APEX to consume the data easily through RESTful Services for the MySQL HeatWave Database Service.
+
+_Estimated Time:_ 20 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Create the supporting tables for the predictions tables
+- Create the predictions tables
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Completed Lab 6
+
+## Task 1: Connect with MySQL Shell
+
+1. Go to Cloud shell to SSH into the new Compute Instance
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. On the command line, connect to MySQL using the MySQL Shell client tool with the following command:
+
+ ```bash
+ mysqlsh -uadmin -p -h 10.... -P3306 --sql
+ ```
+
+ ![Connect](./images/heatwave-load-shell.png "heatwave-load-shell ")
+
+## Task 2: Create the supporting tables to generate the predictions for two selected users
+
+1. Create the supporting tables to generate the USER-ITEM predictions with the trained models:
+
+    Enter the following commands at the prompt.
+
+    a. Make sure you are in the movies schema
+
+ ```bash
+ USE movies;
+ ```
+
+    b. Create the supporting tables. For each of the users '20', '21', and a 'new' user, one table is created per training dataset (data0, data1, data2), nine tables in total.
+
+ Enter the following command at the prompt. **Click on Reveal code block**
+
+
+ **_Reveal code block_**
+ ```bash
+
+ CREATE TABLE user_20_0r AS
+ SELECT
+ CAST(20 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data0 d
+ ON
+ d.user_id = 20
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_21_0r AS
+ SELECT
+ CAST(21 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data0 d
+ ON
+ d.user_id = 21
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_new_0r AS SELECT CAST(1000 AS CHAR(5)) AS user_id, CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i;
+
+ CREATE TABLE user_20_15r AS
+ SELECT
+ CAST(20 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data1 d
+ ON
+ d.user_id = 20
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_21_15r AS
+ SELECT
+ CAST(21 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data1 d
+ ON
+ d.user_id = 21
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_new_15r AS SELECT CAST(1000 AS CHAR(5)) AS user_id, CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i;
+
+ CREATE TABLE user_20_30r AS
+ SELECT
+ CAST(20 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data2 d
+ ON
+ d.user_id = 20
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_21_30r AS
+ SELECT
+ CAST(21 AS CHAR(5)) AS user_id,
+ CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i
+ LEFT JOIN data2 d
+ ON
+ d.user_id = 21
+ AND d.item_id = i.item_id
+ WHERE d.user_id IS NULL;
+
+ CREATE TABLE user_new_30r AS SELECT CAST(1000 AS CHAR(5)) AS user_id, CAST(i.item_id AS CHAR(7)) AS item_id
+ FROM item i;
+
+ ```
+
+
+ c. Hit **ENTER** to execute the last command
+
+ d. Notice the difference in the number of created records for the tables
+
+    The number of records varies from table to table because each table contains only the items that the corresponding model was NOT trained on for that user.
+
+    This mirrors what the ML\_PREDICT\_ROW function does for a single row; here we materialize the results in tables so the Oracle APEX app can consume them easily.
+
+ ![user supporting tables row counts](./images/user-supporting-tables-row-counts.png "user-supporting-tables-row-counts ")
+
+    e. List the tables created so far
+
+ ```bash
+ USE movies;
+ SHOW TABLES;
+
+ ```
+
+ ![user supporting tables list](./images/user-supporting-tables-list.png "user-supporting-tables-list ")
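+
+    f. Optionally, compare the row counts directly. This is a quick sanity check (a sketch using the table names created in this task; the counts should differ between the three tables for the same user):
+
+    ```bash
+    SELECT 'user_20_0r' AS tbl, COUNT(*) AS row_count FROM user_20_0r
+    UNION ALL
+    SELECT 'user_20_15r', COUNT(*) FROM user_20_15r
+    UNION ALL
+    SELECT 'user_20_30r', COUNT(*) FROM user_20_30r;
+    ```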
+
+## Task 3: Create the supporting tables to generate the predictions for two selected items
+
+1. Create the supporting tables to generate the ITEM-USER predictions with the trained models:
+
+ ```bash
+ CREATE TABLE item_200 AS
+ SELECT
+ CAST(200 AS CHAR(7)) AS item_id,
+ CAST(i.user_id AS CHAR(5)) AS user_id
+ FROM
+ user i
+ LEFT JOIN
+ data0 d
+ ON
+ d.item_id = 200
+ AND d.user_id = i.user_id
+ WHERE
+ d.item_id IS NULL;
+
+ CREATE TABLE item_453 AS
+ SELECT
+ CAST(453 AS CHAR(7)) AS item_id,
+ CAST(i.user_id AS CHAR(5)) AS user_id
+ FROM
+ user i
+ LEFT JOIN
+ data0 d
+ ON
+ d.item_id = 453
+ AND d.user_id = i.user_id
+ WHERE
+ d.item_id IS NULL;
+ ```
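+
+2. Optionally, verify the new tables. Each should contain only the users whose rating for that item was not part of the training data (a quick check, sketched with the table names created above):
+
+    ```bash
+    SELECT 'item_200' AS tbl, COUNT(*) AS row_count FROM item_200
+    UNION ALL
+    SELECT 'item_453', COUNT(*) FROM item_453;
+    ```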
+
+## Task 4: Generate the USER-ITEM and ITEM-USER Prediction Tables
+
+1. Load the trained ML models into memory if they are not already loaded
+
+ a. Set the model handle variables for every model
+
+ ```bash
+
+ SET @movies_model_1=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 2);
+
+ SET @movies_model_2=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 1);
+
+ SET @movies_model_3=(SELECT model_handle FROM ML_SCHEMA_admin.MODEL_CATALOG ORDER BY model_id DESC LIMIT 1 OFFSET 0);
+
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+ c. Load every model in memory before using them
+
+ ```bash
+
+ CALL sys.ML_MODEL_LOAD(@movies_model_1, NULL);
+ CALL sys.ML_MODEL_LOAD(@movies_model_2, NULL);
+ CALL sys.ML_MODEL_LOAD(@movies_model_3, NULL);
+ ```
+
+ d. Hit **ENTER** to execute the last command
+
+2. Generate the USER-ITEM table predictions with the trained models:
+
+ a. Use the function ML\_PREDICT\_TABLE to generate the USER-ITEM tables.
+
+
+ ```bash
+
+ call sys.ML_PREDICT_TABLE('movies.user_20_0r',@movies_model_1,'movies.pred_user_20_0r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_20_15r',@movies_model_2,'movies.pred_user_20_15r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_20_30r',@movies_model_3,'movies.pred_user_20_30r',NULL);
+ ```
+
+ Hit **ENTER** to execute the last command
+
+ ```bash
+
+ call sys.ML_PREDICT_TABLE('movies.user_21_0r',@movies_model_1,'movies.pred_user_21_0r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_21_15r',@movies_model_2,'movies.pred_user_21_15r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_21_30r',@movies_model_3,'movies.pred_user_21_30r',NULL);
+ ```
+
+ Hit **ENTER** to execute the last command
+
+ ```bash
+
+ call sys.ML_PREDICT_TABLE('movies.user_new_0r',@movies_model_1,'movies.pred_user_new_0r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_new_15r',@movies_model_2,'movies.pred_user_new_15r',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.user_new_30r',@movies_model_3,'movies.pred_user_new_30r',NULL);
+ ```
+
+ Hit **ENTER** to execute the last command
+
+    b. List the tables created so far
+
+ ```bash
+ USE movies;
+ SHOW TABLES;
+
+ ```
+
+ ![user prediction tables list](./images/user-prediction-tables-list.png "user-prediction-tables-list ")
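+
+    c. Optionally, preview the highest-scoring predictions for user 20. This is a sketch; the name of the predicted-rating column produced by ML\_PREDICT\_TABLE can vary by HeatWave version, so adjust `prediction` if your output table uses a different column name:
+
+    ```bash
+    SELECT user_id, item_id, prediction
+    FROM pred_user_20_0r
+    ORDER BY prediction DESC
+    LIMIT 10;
+    ```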
+
+3. Generate the ITEM-USER table predictions with the trained models:
+
+    a. Use the ML\_PREDICT\_TABLE function to generate the ITEM-USER tables:
+
+ ```bash
+
+ call sys.ML_PREDICT_TABLE('movies.item_200',@movies_model_1,'movies.pred_item_200',NULL);
+
+ call sys.ML_PREDICT_TABLE('movies.item_453',@movies_model_1,'movies.pred_item_453',NULL);
+ ```
+
+ b. Hit **ENTER** to execute the last command
+
+## Task 5: Create the supporting media tables
+
+1. Create the item media table:
+
+ ```bash
+ CREATE TABLE item_media AS SELECT item_id AS `image_id`, title AS `mov_title` FROM item;
+ ALTER TABLE item_media ADD COLUMN url_down varchar(255) DEFAULT NULL, ADD COLUMN legend varchar(40) DEFAULT NULL, MODIFY image_id int NOT NULL, ADD PRIMARY KEY (image_id);
+ ```
+
+2. Create the profiles media table:
+
+ ```bash
+ CREATE TABLE `profiles` (
+ `user` varchar(10) DEFAULT NULL,
+ `name` varchar(20) DEFAULT NULL,
+ `media` varchar(255) DEFAULT NULL,
+ `legend` varchar(40) NOT NULL DEFAULT ' ');
+ ```
+
+ ```bash
+ INSERT INTO profiles (user,name,media,legend)
+ VALUES
+ (21,'James',' ',' '),
+ (20,'Lisa',' ',' '),
+ (600,'Thomas',' ',' '),
+ (165,'Marie',' ',' ');
+
+ ```
+
+## Task 6: Reload the movie database to HeatWave Cluster
+
+1. Load the movie tables into the HeatWave cluster memory:
+
+ ```bash
+ CALL sys.heatwave_load(JSON_ARRAY('movies'), NULL);
+ ```
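+
+2. Optionally, confirm which tables are loaded by querying the HeatWave Performance Schema views (a sketch; these views exist only on MySQL HeatWave systems and their columns can vary by version):
+
+    ```bash
+    SELECT rti.SCHEMA_NAME, rti.TABLE_NAME, rt.LOAD_STATUS
+    FROM performance_schema.rpd_tables rt
+    JOIN performance_schema.rpd_table_id rti ON rt.ID = rti.ID
+    WHERE rti.SCHEMA_NAME = 'movies';
+    ```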
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.oracle.com/en-us/iaas/mysql-database/index.html)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/mys-hwaml-machine-learning.html)
+
+
+## Acknowledgements
+
+- **Author** - Perside Foster, MySQL Principal Solution Engineering
+- **Contributors** - Mandy Pang, MySQL Principal Product Manager, Nick Mader, MySQL Global Channel Enablement & Strategy Manager
+- **Last Updated By/Date** - Perside Foster, MySQL Solution Engineering, August 2023
diff --git a/heatwave-movie-stream/create-movie-tables/images/compartment-create.png b/heatwave-movie-stream/create-movie-tables/images/compartment-create.png
new file mode 100644
index 000000000..dbf4b82e5
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/compartment-create.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/heatwave-load-shell.png b/heatwave-movie-stream/create-movie-tables/images/heatwave-load-shell.png
new file mode 100644
index 000000000..04d2ac581
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/home-menu-database-mysql.png b/heatwave-movie-stream/create-movie-tables/images/home-menu-database-mysql.png
new file mode 100644
index 000000000..56a4cbf99
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/home-menu-database-mysql.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/home-menu-networking-vcn.png b/heatwave-movie-stream/create-movie-tables/images/home-menu-networking-vcn.png
new file mode 100644
index 000000000..fd0f97a7d
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/home-menu-networking-vcn.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/homepage.png b/heatwave-movie-stream/create-movie-tables/images/homepage.png
new file mode 100644
index 000000000..ed605fe19
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/homepage.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-admin.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-admin.png
new file mode 100644
index 000000000..7c02e570e
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-admin.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-advanced.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-advanced.png
new file mode 100644
index 000000000..f095e37bc
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-advanced.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-backup.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-backup.png
new file mode 100644
index 000000000..369948421
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-backup.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-db-hardware.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-db-hardware.png
new file mode 100644
index 000000000..7709b02cf
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-db-hardware.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.pn.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.pn.png
new file mode 100644
index 000000000..eea116fdd
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.pn.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..6fee5da87
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-info-setup.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-info-setup.png
new file mode 100644
index 000000000..80e48395f
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-info-setup.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-network-ad.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-network-ad.png
new file mode 100644
index 000000000..deaabbc9a
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-network-ad.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create-option-develpment.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create-option-develpment.png
new file mode 100644
index 000000000..d361a4c66
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create-option-develpment.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-create.png b/heatwave-movie-stream/create-movie-tables/images/mysql-create.png
new file mode 100644
index 000000000..ea89e355a
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-create.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-detail-active.png b/heatwave-movie-stream/create-movie-tables/images/mysql-detail-active.png
new file mode 100644
index 000000000..dab9bc249
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-detail-endpoint.png b/heatwave-movie-stream/create-movie-tables/images/mysql-detail-endpoint.png
new file mode 100644
index 000000000..1db6bebdb
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-detail-endpoint.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/create-movie-tables/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/mysql-menu.png b/heatwave-movie-stream/create-movie-tables/images/mysql-menu.png
new file mode 100644
index 000000000..f024ebe47
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/mysql-menu.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/user-prediction-tables-list.png b/heatwave-movie-stream/create-movie-tables/images/user-prediction-tables-list.png
new file mode 100644
index 000000000..7b55e1483
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/user-prediction-tables-list.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-list.png b/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-list.png
new file mode 100644
index 000000000..1b98e1dd6
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-list.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-row-counts.png b/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-row-counts.png
new file mode 100644
index 000000000..7e8cd2bc9
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/user-supporting-tables-row-counts.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-details-subnet.png b/heatwave-movie-stream/create-movie-tables/images/vcn-details-subnet.png
new file mode 100644
index 000000000..00867d576
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-details-subnet.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-menu-compartmen-turbo.png b/heatwave-movie-stream/create-movie-tables/images/vcn-menu-compartmen-turbo.png
new file mode 100644
index 000000000..6ca34cd34
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-menu-compartmen-turbo.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-display.png b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-display.png
new file mode 100644
index 000000000..d5b3fef83
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-display.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-rules-mysql.png b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-rules-mysql.png
new file mode 100644
index 000000000..069ff7833
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress-rules-mysql.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress.png b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress.png
new file mode 100644
index 000000000..0eae8ff0e
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list-ingress.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list.png b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list.png
new file mode 100644
index 000000000..c07a28de4
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-private-security-list.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-add-ingress.png b/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-add-ingress.png
new file mode 100644
index 000000000..e8b331ed2
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-add-ingress.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-ingress-completed.png b/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-ingress-completed.png
new file mode 100644
index 000000000..ae4400bc4
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-ttp-ingress-completed.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-compartment.png b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-compartment.png
new file mode 100644
index 000000000..3c687d399
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-compartment.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-create.png b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-create.png
new file mode 100644
index 000000000..7ada18b54
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-create.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-menu.png b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-menu.png
new file mode 100644
index 000000000..0b6443585
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-menu.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-start.png b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-start.png
new file mode 100644
index 000000000..ca88b2565
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-start.png differ
diff --git a/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-view.png b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-view.png
new file mode 100644
index 000000000..1014cbc93
Binary files /dev/null and b/heatwave-movie-stream/create-movie-tables/images/vcn-wizard-view.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/develop-moviehub-apex-app.md b/heatwave-movie-stream/develop-moviehub-apex-app/develop-moviehub-apex-app.md
new file mode 100644
index 000000000..c00920c40
--- /dev/null
+++ b/heatwave-movie-stream/develop-moviehub-apex-app/develop-moviehub-apex-app.md
@@ -0,0 +1,171 @@
+# Develop the MovieHub - Movie Recommendation App
+
+![MovieHub - Powered by MySQL Heatwave](./images/moviehub-logo-large.png "moviehub-logo-large ")
+
+## Introduction
+
+The MovieHub App is a demo application created to showcase the potential of MySQL HeatWave powered applications.
+
+In this lab, you will create a high-performance app powered by the MySQL HeatWave Database Service: a movie-streaming-style web application built with Oracle APEX, a leading low-code development tool that lets you create complex web apps in minutes. You will also learn how to leverage the automation of machine learning processes with MySQL AutoML, which allows you to build, train, deploy, and explain machine learning models within MySQL HeatWave.
+
+_Estimated Time:_ 15 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Run the MovieHub demo application powered by MySQL HeatWave
+- Explore the user movie recommendation pages
+- Use the Administration Views page
+- Explore the Analytics Dashboard page
+- Explore the Holiday Movie page
+
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Some Experience with Oracle Autonomous and Oracle APEX
+- Completed Lab 10
+
+## Task 1: Run the MovieHub App
+
+1. Log in to your Oracle APEX workspace
+
+ ![APEX workspace menu with app](./images/apex-workspace-moviehub-menu.png "apex-workspace-moviehub-menu ")
+
+ You should see the imported application
+
+2. Run and log in to the imported app
+
+ a. Click on **Run**
+
+ A window will open in the web browser with the application home page
+
+ ![MovieHub Home page](./images/moviehub-app-home-page.png "moviehub-app-home-page ")
+
+    Notice that **nobody** is shown as the default user when you are not logged in with your APEX account
+
+    b. Click on **Go To User Login Page**
+
+    The application login page will appear
+
+    ![MovieHub Log In page](./images/apex-app-login-page.png =50%x* "apex-app-login-page ")
+
+    c. Enter the user credentials of the 'public' account. This simulates what happens when a non-administrator user logs in to the MovieHub App
+
+ ![MovieHub PUBLIC user log in](./images/public-user-login-page.png =50%x* "public-user-login-page ")
+
+## Task 2: Explore the user movie recommendation pages
+
+1. Explore the Profiles page
+
+    a. The **My Profiles** page will open, showing the profiles for the current account
+
+ ![MovieHub Profiles page](./images/moviehub-profiles-page.png "moviehub-profiles-page ")
+
+    b. You can log out at any time. This action will return you to the home page
+
+    c. When logged in as the public account, only the Profiles page will appear in the Side Tree Navigation Menu
+
+ ![MovieHub Side Tree Navigation Menu](./images/side-tree-navigation-menu.png =50%x* "side-tree-navigation-menu ")
+
+2. See the movie recommendations for the user James
+
+ a. Go to the profiles page
+
+ ![MovieHub Profiles page](./images/moviehub-profiles-page2.png "moviehub-profiles-page ")
+
+    b. Click the button below the James profile
+
+ ![James profile button](./images/moviehub-user1-button.png =30%x* "moviehub-user1-button ")
+
+    c. The movie recommendations page for James will appear
+
+ ![James recommendation page](./images/recommendations-user1-page.png "recommendations-user1-page ")
+
+    The page shows the top 5 recommended movies, according to the "**pred\_user\_21\_0r**" MySQL table. The **Restore** button also reloads this page
+
+3. Explore how the movie recommendations change when you add more movie records to the data with the "Watch movies" buttons. **This simulates the action of watching 15 and 30 movies from the movie catalog, compared with the original data**
+
+ a. Click on **Watch 15 movies**
+
+ ![James recommendation page plus 15](./images/recommendations-user1-plus15.png "recommendations-user1-plus15 ")
+
+    Notice how the movie recommendations change. This action shows the top 5 recommended movies, according to the "**pred\_user\_21\_15r**" MySQL table
+
+ b. Click on **Watch 30 movies**
+
+ ![James recommendation page plus 30](./images/recommendations-user1-plus30.png "recommendations-user1-plus30 ")
+
+    Notice how the movie recommendations change. This action shows the top 5 recommended movies, according to the "**pred\_user\_21\_30r**" MySQL table
+
+4. Explore the popular movies recommendations
+
+    The application lets you simulate what would happen if the user is inactive for more than 30 days. This triggers the global recommendations, which are the same as for a **new user**.
+
+ a. Click on the Date Picker Item
+
+ ![Date Picker item selector](./images/date-picker.png =40%x* "date-picker ")
+
+    b. Select a date **30 days after** today's date, or select the **next month**
+
+ ![Inactivity Popular Movies Recommendations](./images/recommendations-popular-movies.png "recommendations-popular-movies ")
+
+    Notice how the movie recommendations change. This action shows the top 5 recommended movies, according to the "**pred\_user\_30\_30r**" MySQL table
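+
+The pages above read precomputed rows out of prediction tables such as "**pred\_user\_21\_0r**". The following sketch shows the kind of top-5 lookup a page like this could run; the column names (`movie_title`, `prediction`) are assumptions (the lab only names the tables), and SQLite stands in for MySQL HeatWave here:
+
+```python
+# Hypothetical schema -- check the actual prediction table in your workspace.
+import sqlite3
+
+conn = sqlite3.connect(":memory:")
+conn.execute("CREATE TABLE pred_user_21_0r (movie_title TEXT, prediction REAL)")
+rows = [("Movie A", 4.9), ("Movie B", 4.7), ("Movie C", 4.6),
+        ("Movie D", 4.5), ("Movie E", 4.4), ("Movie F", 4.1)]
+conn.executemany("INSERT INTO pred_user_21_0r VALUES (?, ?)", rows)
+
+# Top-5 recommendations, highest predicted rating first -- the same shape
+# of result the recommendation cards display.
+top5 = conn.execute(
+    "SELECT movie_title, prediction FROM pred_user_21_0r "
+    "ORDER BY prediction DESC LIMIT 5"
+).fetchall()
+print(top5)
+```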
+
+## Task 3: Explore the Analytics Dashboard page
+
+1. **Log Out** from the 'public' account
+
+ ![Sign Out from public account](./images/sign-out-public.png =30%x* "sign-out-public ")
+
+2. **Log In** as an 'admin' account
+
+ ![Sign In as admin account](./images/sing-in-admin.png =30%x* "sing-in-admin ")
+
+3. When logged in as an administrative account, the Home Page will be the **Admin Views**
+
+ ![Administration Views Page](./images/administration-views.png "administration-views ")
+
+4. Click the **Analytics Dashboard** button to access the **Analytics Dashboard** page. You can also access this page from the Navigation Menu
+
+ ![Analytics Dashboard Page](./images/analytics-dashboard-page.png "analytics-dashboard-page ")
+
+ You can see:
+
+ a. **Movies - Genres Distribution** Pie chart
+
+ b. **User - Gender Distribution** Donut chart
+
+ c. **Users - Age Distribution** Bar chart
+
+ d. **Top 10 Trending Movies** Bar chart
+
+## Task 4: Explore the Holiday Movie page
+
+1. Click the **Holiday Movie** button in the Navigation Menu to access the **Holiday Movie** page
+
+    ![Holiday Movie navigation menu](./images/navigation-menu-holiday-movie.png =60%x* "navigation-menu-holiday-movie ")
+
+    The page shows the top 10 recommended users for Movie 200, 'The Shining', according to the "**pred\_item\_200**" MySQL table
+
+ ![Item 200 recommendation page](./images/recommendations-item-200-page.png "recommendations-item-200-page ")
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Autonomous Database Serverless Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/index.html#Oracle%C2%AE-Cloud)
+- [Oracle APEX Rendering Objects Documentation](https://docs.oracle.com/en/database/oracle/apex/23.1/aexjs/apex.html)
+- [Oracle JavaScript Extension Toolkit (JET) API Reference Documentation](https://www.oracle.com/webfolder/technetwork/jet/jsdocs/index.html)
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.oracle.com/en-us/iaas/mysql-database/index.html)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/mys-hwaml-machine-learning.html)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdelete.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdelete.png
new file mode 100644
index 000000000..63e42df36
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdelete.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdetail.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdetail.png
new file mode 100644
index 000000000..ca3769ec2
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbdetail.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbmoreactions.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbmoreactions.png
new file mode 100644
index 000000000..d2387ebe8
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/12dbmoreactions.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/12main.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/12main.png
new file mode 100644
index 000000000..b4a1fb01e
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/12main.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/admin-user-login-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/admin-user-login-page.png
new file mode 100644
index 000000000..9a6997adb
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/admin-user-login-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views-page.png
new file mode 100644
index 000000000..6b36a7133
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views.png
new file mode 100644
index 000000000..800c0fa6f
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/administration-views.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/analytics-dashboard-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/analytics-dashboard-page.png
new file mode 100644
index 000000000..643491d3f
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/analytics-dashboard-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-app-login-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-app-login-page.png
new file mode 100644
index 000000000..82163685a
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-app-login-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-workspace-moviehub-menu.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-workspace-moviehub-menu.png
new file mode 100644
index 000000000..422869395
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/apex-workspace-moviehub-menu.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page.png
new file mode 100644
index 000000000..e6b55aaf1
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page2.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page2.png
new file mode 100644
index 000000000..27de87706
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-create-user2-page2.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/app-myprofiles-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-myprofiles-page.png
new file mode 100644
index 000000000..30a0e989f
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/app-myprofiles-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/data0-table-description.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/data0-table-description.png
new file mode 100644
index 000000000..de6a99de1
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/data0-table-description.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/date-picker.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/date-picker.png
new file mode 100644
index 000000000..a0110ae33
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/date-picker.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/define-dynamic-actions.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-dynamic-actions.png
new file mode 100644
index 000000000..ada6f9aed
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-dynamic-actions.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-actions.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-actions.png
new file mode 100644
index 000000000..45253f4b5
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-actions.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-execution-actions.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-execution-actions.png
new file mode 100644
index 000000000..60a5fc6f7
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/define-true-execution-actions.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/duplicate-card-region.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/duplicate-card-region.png
new file mode 100644
index 000000000..8d6222818
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/duplicate-card-region.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/dynamic-action-tab.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/dynamic-action-tab.png
new file mode 100644
index 000000000..9d3a70d4b
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/dynamic-action-tab.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-apex-app.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-apex-app.png
new file mode 100644
index 000000000..670884dd9
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-apex-app.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-myprofiles-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-myprofiles-page.png
new file mode 100644
index 000000000..09f60bf2e
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/edit-myprofiles-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/heatwave-load-shell.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/heatwave-load-shell.png
new file mode 100644
index 000000000..04d2ac581
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-build-out.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-build-out.png
new file mode 100644
index 000000000..3b7930d89
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-build-out.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data-execute.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data-execute.png
new file mode 100644
index 000000000..2690219a0
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data-execute.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data.png
new file mode 100644
index 000000000..536e1dccb
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-data.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-out.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-out.png
new file mode 100644
index 000000000..680834593
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-out.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-table-out.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-table-out.png
new file mode 100644
index 000000000..612cfd79d
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-predict-table-out.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-score-model-out.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-score-model-out.png
new file mode 100644
index 000000000..592ac043e
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/iris-ml-score-model-out.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-profiles-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-profiles-page.png
new file mode 100644
index 000000000..2ddc58993
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-profiles-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-recommendations-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-recommendations-page.png
new file mode 100644
index 000000000..717a45fe4
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/lisa-user-recommendations-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-model1-predict-row-user600.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-model1-predict-row-user600.png
new file mode 100644
index 000000000..061b66111
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-model1-predict-row-user600.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-models3-predict-row-user600.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-models3-predict-row-user600.png
new file mode 100644
index 000000000..102605759
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/ml-models3-predict-row-user600.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/model-trained-model-catalog-1.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/model-trained-model-catalog-1.png
new file mode 100644
index 000000000..2e3441ed4
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/model-trained-model-catalog-1.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/models-trained-model-catalog-3.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/models-trained-model-catalog-3.png
new file mode 100644
index 000000000..127a95c05
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/models-trained-model-catalog-3.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-app-home-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-app-home-page.png
new file mode 100644
index 000000000..339e85730
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-app-home-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-logo-large.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-logo-large.png
new file mode 100644
index 000000000..baf3c223b
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-logo-large.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page.png
new file mode 100644
index 000000000..83286611b
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page2.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page2.png
new file mode 100644
index 000000000..d2562a159
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-profiles-page2.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-user1-button.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-user1-button.png
new file mode 100644
index 000000000..eee896f21
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/moviehub-user1-button.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/develop-moviehub-apex-app/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/navigation-menu-holiday-movie.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/navigation-menu-holiday-movie.png
new file mode 100644
index 000000000..b56f77251
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/navigation-menu-holiday-movie.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/new-region-tabs.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/new-region-tabs.png
new file mode 100644
index 000000000..897dc76bb
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/new-region-tabs.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/public-user-login-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/public-user-login-page.png
new file mode 100644
index 000000000..78676f192
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/public-user-login-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-item-200-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-item-200-page.png
new file mode 100644
index 000000000..3dc9242c6
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-item-200-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-popular-movies.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-popular-movies.png
new file mode 100644
index 000000000..306400aa2
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-popular-movies.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-page.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-page.png
new file mode 100644
index 000000000..445618137
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-page.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus15.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus15.png
new file mode 100644
index 000000000..96ec94a6d
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus15.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus30.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus30.png
new file mode 100644
index 000000000..4547c2e1f
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/recommendations-user1-plus30.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/region-options-attributes.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/region-options-attributes.png
new file mode 100644
index 000000000..366c53dd1
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/region-options-attributes.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/show-ml-data.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/show-ml-data.png
new file mode 100644
index 000000000..077b17826
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/show-ml-data.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/side-tree-navigation-menu.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/side-tree-navigation-menu.png
new file mode 100644
index 000000000..571c0ce4c
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/side-tree-navigation-menu.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/sign-out-public.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/sign-out-public.png
new file mode 100644
index 000000000..5c53e7fa8
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/sign-out-public.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/sing-in-admin.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/sing-in-admin.png
new file mode 100644
index 000000000..c24a02cff
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/sing-in-admin.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-create-parent-region.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-create-parent-region.png
new file mode 100644
index 000000000..94ccb35fb
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-create-parent-region.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-lisa-region.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-lisa-region.png
new file mode 100644
index 000000000..167834a79
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-lisa-region.png differ
diff --git a/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-regions-item-buttons.png b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-regions-item-buttons.png
new file mode 100644
index 000000000..f87274c6b
Binary files /dev/null and b/heatwave-movie-stream/develop-moviehub-apex-app/images/user2-regions-item-buttons.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/airport_web.png b/heatwave-movie-stream/improve-app-hw/images/airport_web.png
new file mode 100644
index 000000000..fbf68e06d
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/airport_web.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/airportdb-list.png b/heatwave-movie-stream/improve-app-hw/images/airportdb-list.png
new file mode 100644
index 000000000..3a5c136e1
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/airportdb-list.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/apex-workspace-moviehub-menu.png b/heatwave-movie-stream/improve-app-hw/images/apex-workspace-moviehub-menu.png
new file mode 100644
index 000000000..422869395
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/apex-workspace-moviehub-menu.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/architecture-oac-heatwave.png b/heatwave-movie-stream/improve-app-hw/images/architecture-oac-heatwave.png
new file mode 100644
index 000000000..6bd011f25
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/architecture-oac-heatwave.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open-large.png b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open-large.png
new file mode 100644
index 000000000..f257c72f1
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open-large.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open.png b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open.png
new file mode 100644
index 000000000..39bbdb27b
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-open.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloud-shell-setup.png b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-setup.png
new file mode 100644
index 000000000..02fb91563
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloud-shell-setup.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloud-shell.png b/heatwave-movie-stream/improve-app-hw/images/cloud-shell.png
new file mode 100644
index 000000000..07660235e
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloud-shell.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloudshell-buckets.png b/heatwave-movie-stream/improve-app-hw/images/cloudshell-buckets.png
new file mode 100644
index 000000000..59f8764cc
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloudshell-buckets.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/cloudshell-main.png b/heatwave-movie-stream/improve-app-hw/images/cloudshell-main.png
new file mode 100644
index 000000000..ffd53d616
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/cloudshell-main.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-active.png b/heatwave-movie-stream/improve-app-hw/images/compute-active.png
new file mode 100644
index 000000000..91b67ccb1
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-active.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-add-ssh-key.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-add-ssh-key.png
new file mode 100644
index 000000000..dcf34fd6f
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-add-ssh-key.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-boot-volume.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-boot-volume.png
new file mode 100644
index 000000000..1203f029e
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-boot-volume.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-change-shape.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-change-shape.png
new file mode 100644
index 000000000..f0370b00b
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-change-shape.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-image.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-image.png
new file mode 100644
index 000000000..94e323a25
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-image.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-networking-select.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-networking-select.png
new file mode 100644
index 000000000..673dbac2e
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-networking-select.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-networking.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-networking.png
new file mode 100644
index 000000000..96441f4f8
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-networking.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-security.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-security.png
new file mode 100644
index 000000000..1ad488ff6
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-security.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-create-select-shape.png b/heatwave-movie-stream/improve-app-hw/images/compute-create-select-shape.png
new file mode 100644
index 000000000..0aff3e9da
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-create-select-shape.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-menu-create-instance.png b/heatwave-movie-stream/improve-app-hw/images/compute-menu-create-instance.png
new file mode 100644
index 000000000..533124029
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-menu-create-instance.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/compute-provisioning.png b/heatwave-movie-stream/improve-app-hw/images/compute-provisioning.png
new file mode 100644
index 000000000..8f7988049
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/compute-provisioning.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/connect-first-signin.png b/heatwave-movie-stream/improve-app-hw/images/connect-first-signin.png
new file mode 100644
index 000000000..4af5b71ca
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/connect-first-signin.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/create-bucket.png b/heatwave-movie-stream/improve-app-hw/images/create-bucket.png
new file mode 100644
index 000000000..2bda62094
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/create-bucket.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests-detail.png b/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests-detail.png
new file mode 100644
index 000000000..1d0801cbd
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests-detail.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests.png b/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests.png
new file mode 100644
index 000000000..cde9012f2
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/create-pre-authenticated-requests.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/dbchart-copied.png b/heatwave-movie-stream/improve-app-hw/images/dbchart-copied.png
new file mode 100644
index 000000000..a6633eaf4
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/dbchart-copied.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/dbchart-open.png b/heatwave-movie-stream/improve-app-hw/images/dbchart-open.png
new file mode 100644
index 000000000..9546eeffb
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/dbchart-open.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/dbchart-select-all.png b/heatwave-movie-stream/improve-app-hw/images/dbchart-select-all.png
new file mode 100644
index 000000000..5c3bc4b5d
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/dbchart-select-all.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/heatwave-load-shell.png b/heatwave-movie-stream/improve-app-hw/images/heatwave-load-shell.png
new file mode 100644
index 000000000..33d2758bf
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-build-out.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-build-out.png
new file mode 100644
index 000000000..3b7930d89
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-build-out.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-data-execute.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-data-execute.png
new file mode 100644
index 000000000..2690219a0
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-data-execute.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-data.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-data.png
new file mode 100644
index 000000000..536e1dccb
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-data.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-out.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-out.png
new file mode 100644
index 000000000..680834593
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-out.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-table-out.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-table-out.png
new file mode 100644
index 000000000..612cfd79d
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-predict-table-out.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-ml-score-model-out.png b/heatwave-movie-stream/improve-app-hw/images/iris-ml-score-model-out.png
new file mode 100644
index 000000000..592ac043e
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-ml-score-model-out.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/iris-web-php.png b/heatwave-movie-stream/improve-app-hw/images/iris-web-php.png
new file mode 100644
index 000000000..afde7c7c9
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/iris-web-php.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/list-item-media.png b/heatwave-movie-stream/improve-app-hw/images/list-item-media.png
new file mode 100644
index 000000000..f62fb4e89
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/list-item-media.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/list-profiles.png b/heatwave-movie-stream/improve-app-hw/images/list-profiles.png
new file mode 100644
index 000000000..618cd8c60
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/list-profiles.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/moviehub-app-item-images.png b/heatwave-movie-stream/improve-app-hw/images/moviehub-app-item-images.png
new file mode 100644
index 000000000..4247e1f0b
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/moviehub-app-item-images.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/moviehub-app-profile-images.png b/heatwave-movie-stream/improve-app-hw/images/moviehub-app-profile-images.png
new file mode 100644
index 000000000..8268cf756
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/moviehub-app-profile-images.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mydbchart-out.png b/heatwave-movie-stream/improve-app-hw/images/mydbchart-out.png
new file mode 100644
index 000000000..86815d3db
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mydbchart-out.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-create-in-progress.png b/heatwave-movie-stream/improve-app-hw/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..f5fada590
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-detail-active.png b/heatwave-movie-stream/improve-app-hw/images/mysql-detail-active.png
new file mode 100644
index 000000000..8b6f6bcdc
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-detail-ip.png b/heatwave-movie-stream/improve-app-hw/images/mysql-detail-ip.png
new file mode 100644
index 000000000..e22a147ca
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-detail-ip.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-endpoint-private-ip.png b/heatwave-movie-stream/improve-app-hw/images/mysql-endpoint-private-ip.png
new file mode 100644
index 000000000..d7a48c263
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-endpoint-private-ip.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo copy.jpg b/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo copy.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo copy.jpg differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-install-shell.png b/heatwave-movie-stream/improve-app-hw/images/mysql-install-shell.png
new file mode 100644
index 000000000..e9da02bab
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-install-shell.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-load-data.png b/heatwave-movie-stream/improve-app-hw/images/mysql-load-data.png
new file mode 100644
index 000000000..49a6b2e41
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-load-data.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/mysql-shell-first-connect.png b/heatwave-movie-stream/improve-app-hw/images/mysql-shell-first-connect.png
new file mode 100644
index 000000000..385608d9d
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/mysql-shell-first-connect.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/navigation-compute-with-instance.png b/heatwave-movie-stream/improve-app-hw/images/navigation-compute-with-instance.png
new file mode 100644
index 000000000..1f584c8d2
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/navigation-compute-with-instance.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/navigation-compute.png b/heatwave-movie-stream/improve-app-hw/images/navigation-compute.png
new file mode 100644
index 000000000..2a68015f2
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/navigation-compute.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/navigation-mysql-with-instance.png b/heatwave-movie-stream/improve-app-hw/images/navigation-mysql-with-instance.png
new file mode 100644
index 000000000..654d9c7f7
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/navigation-mysql-with-instance.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key-compute-db.png b/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key-compute-db.png
new file mode 100644
index 000000000..7301c3770
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key-compute-db.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key.png b/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key.png
new file mode 100644
index 000000000..5631c3ee3
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/notepad-rsa-key.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/oci-console-buckets.png b/heatwave-movie-stream/improve-app-hw/images/oci-console-buckets.png
new file mode 100644
index 000000000..59f8764cc
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/oci-console-buckets.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/pre-authenticated-request-url.png b/heatwave-movie-stream/improve-app-hw/images/pre-authenticated-request-url.png
new file mode 100644
index 000000000..3478e8798
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/pre-authenticated-request-url.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/search-movie-title.png b/heatwave-movie-stream/improve-app-hw/images/search-movie-title.png
new file mode 100644
index 000000000..7a22f483b
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/search-movie-title.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/shh-key-list.png b/heatwave-movie-stream/improve-app-hw/images/shh-key-list.png
new file mode 100644
index 000000000..7cab02ea6
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/shh-key-list.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/show-ml-data.png b/heatwave-movie-stream/improve-app-hw/images/show-ml-data.png
new file mode 100644
index 000000000..077b17826
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/show-ml-data.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/ssh-key-display-minimize.png b/heatwave-movie-stream/improve-app-hw/images/ssh-key-display-minimize.png
new file mode 100644
index 000000000..0ce418b59
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/ssh-key-display-minimize.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/ssh-key-display.png b/heatwave-movie-stream/improve-app-hw/images/ssh-key-display.png
new file mode 100644
index 000000000..33a54b68b
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/ssh-key-display.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/ssh-key-show.png b/heatwave-movie-stream/improve-app-hw/images/ssh-key-show.png
new file mode 100644
index 000000000..722879314
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/ssh-key-show.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/update-legend-profiles.png b/heatwave-movie-stream/improve-app-hw/images/update-legend-profiles.png
new file mode 100644
index 000000000..27bccb168
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/update-legend-profiles.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/update-url-images-all-profiles.png b/heatwave-movie-stream/improve-app-hw/images/update-url-images-all-profiles.png
new file mode 100644
index 000000000..253cc9af6
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/update-url-images-all-profiles.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/update-url-images-items.png b/heatwave-movie-stream/improve-app-hw/images/update-url-images-items.png
new file mode 100644
index 000000000..be759c2a3
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/update-url-images-items.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/update-url-images-profiles.png b/heatwave-movie-stream/improve-app-hw/images/update-url-images-profiles.png
new file mode 100644
index 000000000..1a51a0b2a
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/update-url-images-profiles.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/images/upload-images-bucket.png b/heatwave-movie-stream/improve-app-hw/images/upload-images-bucket.png
new file mode 100644
index 000000000..eafcd11b6
Binary files /dev/null and b/heatwave-movie-stream/improve-app-hw/images/upload-images-bucket.png differ
diff --git a/heatwave-movie-stream/improve-app-hw/improve-app-hw.md b/heatwave-movie-stream/improve-app-hw/improve-app-hw.md
new file mode 100644
index 000000000..72cada351
--- /dev/null
+++ b/heatwave-movie-stream/improve-app-hw/improve-app-hw.md
@@ -0,0 +1,194 @@
+# (Bonus) Add your images to the MovieHub App for display
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+In this lab, you will upload your own images to OCI Object Storage and display them in your APEX **MovieHub App**.
+
+
+_Estimated Lab Time:_ 15 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Upload your images to a bucket in OCI Object Store **(Please note that due to the very nature of machine learning, recommendations evolve over time and therefore images provided may not always match the movies displayed.)**
+- Create pre-authenticated requests for your image files
+- Update the media columns in the profiles and item_media tables
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Must Complete Lab 7
+
+## Task 1: Download sample display images from a bucket in OCI Object Store
+
+1. Click this link to download the demo images to your local machine: [MovieHub Sample Images](https://objectstorage.us-phoenix-1.oraclecloud.com/p/Uim7lrT2O4eMuGunwu608ejFy-nlvNTtfEBNbElXaaAwTafZn2QveR6kgWJE5atV/n/idazzjlcjqzj/b/bucket-images/o/moviehub_imgs.zip)
+
+## Task 2: Upload Images to the OCI Object Store
+
+1. Open the OCI Console
+
+2. Click the **Navigation Menu** in the upper left, navigate to **Storage** and select **Buckets**.
+
+![OCI Console Buckets ](./images/oci-console-buckets.png "oci-console-buckets ")
+
+3. On the Buckets page, select the **movies** compartment. Click **Create Bucket** and create a bucket named **MovieHub-images**.
+
+![Create Bucket ](./images/create-bucket.png "create-bucket ")
+
+4. Scroll down on the **MovieHub-images** bucket details page.
+
+5. Click **Upload** to upload objects to the bucket. Upload the images you want to showcase in the app.
+
+![Upload Images Bucket ](./images/upload-images-bucket.png "upload-images-bucket ")
+
+## Task 3: Create Pre-Authenticated Requests for each image
+
+1. Click on the three dots in the far right of an object.
+
+2. Click **Create Pre-Authenticated Request**
+
+    a. Select **Permit object reads** as the Access Type.
+
+ b. Choose an Expiration date
+
+ c. Click **Create Pre-Authenticated Request**
+
+ ![Create Pre-Authenticated Request detail ](./images/create-pre-authenticated-requests-detail.png "create-pre-authenticated-requests detail ")
+
+    d. Copy the generated URL into a text file. Notice the warning message: **The URL will not be shown again.**
+
+ ![Pre-Authenticated Request URL ](./images/pre-authenticated-request-url.png "pre-authenticated-request-url ")
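+
+Before storing a copied URL, you can sanity-check that it has the expected shape. The helper below is a sketch in Python; it assumes the standard regional Object Storage PAR URL layout, and the token, namespace, and object names in the example are made up:
+
+```python
+# Offline sanity check for the shape of an OCI pre-authenticated request URL.
+# Assumes the standard regional host format; adjust for dedicated endpoints.
+from urllib.parse import urlparse
+
+def looks_like_par_url(url: str) -> bool:
+    """True if url matches https://objectstorage.<region>.oraclecloud.com/p/..."""
+    parts = urlparse(url)
+    return (
+        parts.scheme == "https"
+        and parts.hostname is not None
+        and parts.hostname.startswith("objectstorage.")
+        and parts.hostname.endswith(".oraclecloud.com")
+        and parts.path.startswith("/p/")
+    )
+
+# Example with a made-up token, namespace, and object name:
+print(looks_like_par_url(
+    "https://objectstorage.us-phoenix-1.oraclecloud.com"
+    "/p/TOKEN/n/ns/b/MovieHub-images/o/user1.png"
+))  # True
+```
+
+This only checks the URL's structure; whether the request is still valid depends on the expiration date you chose above.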
+
+
+## Task 4: Connect with MySQL Shell
+
+1. From Cloud Shell, SSH into your Compute Instance.
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. On the command line, connect to MySQL using the MySQL Shell client tool with the following command:
+
+ ```bash
+ mysqlsh -uadmin -p -h 10.... -P3306 --sql
+ ```
+
+ ![Connect](./images/heatwave-load-shell.png "heatwave-load-shell ")
+
+3. Make sure you are in the movies schema
+
+ a. Enter the following command at the prompt
+
+ ```bash
+ USE movies;
+ ```
+
+## Task 5: Update the media columns with the generated Pre-Authenticated Requests for your images
+
+1. Update the profiles images for users
+
+    a. List the current user attributes
+
+ ```bash
+ SELECT * FROM profiles;
+ ```
+ ![List Profiles](./images/list-profiles.png =60%x* "list-profiles ")
+
+ b. Enter the following command at the prompt.
+
+    Replace **Pre-Auth-URL** with the Pre-Authenticated Request URL of the image you want to use, and **USER** with the corresponding user.
+
+ ```bash
+ UPDATE profiles SET media='Pre-Auth-URL' WHERE user='USER';
+ ```
+
+ ![Update url images profiles ](./images/update-url-images-profiles.png "update-url-images-profiles ")
+
+    c. You can also add a legend to display in the app; choose any phrase you like.
+
+ ```bash
+ UPDATE profiles SET legend='I Love Horror Movies!' WHERE user='USER';
+ ```
+
+ ![Update legend profiles ](./images/update-legend-profiles.png "update-legend-profiles ")
+
+    d. After you add a URL for every profile, your profiles table will look like this:
+
+ ![Update url images profiles ](./images/update-url-images-all-profiles.png "update-url-images-all-profiles ")
+
+
+2. Update the movie images for the items
+
+ a. List the current item_media attributes
+
+ ```bash
+ SELECT * FROM item_media LIMIT 5;
+ ```
+
+ ![List Item Media](./images/list-item-media.png =80%x* "list-item-media ")
+
+    b. You can search for a specific movie id or title with the following query. Replace **Story** with your search term.
+
+ ```bash
+ SELECT * FROM item_media where mov_title like '%Story%';
+ ```
+
+ ![Search Movie Title ](./images/search-movie-title.png "search-movie-title ")
+
+    c. Enter the following command at the prompt to update the url column for each movie to which you want to add an image.
+
+    Replace **Pre-Auth-URL** with the Pre-Authenticated Request URL of the image you want to use, and **IMAGE\_ID** with the corresponding image_id.
+
+ ```bash
+ UPDATE item_media SET url_down='Pre-Auth-URL' WHERE image_id='IMAGE_ID';
+ ```
+
+ ![Update url images items ](./images/update-url-images-items.png "update-url-images-items ")
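+
+If you have many profiles or movies to update, you can generate the UPDATE statements instead of typing each one. The Python sketch below is an illustration only: the user names and URLs are placeholders, and the printed statements are meant to be pasted into MySQL Shell:
+
+```python
+# Sketch: generate profile UPDATE statements from a mapping.
+# User names and URLs below are placeholders, not real values.
+profile_images = {
+    "USER1": "https://objectstorage.example-region.oraclecloud.com/p/TOKEN1/o/user1.png",
+    "USER2": "https://objectstorage.example-region.oraclecloud.com/p/TOKEN2/o/user2.png",
+}
+
+def profile_update_sql(user: str, url: str) -> str:
+    # Double any single quotes so the generated SQL literal stays valid.
+    safe_url = url.replace("'", "''")
+    safe_user = user.replace("'", "''")
+    return f"UPDATE profiles SET media='{safe_url}' WHERE user='{safe_user}';"
+
+for user, url in profile_images.items():
+    print(profile_update_sql(user, url))
+```
+
+The same pattern works for the item\_media table by swapping in the url_down column and an image_id predicate.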
+
+## Task 6: See the changes in the MovieHub App
+
+1. Log in to your Oracle APEX workspace
+
+2. Run and log in to the imported app
+
+ a. Click on **Run**
+
+ ![APEX workspace menu with app](./images/apex-workspace-moviehub-menu.png "apex-workspace-moviehub-menu ")
+
+3. Explore the Profiles page with the added images
+
+    a. Go to the 'My Profiles' page
+
+    After you add image URLs and legends to the profiles table, your profiles page will look like this:
+
+ ![MovieHub App profile images](./images/moviehub-app-profile-images.png "moviehub-app-profile-images ")
+
+4. Explore the Users Recommendations pages with the added images
+
+    a. Go to one of the users' pages
+
+    After you add image URLs to the item\_media table, your recommendations pages will look like this:
+
+ ![MovieHub App item images](./images/moviehub-app-item-images.png "moviehub-app-item-images ")
+
+
+## Learn More
+
+- [Oracle Autonomous Database Serverless Documentation](https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/index.html#Oracle%C2%AE-Cloud)
+- [Oracle APEX Rendering Objects Documentation](https://docs.oracle.com/en/database/oracle/apex/23.1/aexjs/apex.html)
+- [Oracle JavaScript Extension Toolkit (JET) API Reference Documentation](https://www.oracle.com/webfolder/technetwork/jet/jsdocs/index.html)
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.oracle.com/en-us/iaas/mysql-database/index.html)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/mys-hwaml-machine-learning.html)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
\ No newline at end of file
diff --git a/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture-compute.png b/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture-compute.png
new file mode 100644
index 000000000..8d817bb66
Binary files /dev/null and b/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture-compute.png differ
diff --git a/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture.png b/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture.png
new file mode 100644
index 000000000..a3bdbe034
Binary files /dev/null and b/heatwave-movie-stream/introduction/images/heatwave-bastion-architecture.png differ
diff --git a/heatwave-movie-stream/introduction/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/introduction/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/introduction/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/introduction/introduction.md b/heatwave-movie-stream/introduction/introduction.md
new file mode 100644
index 000000000..30bf67967
--- /dev/null
+++ b/heatwave-movie-stream/introduction/introduction.md
@@ -0,0 +1,52 @@
+# Introduction
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## About this Workshop
+
+Welcome to this workshop in which you’ll follow step-by-step instructions to build the MovieHub application powered by MySQL HeatWave. MovieHub is a fictitious movie streaming application that delivers personalized recommendations using machine learning. It leverages the built-in HeatWave AutoML recommender system to predict, for example, movies that a user will like, or to which users a given movie should be promoted. You’ll build this app using the most popular low-code development platform, Oracle APEX, which will also enable you to create analytics dashboards in the application. You’ll develop a few scenarios both from the user's and the administrator's perspective.
+
+_Estimated Lab Time:_ 3.5 hours
+
+_Lab Setup:_
+
+![heatwave architecture](./images/heatwave-bastion-architecture-compute.png "heatwave bastion -architecture compute ")
+
+## About Product/Technology
+
+MySQL HeatWave is the only cloud service that combines transactions, real-time analytics across data warehouses and data lakes, and machine learning in one MySQL Database—without the complexity, latency, risks, and cost of ETL duplication. It delivers unmatched performance and price-performance. HeatWave AutoML enables in-database machine learning, allowing you to build, train, deploy, and explain machine learning models within MySQL HeatWave. You do not need to move the data to a separate ML cloud service, or be an ML expert. MySQL Autopilot provides machine learning-powered automation that improves the performance, scalability, and ease of use of HeatWave, saving developers and DBAs significant time. The service can be deployed in OCI, AWS, Azure, in a hybrid environment, and in customers’ data centers with OCI Dedicated Region.
+
+## Objectives
+
+In this workshop, you will use OCI, MySQL HeatWave, and Oracle APEX to build the MovieHub application and generate personalized recommendations.
+
+1. Create MySQL HeatWave Database System
+2. Set up a HeatWave Cluster for OLAP/AutoML
+3. Create Bastion Server for MySQL Data
+4. Download & Transform the MovieLens dataset files
+5. Add MovieLens data to MySQL HeatWave
+6. Create and test HeatWave AutoML Recommender System
+7. Create the base Movies Database Tables for the Movie App
+8. Query Information from the movies and predictions tables
+9. Create a Low Code Application with Oracle APEX and REST SERVICES for MySQL
+10. Set up the APEX Application and Workspace
+11. Explore the Movie Recommendation App with data inside MySQL HeatWave
+12. (Bonus) Add your images to the MovieHub App for display
+
+## Prerequisites
+
+- An Oracle Free Tier, Paid or LiveLabs Cloud Account
+- Some Experience with MySQL Shell - [MySQL Site](https://dev.MySQL.com/doc/MySQL-shell/8.0/en/).
+
+You may now **proceed to the next lab**
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
+
+- **Dataset** - F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets:
+History and Context. ACM Transactions on Interactive Intelligent
+Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages.
+DOI=http://dx.doi.org/10.1145/2827872
\ No newline at end of file
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/compartment-create.png b/heatwave-movie-stream/query-from-movies-predictions/images/compartment-create.png
new file mode 100644
index 000000000..dbf4b82e5
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/compartment-create.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/five-scifi-movies.png b/heatwave-movie-stream/query-from-movies-predictions/images/five-scifi-movies.png
new file mode 100644
index 000000000..e8a6edd34
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/five-scifi-movies.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/heatwave-load-shell.png b/heatwave-movie-stream/query-from-movies-predictions/images/heatwave-load-shell.png
new file mode 100644
index 000000000..04d2ac581
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/heatwave-load-shell.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-database-mysql.png b/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-database-mysql.png
new file mode 100644
index 000000000..56a4cbf99
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-database-mysql.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-networking-vcn.png b/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-networking-vcn.png
new file mode 100644
index 000000000..fd0f97a7d
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/home-menu-networking-vcn.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/homepage.png b/heatwave-movie-stream/query-from-movies-predictions/images/homepage.png
new file mode 100644
index 000000000..ed605fe19
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/homepage.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-admin.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-admin.png
new file mode 100644
index 000000000..7c02e570e
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-admin.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-advanced.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-advanced.png
new file mode 100644
index 000000000..f095e37bc
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-advanced.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-backup.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-backup.png
new file mode 100644
index 000000000..369948421
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-backup.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-db-hardware.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-db-hardware.png
new file mode 100644
index 000000000..7709b02cf
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-db-hardware.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.pn.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.pn.png
new file mode 100644
index 000000000..eea116fdd
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.pn.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.png
new file mode 100644
index 000000000..6fee5da87
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-in-progress.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-info-setup.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-info-setup.png
new file mode 100644
index 000000000..80e48395f
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-info-setup.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-network-ad.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-network-ad.png
new file mode 100644
index 000000000..deaabbc9a
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-network-ad.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-option-develpment.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-option-develpment.png
new file mode 100644
index 000000000..d361a4c66
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create-option-develpment.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create.png
new file mode 100644
index 000000000..ea89e355a
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-create.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-active.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-active.png
new file mode 100644
index 000000000..dab9bc249
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-active.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-endpoint.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-endpoint.png
new file mode 100644
index 000000000..1db6bebdb
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-detail-endpoint.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/mysql-menu.png b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-menu.png
new file mode 100644
index 000000000..f024ebe47
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/mysql-menu.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/query-from-movies-predictions.png b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-movies-predictions.png
new file mode 100644
index 000000000..e8a6edd34
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-movies-predictions.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user20-predictions.png b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user20-predictions.png
new file mode 100644
index 000000000..a9a197b20
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user20-predictions.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-item-predictions.png b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-item-predictions.png
new file mode 100644
index 000000000..db32c707e
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-item-predictions.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-predictions.png b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-predictions.png
new file mode 100644
index 000000000..552c55554
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/query-from-user21-predictions.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/user-prediction-tables-list.png b/heatwave-movie-stream/query-from-movies-predictions/images/user-prediction-tables-list.png
new file mode 100644
index 000000000..7b55e1483
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/user-prediction-tables-list.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-list.png b/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-list.png
new file mode 100644
index 000000000..1b98e1dd6
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-list.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-row-counts.png b/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-row-counts.png
new file mode 100644
index 000000000..7e8cd2bc9
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/user-supporting-tables-row-counts.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-details-subnet.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-details-subnet.png
new file mode 100644
index 000000000..00867d576
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-details-subnet.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-menu-compartmen-turbo.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-menu-compartmen-turbo.png
new file mode 100644
index 000000000..6ca34cd34
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-menu-compartmen-turbo.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-display.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-display.png
new file mode 100644
index 000000000..d5b3fef83
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-display.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-rules-mysql.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-rules-mysql.png
new file mode 100644
index 000000000..069ff7833
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress-rules-mysql.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress.png
new file mode 100644
index 000000000..0eae8ff0e
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list-ingress.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list.png
new file mode 100644
index 000000000..c07a28de4
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-private-security-list.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-add-ingress.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-add-ingress.png
new file mode 100644
index 000000000..e8b331ed2
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-add-ingress.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-ingress-completed.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-ingress-completed.png
new file mode 100644
index 000000000..ae4400bc4
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-ttp-ingress-completed.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-compartment.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-compartment.png
new file mode 100644
index 000000000..3c687d399
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-compartment.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-create.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-create.png
new file mode 100644
index 000000000..7ada18b54
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-create.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-menu.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-menu.png
new file mode 100644
index 000000000..0b6443585
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-menu.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-start.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-start.png
new file mode 100644
index 000000000..ca88b2565
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-start.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-view.png b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-view.png
new file mode 100644
index 000000000..1014cbc93
Binary files /dev/null and b/heatwave-movie-stream/query-from-movies-predictions/images/vcn-wizard-view.png differ
diff --git a/heatwave-movie-stream/query-from-movies-predictions/query-from-movies-predictions.md b/heatwave-movie-stream/query-from-movies-predictions/query-from-movies-predictions.md
new file mode 100644
index 000000000..617772453
--- /dev/null
+++ b/heatwave-movie-stream/query-from-movies-predictions/query-from-movies-predictions.md
@@ -0,0 +1,191 @@
+# Query Information from the movies and predictions tables
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+HeatWave ML makes it easy to use machine learning, whether you are a novice user or an experienced ML practitioner. You provide the data, and HeatWave AutoML analyzes the characteristics of the data and creates an optimized machine learning model that you can use to generate predictions and explanations. An ML model makes predictions by identifying patterns in your data and applying those patterns to unseen data. HeatWave ML explanations help you understand how predictions are made, such as which features of a dataset contribute most to a prediction.
+
+In this lab, you will query information from the movie dataset tables and the prediction tables with MySQL HeatWave.
+
+_Estimated Time:_ 10 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Query information from the movie dataset tables
+- Query information from the user and item prediction tables
+- Query information by JOINING different tables
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Completed Lab 7
+
+## Task 1: Connect with MySQL Shell
+
+1. Go to Cloud Shell to SSH into the new Compute Instance
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. On the command line, connect to MySQL using the MySQL Shell client tool with the following command:
+
+ ```bash
+ mysqlsh -uadmin -p -h 10.... -P3306 --sql
+ ```
+
+ ![Connect](./images/heatwave-load-shell.png "heatwave-load-shell ")
+
+3. Select the **movies** database
+
+    ```sql
+ USE movies;
+ ```
+
+## Task 2: Query information from the movie dataset tables
+
+1. Select 5 Sci-Fi movies from the item table
+
+    ```sql
+
+ SELECT item_id as 'Movie id', title as 'Title', release_year as 'Year',
+ CONCAT(
+ IF(genre_action = 1, CONCAT('action,'), CONCAT('')),
+ IF(genre_adventure = 1, CONCAT('adventure,'), CONCAT('')),
+ IF(genre_animation = 1, CONCAT('animation,'), CONCAT('')),
+ IF(genre_children = 1, CONCAT('children,'), CONCAT('')),
+ IF(genre_comedy = 1, CONCAT('comedy,'), CONCAT('')),
+ IF(genre_crime = 1, CONCAT('crime,'), CONCAT('')),
+ IF(genre_documentary = 1, CONCAT('documentary,'), CONCAT('')),
+ IF(genre_drama = 1, CONCAT('drama,'), ''),
+ IF(genre_fantasy = 1, CONCAT('fantasy,'), CONCAT('')),
+ IF(genre_filmnoir = 1, CONCAT('filmnoir,'), CONCAT('')),
+ IF(genre_horror = 1, CONCAT('horror,'), CONCAT('')),
+ IF(genre_musical = 1, CONCAT('musical,'), CONCAT('')),
+ IF(genre_mystery = 1, CONCAT('mystery,'), CONCAT('')),
+ IF(genre_romance = 1, CONCAT('romance,'), CONCAT('')),
+ IF(genre_scifi = 1, CONCAT('scifi,'), CONCAT('')),
+ IF(genre_thriller = 1, CONCAT('thriller,'), CONCAT('')),
+ IF(genre_unknown = 1, CONCAT('unknown,'), CONCAT('')),
+ IF(genre_war = 1, CONCAT('war,'), CONCAT('')),
+ IF(genre_western = 1, CONCAT('western,'), CONCAT(''))
+ ) AS 'Genres'
+ FROM movies.item WHERE genre_scifi=1 LIMIT 5;
+
+ ```
+ ![5 Sci-Fi Movies](./images/five-scifi-movies.png "five-scifi-movies ")
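The long CONCAT above does one simple thing: for each row it joins the names of all one-hot genre columns that are set to 1 into a comma-separated string. The same idea sketched in plain Python (toy rows with a reduced set of genre flags, not the real movies.item data):

```python
# Toy rows shaped like the one-hot genre flags in movies.item (reduced set)
items = [
    {"item_id": 1, "title": "Star Wars (1977)", "genre_action": 1, "genre_scifi": 1, "genre_drama": 0},
    {"item_id": 2, "title": "Alien (1979)", "genre_action": 0, "genre_scifi": 1, "genre_drama": 0},
]

def genre_string(row):
    # Keep the suffix of every genre_* flag equal to 1, comma-joined
    return ",".join(k[len("genre_"):] for k, v in row.items()
                    if k.startswith("genre_") and v == 1)

for row in items:
    if row["genre_scifi"] == 1:  # WHERE genre_scifi=1
        print(row["item_id"], row["title"], genre_string(row))
```

Unlike this Python sketch, the SQL expression leaves a trailing comma after the last genre name, which is why the later JOIN query cleans its output with TRIM(',' FROM ...).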
+
+## Task 3: Query information from the user and item prediction tables
+
+1. Select the top 10 movie predictions for user 20 from the different models
+
+    a. Select the top 10 items from pred\_user\_20\_0r. This table holds the predictions made from the original data, the **data0** table
+
+    ```sql
+
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_20_0r ORDER BY ml_results DESC LIMIT 10;
+
+ ```
+ ![Query from User 20 prediction tables](./images/query-from-user20-predictions.png "query-from-user20-predictions ")
+
+    b. Select the top 10 items from pred\_user\_20\_15r. This table holds the predictions made after adding 15 records to the original data, the **data1** table
+
+    ```sql
+
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_20_15r ORDER BY ml_results DESC LIMIT 10;
+
+ ```
+
+    c. Select the top 10 items from pred\_user\_20\_30r. This table holds the predictions made after adding 30 records to the original data, the **data2** table
+
+    ```sql
+
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_20_30r ORDER BY ml_results DESC LIMIT 10;
+
+ ```
+
+2. Now select the top 10 movie predictions for user 21 from the different models
+
+    a. Run the three queries, one per model:
+
+    ```sql
+
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_21_0r ORDER BY ml_results DESC LIMIT 10;
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_21_15r ORDER BY ml_results DESC LIMIT 10;
+ SELECT user_id, item_id, ml_results FROM movies.pred_user_21_30r ORDER BY ml_results DESC LIMIT 10;
+
+ ```
+
+    b. Press **ENTER** to execute the last command
+
+ ![Query from User 21 prediction tables](./images/query-from-user21-predictions.png "query-from-user21-predictions ")
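Since each prediction table comes from a model trained on slightly different data, the three top-10 lists usually differ. One way to quantify the drift between two of the lists, sketched with hypothetical item ids standing in for real query output:

```python
# Hypothetical top-10 item ids from pred_user_21_0r and pred_user_21_30r
top10_0r = [50, 172, 181, 174, 98, 56, 7, 127, 100, 1]
top10_30r = [50, 181, 174, 172, 96, 56, 7, 127, 313, 1]

overlap = set(top10_0r) & set(top10_30r)
jaccard = len(overlap) / len(set(top10_0r) | set(top10_30r))
print(f"{len(overlap)} shared items, Jaccard similarity {jaccard:.2f}")
```

A high overlap means the extra ratings only reordered the list; a low overlap means they changed which movies get recommended at all.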
+
+## Task 4: Query information by JOINING different tables
+
+You can use the prediction tables and the dataset tables in a JOIN to get richer results.
+This query joins information from the **item** table with one of the prediction tables.
+
+1. Select the top 3 movie predictions for user 20 that belong to both the 'Romance' and 'Drama' genres
+
+    ```sql
+ SELECT
+ q.item_id AS `Movie ID`,
+ q.Title,
+ q.`Release Year`,
+ TRIM(',' FROM q.Genres) AS Genres,
+ m.ml_results AS 'Recommendation Rating'
+ FROM movies.pred_user_20_15r m
+ JOIN (
+ SELECT
+ item_id,
+ title AS 'Title',
+ release_year AS 'Release Year',
+ CONCAT(
+ IF(genre_action = 1, CONCAT('action,'), CONCAT('')),
+ IF(genre_adventure = 1, CONCAT('adventure,'), CONCAT('')),
+ IF(genre_animation = 1, CONCAT('animation,'), CONCAT('')),
+ IF(genre_children = 1, CONCAT('children,'), CONCAT('')),
+ IF(genre_comedy = 1, CONCAT('comedy,'), CONCAT('')),
+ IF(genre_crime = 1, CONCAT('crime,'), CONCAT('')),
+ IF(genre_documentary = 1, CONCAT('documentary,'), CONCAT('')),
+ IF(genre_drama = 1, CONCAT('drama,'), ''),
+ IF(genre_fantasy = 1, CONCAT('fantasy,'), CONCAT('')),
+ IF(genre_filmnoir = 1, CONCAT('filmnoir,'), CONCAT('')),
+ IF(genre_horror = 1, CONCAT('horror,'), CONCAT('')),
+ IF(genre_musical = 1, CONCAT('musical,'), CONCAT('')),
+ IF(genre_mystery = 1, CONCAT('mystery,'), CONCAT('')),
+ IF(genre_romance = 1, CONCAT('romance,'), CONCAT('')),
+ IF(genre_scifi = 1, CONCAT('scifi,'), CONCAT('')),
+ IF(genre_thriller = 1, CONCAT('thriller,'), CONCAT('')),
+ IF(genre_unknown = 1, CONCAT('unknown,'), CONCAT('')),
+ IF(genre_war = 1, CONCAT('war,'), CONCAT('')),
+ IF(genre_western = 1, CONCAT('western,'), CONCAT(''))
+ ) AS 'Genres'
+ FROM movies.item WHERE genre_romance=1 and genre_drama=1
+ ) q ON m.item_id = q.item_id
+ ORDER BY m.ml_results DESC, q.Title DESC
+ LIMIT 3;
+
+ ```
+
+ ![Top 3 recommended Romance and Drama movies](./images/query-from-user21-item-predictions.png "query-from-user21-item-predictions ")
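When prototyping a query like this one, it can help to sketch the same filter-join-rank shape outside SQL first. A minimal Python stand-in (toy data; the real tables live in HeatWave):

```python
# Toy stand-ins for movies.item (genre flags) and pred_user_20_15r (scores)
item = {
    1: {"title": "Titanic (1997)", "genre_romance": 1, "genre_drama": 1},
    2: {"title": "Casablanca (1942)", "genre_romance": 1, "genre_drama": 1},
    3: {"title": "Twister (1996)", "genre_romance": 0, "genre_drama": 0},
}
pred = {1: 4.1, 2: 4.6, 3: 3.2}  # item_id -> ml_results

# WHERE genre_romance=1 AND genre_drama=1, joined to predictions on item_id
matches = [(pred[i], i, m["title"]) for i, m in item.items()
           if m["genre_romance"] == 1 and m["genre_drama"] == 1]

# ORDER BY ml_results DESC ... LIMIT 3
for score, item_id, title in sorted(matches, reverse=True)[:3]:
    print(item_id, title, score)
```

The SQL version pushes exactly this work into HeatWave, so only the final ranked rows travel back to the client.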
+
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.oracle.com/en-us/iaas/mysql-database/index.html)
+- [MySQL HeatWave ML Documentation](https://dev.mysql.com/doc/heatwave/en/mys-hwaml-machine-learning.html)
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/cloudshell-console-drawer.png b/heatwave-movie-stream/setup-hw-cluster/images/cloudshell-console-drawer.png
new file mode 100644
index 000000000..9037cb40e
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/cloudshell-console-drawer.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/compute-connect.png b/heatwave-movie-stream/setup-hw-cluster/images/compute-connect.png
new file mode 100644
index 000000000..c28b1c4aa
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/compute-connect.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-cluster.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-cluster.png
new file mode 100644
index 000000000..0e9010e09
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-cluster.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-node.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-node.png
new file mode 100644
index 000000000..fdbe38029
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-apply-node.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-change-shape.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-change-shape.png
new file mode 100644
index 000000000..9c7b0d009
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-change-shape.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-estimate-node.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-estimate-node.png
new file mode 100644
index 000000000..d10750829
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-cluster-estimate-node.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-creating-cluster.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-creating-cluster.png
new file mode 100644
index 000000000..d7340ce08
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-creating-cluster.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-estimate-node.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-estimate-node.png
new file mode 100644
index 000000000..ceddcb6f5
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-estimate-node.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-generate-estimate.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-generate-estimate.png
new file mode 100644
index 000000000..ae407644f
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-generate-estimate.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-architecture.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-architecture.png
new file mode 100644
index 000000000..ac173bfbc
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-architecture.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-autopilot-loadtable.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-autopilot-loadtable.png
new file mode 100644
index 000000000..0c833d25a
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-autopilot-loadtable.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-complete.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-complete.png
new file mode 100644
index 000000000..d9c384c09
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-complete.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-features.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-features.png
new file mode 100644
index 000000000..bcd57044a
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load-features.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load.png
new file mode 100644
index 000000000..cd673c0f0
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-load.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-more-actions-add-cluster.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-more-actions-add-cluster.png
new file mode 100644
index 000000000..d52231738
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-more-actions-add-cluster.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-performance-schema.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-performance-schema.png
new file mode 100644
index 000000000..fbebf1be2
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-performance-schema.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/mysql-shell-start.png b/heatwave-movie-stream/setup-hw-cluster/images/mysql-shell-start.png
new file mode 100644
index 000000000..61494b0d1
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/mysql-shell-start.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/navigation-compute-with-instance.png b/heatwave-movie-stream/setup-hw-cluster/images/navigation-compute-with-instance.png
new file mode 100644
index 000000000..1f584c8d2
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/navigation-compute-with-instance.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/images/navigation-mysql-with-instance.png b/heatwave-movie-stream/setup-hw-cluster/images/navigation-mysql-with-instance.png
new file mode 100644
index 000000000..66212b8cf
Binary files /dev/null and b/heatwave-movie-stream/setup-hw-cluster/images/navigation-mysql-with-instance.png differ
diff --git a/heatwave-movie-stream/setup-hw-cluster/setup-hw-cluster.md b/heatwave-movie-stream/setup-hw-cluster/setup-hw-cluster.md
new file mode 100644
index 000000000..80d45108d
--- /dev/null
+++ b/heatwave-movie-stream/setup-hw-cluster/setup-hw-cluster.md
@@ -0,0 +1,58 @@
+# Setup a HeatWave Cluster for OLAP/AutoML
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+A HeatWave cluster comprises a MySQL DB System node and one or more HeatWave nodes. The MySQL DB System node includes a plugin that is responsible for cluster management, loading data into the HeatWave cluster, query scheduling, and returning query results.
+
+![heatwave architect](./images/mysql-heatwave-architecture.png "heatwave architect ")
+
+_Estimated Time:_ 10 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following task:
+
+- Add a HeatWave Cluster to MySQL Database System
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with MySQL Shell
+- Completed Lab 2
+
+## Task 1: Add a HeatWave Cluster to the HW-MovieHub MySQL Database System
+
+1. Open the navigation menu
+ - Databases
+ - MySQL
+ - DB Systems
+2. Choose the **movies** Compartment. A list of DB Systems is displayed.
+ ![navigation mysql with instance](./images/navigation-mysql-with-instance.png "navigation mysql with instance")
+
+3. In the list of DB Systems, click the **HW-MovieHub** system, then click **More Actions -> Add HeatWave Cluster**.
+ ![mysql more actions add cluster](./images/mysql-more-actions-add-cluster.png " mysql more actions add cluster")
+
+4. On the **Add HeatWave Cluster** dialog, select the **HeatWave.512GB** shape.
+
+5. Click **Add HeatWave Cluster** to create the HeatWave cluster.
+ ![mysql apply cluster](./images/mysql-apply-cluster.png " mysql apply cluster")
+
+6. HeatWave cluster creation takes about 10 minutes. On the DB System details page, scroll down to the Resources section and click the **HeatWave** link. Your completed HeatWave Cluster Information section will look like this:
+ ![mysql creating cluster](./images/mysql-creating-cluster.png "mysql creating cluster ")
+
+You may now **proceed to the next lab**
+
+## Learn More
+
+- [Oracle Cloud Infrastructure MySQL Database Service Documentation](https://docs.cloud.oracle.com/en-us/iaas/MySQL-database)
+- [MySQL Database Documentation](https://www.MySQL.com)
+
+## Acknowledgements
+
+- **Author** - Perside Foster, MySQL Principal Solution Engineering
+- **Contributors** - Mandy Pang, MySQL Principal Product Manager, Nick Mader, MySQL Global Channel Enablement & Strategy Manager, Cristian Aguilar, MySQL Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, October 2023
\ No newline at end of file
diff --git a/heatwave-movie-stream/transform-data/images/inspect-item-file.png b/heatwave-movie-stream/transform-data/images/inspect-item-file.png
new file mode 100644
index 000000000..32d741f08
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/inspect-item-file.png differ
diff --git a/heatwave-movie-stream/transform-data/images/list-files-movies.png b/heatwave-movie-stream/transform-data/images/list-files-movies.png
new file mode 100644
index 000000000..9c6f8cd89
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/list-files-movies.png differ
diff --git a/heatwave-movie-stream/transform-data/images/mysql-heatwave-logo.jpg b/heatwave-movie-stream/transform-data/images/mysql-heatwave-logo.jpg
new file mode 100644
index 000000000..87b8bae92
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/mysql-heatwave-logo.jpg differ
diff --git a/heatwave-movie-stream/transform-data/images/output-script-csv.png b/heatwave-movie-stream/transform-data/images/output-script-csv.png
new file mode 100644
index 000000000..d5bf8c268
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/output-script-csv.png differ
diff --git a/heatwave-movie-stream/transform-data/images/output-script-sql.png b/heatwave-movie-stream/transform-data/images/output-script-sql.png
new file mode 100644
index 000000000..15bff0984
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/output-script-sql.png differ
diff --git a/heatwave-movie-stream/transform-data/images/result-item-sql-file.png b/heatwave-movie-stream/transform-data/images/result-item-sql-file.png
new file mode 100644
index 000000000..d5cd3f309
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/result-item-sql-file.png differ
diff --git a/heatwave-movie-stream/transform-data/images/unzip-movielens-files.png b/heatwave-movie-stream/transform-data/images/unzip-movielens-files.png
new file mode 100644
index 000000000..01030f7cc
Binary files /dev/null and b/heatwave-movie-stream/transform-data/images/unzip-movielens-files.png differ
diff --git a/heatwave-movie-stream/transform-data/transform-data.md b/heatwave-movie-stream/transform-data/transform-data.md
new file mode 100644
index 000000000..de6d31d70
--- /dev/null
+++ b/heatwave-movie-stream/transform-data/transform-data.md
@@ -0,0 +1,193 @@
+# Download & Transform the MovieLens dataset files
+
+![mysql heatwave](./images/mysql-heatwave-logo.jpg "mysql heatwave")
+
+## Introduction
+
+In this lab, you will download the dataset used to train the recommender system model in MySQL. You will use Python and Pandas to transform the original dataset into a MySQL-compatible format.
+
+The dataset is the MovieLens 100K dataset by GroupLens. Click the following link for an overview of the MovieLens 100K dataset files:
+
+- [README file for the MovieLens dataset](https://files.grouplens.org/datasets/movielens/ml-100k-README.txt)
+
+_Estimated Time:_ 10 minutes
+
+### Objectives
+
+In this lab, you will be guided through the following tasks:
+
+- Downloading the GroupLens MovieLens 100K dataset
+- Preparing the data and transforming the files to CSV using Python
+- Transforming the CSV files into MySQL SQL files
+
+### Prerequisites
+
+- An Oracle Trial or Paid Cloud Account
+- Some Experience with Linux and Python
+- Completed Lab 3
+
+## Task 1: Download the movie dataset
+
+1. Go to Cloud Shell to SSH into the new Compute Instance
+
+ (Example: **ssh -i ~/.ssh/id_rsa opc@132.145.170...**)
+
+ ```bash
+ ssh -i ~/.ssh/id_rsa opc@
+ ```
+
+2. Download the MovieLens 100k Dataset:
+
+ Go to [Grouplens](https://grouplens.org/datasets/movielens/100k/)
+
+    Get the download URL for the zip file 'ml-100k.zip'
+
+ Download the file into your home directory
+
+ ```bash
+
+ sudo wget https://files.grouplens.org/datasets/movielens/ml-100k.zip
+
+ ```
+
+ ```bash
+
+ ls
+
+ ```
+
+ Unzip the file
+
+ ```bash
+
+ unzip ml-100k.zip
+
+ ```
+
+ ```bash
+
+ ls
+
+ ```
+
+ ![unzip movielens files](./images/unzip-movielens-files.png "unzip-movielens-files")
+
+    Delete the unnecessary files
+
+ ```bash
+
+ cd ml-100k
+
+ ```
+
+ ```bash
+
+ ls
+
+ ```
+
+ ```bash
+
+ rm *.pl *.sh *.base *.test u.genre u.occupation
+
+ ```
+
+## Task 2: Download the scripts
+
+1. Download the Python scripts
+
+    In the same newly created folder, download the scripts
+
+ Enter the following command at the prompt
+
+ ```bash
+ sudo wget https://objectstorage.us-phoenix-1.oraclecloud.com/p/uaOgU_UDi0OIvgvS1R0-UPSD9PqK0UXHtojya5VZrrFtTbssGq_8dhNNmmkUnFyb/n/idazzjlcjqzj/b/bucket-images/o/scripts.zip
+ ```
+
+    Unzip the application code. Be sure to include the `-j` option to avoid creating a new folder.
+
+ ```bash
+ sudo unzip -j scripts.zip
+ ```
+
+2. List the files in the folder
+
+ ```bash
+ ls -l
+ ```
+
+ ![list files movies](./images/list-files-movies.png "list-files-movies ")
+
+## Task 3: Inspect the u.'name' files
+
+1. Open the u.item file
+
+ ```bash
+ nano u.item
+ ```
+
+2. Notice the name and structure of the file
+
+ ![inspect item file](./images/inspect-item-file.png "inspect-item-file ")
+
+3. Exit nano without saving any changes with **Ctrl + X**
+
+## Task 4: Run the scripts
+
+1. Run the script to transform the u.'name' files to CSV
+
+ Enter the following command at the prompt
+
+ ```bash
+ python3 movies_transform_csv_l.py
+ ```
+
+ ```bash
+ ls
+ ```
+
+ It should produce an output like this:
+
+ ![output script csv](./images/output-script-csv.png "output-script-csv")
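
Under the hood, a transform like the one `movies_transform_csv_l.py` performs can be sketched in a few lines of plain Python. Everything below is an illustrative assumption (the sample rows, the columns kept, and the `item.csv` output name), not the exact behavior of the downloaded script:

```python
import csv

# Two sample rows in the pipe-delimited layout of u.item (see the
# MovieLens 100K README); a real script would read the downloaded file.
raw_rows = [
    "1|Toy Story (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)",
    "2|GoldenEye (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?GoldenEye%20(1995)",
]

with open("item.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["item_id", "title", "release_date", "imdb_url"])
    for line in raw_rows:
        fields = line.split("|")
        # Keep id, title, release date, and URL; skip the always-empty
        # video_release_date column (index 3).
        writer.writerow([fields[0], fields[1], fields[2], fields[4]])
```

The real script handles all of the u.'name' files and their full column sets; this sketch only shows the pipe-to-comma reshaping idea.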
+
+2. Run the script to transform the CSV files to SQL
+
+ Enter the following command at the prompt
+
+ ```bash
+ python3 movies_transform_sql_l.py
+ ```
+
+ ```bash
+ ls
+ ```
+
+ It should produce an output like this:
+
+ ![output script sql](./images/output-script-sql.png "output-script-sql")
+
+3. Check the resulting SQL Files
+
+ a. Open the item.sql file
+
+ ```bash
+ nano item.sql
+ ```
+
+    You should see a file like this, which includes the data as SQL INSERT statements:
+
+ ![result item sql file](./images/result-item-sql-file.png "result-item-sql-file")
+
+ b. Exit nano without saving any changes with **Ctrl + X**
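
The CSV-to-SQL step follows the same pattern: read each CSV row and emit an `INSERT` statement. Below is a minimal sketch, assuming a hypothetical `item` table and sample rows; the real script's table and column names may differ:

```python
import csv
import io

# Sample CSV content in the shape produced by the CSV transform step;
# the table and column names below are illustrative assumptions.
csv_text = (
    "item_id,title,release_date\n"
    "1,Toy Story (1995),01-Jan-1995\n"
    "2,Heat (1995),01-Jan-1995\n"
)

statements = []
for row in csv.DictReader(io.StringIO(csv_text)):
    title = row["title"].replace("'", "''")  # escape single quotes for SQL
    statements.append(
        "INSERT INTO item (item_id, title, release_date) "
        f"VALUES ({row['item_id']}, '{title}', '{row['release_date']}');"
    )

print("\n".join(statements))
```

A real transform would also write the statements to a `.sql` file (such as the `item.sql` inspected above) instead of printing them.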
+
+You may now **proceed to the next lab**
+
+## Acknowledgements
+
+- **Author** - Cristian Aguilar, MySQL Solution Engineering
+- **Contributors** - Perside Foster, MySQL Principal Solution Engineering
+- **Last Updated By/Date** - Cristian Aguilar, MySQL Solution Engineering, November 2023
+
+- **Dataset** - F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets:
+History and Context. ACM Transactions on Interactive Intelligent
+Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages.
+DOI=http://dx.doi.org/10.1145/2827872
\ No newline at end of file
diff --git a/heatwave-movie-stream/workshops/freetier/index.html b/heatwave-movie-stream/workshops/freetier/index.html
new file mode 100644
index 000000000..177a93623
--- /dev/null
+++ b/heatwave-movie-stream/workshops/freetier/index.html
@@ -0,0 +1,61 @@
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/heatwave-movie-stream/workshops/freetier/manifest.json b/heatwave-movie-stream/workshops/freetier/manifest.json
new file mode 100644
index 000000000..d07b6c867
--- /dev/null
+++ b/heatwave-movie-stream/workshops/freetier/manifest.json
@@ -0,0 +1,84 @@
+{
+ "workshoptitle": "Build a Movie Recommendation App with Machine Learning in MySQL HeatWave",
+ "help": "livelabs-help-oci_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "The Introduction is always second. The title and contents menu title match for the Introduction.",
+ "filename": "../../introduction/introduction.md"
+ },
+
+ {
+ "title": "Get Started",
+ "description": "This is the prerequisites for customers using Free Trial and Paid tenancies. The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/pre-register-free-tier-account.md"
+ },
+
+ {
+ "title": "Lab 1: Create MySQL HeatWave Database System",
+ "filename": "../../create-db/create-db.md"
+ },
+
+ {
+ "title": "Lab 2: Setup a HeatWave Cluster for OLAP/AutoML",
+ "filename": "../../setup-hw-cluster/setup-hw-cluster.md"
+ },
+
+ {
+ "title": "Lab 3: Create Bastion Server for MySQL Data",
+ "filename": "../../create-bastion-with-python/create-bastion-with-python.md"
+ },
+
+ {
+ "title": "Lab 4: Download & Transform the MovieLens dataset files",
+ "filename": "../../transform-data/transform-data.md"
+ },
+
+ {
+ "title": "Lab 5: Add MovieLens data to MySQL HeatWave",
+ "filename": "../../add-data-mysql/add-data-mysql.md"
+ },
+
+ {
+ "title": "Lab 6: Create and test HeatWave AutoML Recommender System",
+ "filename": "../../create-automl/create-automl.md"
+ },
+
+ {
+ "title": "Lab 7: Create the base Movies Database Tables for the Movie App",
+ "filename": "../../create-movie-tables/create-movie-tables.md"
+ },
+
+ {
+ "title": "Lab 8: Query Information from the movies and predictions tables",
+ "filename": "../../query-from-movies-predictions/query-from-movies-predictions.md"
+ },
+
+ {
+ "title": "Lab 9: Create a Low Code Application with Oracle APEX and REST SERVICES for MySQL",
+ "filename": "../../apex-heatwave/apex-heatwave.md"
+ },
+
+ {
+ "title": "Lab 10: Setup the APEX Application and Workspace",
+ "filename": "../../app-configure-apex/app-configure-apex.md"
+ },
+
+ {
+ "title": "Lab 11: Explore the Movie Recommendation App with data inside MySQL HeatWave",
+ "filename": "../../develop-moviehub-apex-app/develop-moviehub-apex-app.md"
+ },
+
+ {
+ "title": "Lab 12: (Bonus) Add your images to the MovieHub App for display",
+ "filename": "../../improve-app-hw/improve-app-hw.md"
+ },
+
+ {
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename":"https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
+ }
+ ]
+ }
+
diff --git a/hitchhikers-guide-upgrade-to-19c-2-0/00-prepare-setup/00-prepare-setup.md b/hitchhikers-guide-upgrade-to-19c-2-0/00-prepare-setup/00-prepare-setup.md
index 680091b86..3cb2983b3 100644
--- a/hitchhikers-guide-upgrade-to-19c-2-0/00-prepare-setup/00-prepare-setup.md
+++ b/hitchhikers-guide-upgrade-to-19c-2-0/00-prepare-setup/00-prepare-setup.md
@@ -1,4 +1,4 @@
-# Prepare Setup Daniel Was Here
+# Prepare Setup
## Introduction
@@ -19,7 +19,7 @@ This lab assumes you have:
## Task 1: Download Oracle Resource Manager (ORM) stack zip file
-1. Click on the link below to download the Resource Manager zip file you need to build your environment: [upgr2db19c-mkplc-freetier.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/7x622_b5P2kJ5NnOo6fEg2u1Ez-UsH1KdO7u-974LcaydzFh6X2TjDv86lEafzGT/n/natdsecurity/b/stack/o/upgr2db19c-mkplc-freetier.zip)
+1. Click on the link below to download the Resource Manager zip file you need to build your environment: [upgr19c-23c.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/upgrade-and-patching/upgr19c-23c.zip)
2. Save in your downloads folder.
@@ -29,10 +29,10 @@ We strongly recommend using this stack to create a self-contained/dedicated VCN
This workshop requires a certain number of ports to be available, a requirement that can be met by using the default ORM stack execution that creates a dedicated VCN. In order to use an existing VCN the following ports should be added to Egress rules
-| Port |Description |
-| :------------- | :------------------------------------ |
-| 22 | SSH |
-| 6080 | Remote Desktop noVNC () |
+| Port | Description |
+| :--- | :---------------------- |
+| 22 | SSH |
+| 6080 | Remote Desktop noVNC |
1. Go to *Networking >> Virtual Cloud Networks*
diff --git a/hitchhikers-guide-upgrade-to-19c-2-0/06-spa/06-spa.md b/hitchhikers-guide-upgrade-to-19c-2-0/06-spa/06-spa.md
index 7029ce1c8..f258e7fdd 100644
--- a/hitchhikers-guide-upgrade-to-19c-2-0/06-spa/06-spa.md
+++ b/hitchhikers-guide-upgrade-to-19c-2-0/06-spa/06-spa.md
@@ -21,7 +21,7 @@ This lab assumes:
## Task 1: Check statements
-1. Use the yelloe terminal. Set the environment and connect to the upgraded UPGR database.
+1. Use the yellow terminal. Set the environment and connect to the upgraded UPGR database.
```
diff --git a/hitchhikers-guide-upgrade-to-19c-2-0/workshops/freetier/manifest.json b/hitchhikers-guide-upgrade-to-19c-2-0/workshops/freetier/manifest.json
index 66aa8632f..f27bf4d3e 100644
--- a/hitchhikers-guide-upgrade-to-19c-2-0/workshops/freetier/manifest.json
+++ b/hitchhikers-guide-upgrade-to-19c-2-0/workshops/freetier/manifest.json
@@ -11,7 +11,7 @@
{
"title": "Get Started",
"description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
- "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login-livelabs2.md"
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
},
{
"title": "Prepare Setup",
@@ -76,4 +76,4 @@
"filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md"
}
]
-}
+}
\ No newline at end of file
diff --git a/kubernetes-for-oracledbas/access-cluster/access-cluster.md b/kubernetes-for-oracledbas/access-cluster/access-cluster.md
index 210df83be..94440a173 100644
--- a/kubernetes-for-oracledbas/access-cluster/access-cluster.md
+++ b/kubernetes-for-oracledbas/access-cluster/access-cluster.md
@@ -43,7 +43,7 @@ This lab assumes you have:
### Notes about the Kubeconfig File
-The authentication token generated by the command in the kubeconfig file are short-lived, cluster-scoped, and specific to your account. As a result, you cannot share this kubeconfig file with other users to access the Kubernetes cluster.
+The authentication token generated by the command in the kubeconfig file is short-lived, cluster-scoped, and specific to your account. As a result, you cannot share this kubeconfig file with other users to access the Kubernetes cluster.
> the authentication token could expire resulting in an error
@@ -113,28 +113,28 @@ In an Oracle Database, schema's provide a mechanism for isolating database objec
```
-3. Create a new *Namespace* called `sqldev-web`:
+3. Create a new *Namespace* called `ords`:
```bash
- kubectl create namespace sqldev-web
+ kubectl create namespace ords
```
-4. Create a new context that points to the `sqldev-web` *namespace*:
+4. Create a new context that points to the `ords` *namespace*:
```bash
- kubectl config set-context sqldev-web \
- --namespace=sqldev-web \
+ kubectl config set-context ords \
+ --namespace=ords \
--cluster=$(kubectl config get-clusters | tail -1) \
--user=$(kubectl config get-users | tail -1)
```
- You'll use the `sqldev-web` *namespace* later in the Workshop to deploy your Microservice Application.
+ You'll use the `ords` *namespace* later in the Workshop to deploy your Microservice Application.
-5. You should now have two contexts, one named `demo` and one named `sqldev-web`:
+5. You should now have two contexts, one named `demo` and one named `ords`:
```bash
@@ -157,6 +157,18 @@ In an Oracle Database, schema's provide a mechanism for isolating database objec
For Production clusters, you may consider storing its context in an entirely different kubeconfig file to limit access and prevent mistakes. Using the `production` context would be a matter of setting the `KUBECONFIG` environment variable to its location.
+### Fun Fact
+
+You'll often see example commands online where `kubectl` is shortened to just `k`: `k get contexts; k create namespace ords`, etc.
+
+This common shorthand is implemented by creating a shell alias:
+
+```bash
+
+alias k="kubectl"
+
+```
+
You may now **proceed to the next lab**
## Learn More
diff --git a/kubernetes-for-oracledbas/access-cluster/images/contexts.png b/kubernetes-for-oracledbas/access-cluster/images/contexts.png
index 23ecb31ef..45fe01712 100644
Binary files a/kubernetes-for-oracledbas/access-cluster/images/contexts.png and b/kubernetes-for-oracledbas/access-cluster/images/contexts.png differ
diff --git a/kubernetes-for-oracledbas/bind-adb/bind-adb.md b/kubernetes-for-oracledbas/bind-adb/bind-adb.md
index 5cad4b5ba..e96172d00 100644
--- a/kubernetes-for-oracledbas/bind-adb/bind-adb.md
+++ b/kubernetes-for-oracledbas/bind-adb/bind-adb.md
@@ -101,7 +101,11 @@ If it were set to `true` then deleting the resource from Kubernetes *WOULD* dele
![kubectl get AutonomousDatabase adb-existing](images/kubectl_get_adb.png "kubectl get AutonomousDatabase adb-existing")
-2. Describe the `adb-existing` resource (`kubectl describe [-n ]`) to get more details. Use the resource_type alias `adb` for `AutonomousDatabase` to save some typing. You can view all the resource_type alias short names by running: `kubectl api-resources`
+2. Describe the `adb-existing` resource (`kubectl describe [-n ]`) to get more details.
+
+ Use the resource_type alias `adb` for `AutonomousDatabase` to save some typing.
+
+ You can view all the resource_type alias short names by running: `kubectl api-resources`
```bash
@@ -257,7 +261,7 @@ Now that you've defined two *Secrets* in Kubernetes, redefine the `adb-existing`
![kubectl describe secrets adb-tns-admin](images/adb_tns_admin.png "kubectl describe secrets adb-tns-admin")
- You'll see what equates to a `TNS_ADMIN` directory, and in fact, this *Secret* will be used by applications for just that purpose.
+ You'll see what equates to a `TNS_ADMIN` directory, and in fact, this *Secret* can be used by Microservice applications for just that purpose.
You may now **proceed to the next lab**
diff --git a/kubernetes-for-oracledbas/deploy-application/deploy-application.md b/kubernetes-for-oracledbas/deploy-application/deploy-application.md
index 85beb3764..7fc4bd985 100644
--- a/kubernetes-for-oracledbas/deploy-application/deploy-application.md
+++ b/kubernetes-for-oracledbas/deploy-application/deploy-application.md
@@ -25,15 +25,15 @@ This lab assumes you have:
## Task 1: Switch Context
-In the [Access the Kubernetes Cluster](?lab=access-cluster#task3changethedefaultnamespacecontext) Lab, you created a new `sqldev-web` namespace and a *Context* to set it as the working *Namespace*.
+In the [Access the Kubernetes Cluster](?lab=access-cluster#task3changethedefaultnamespacecontext) Lab, you created a new `ords` namespace and a *Context* to set it as the working *Namespace*.
-You will use the `sqldev-web` *namespace* for your Application while the ADB resource resides in the `default` *namespace*. This is to illustrate how different teams (Developers and DBAs) can manage their resources in their own "virtual clusters", reducing the impact they have on each other, and to allow additional security via Role Based Access Controls (*RBAC*).
+You will use the `ords` *namespace* for your Application while the ADB resource resides in the `default` *namespace*. This is to illustrate how different teams (Developers and DBAs) can manage their resources in their own "virtual clusters", reducing the impact they have on each other, and to allow additional security via Role Based Access Controls (*RBAC*).
-1. Switch to `sqldev-web` context:
+1. Switch to `ords` context:
```bash
- kubectl config use-context sqldev-web
+ kubectl config use-context ords
```
@@ -51,7 +51,7 @@ Your application will want to talk to the Oracle Database and to do so, just lik
### Names Resolution
-For the Database (Names) Resolution, copy the wallet *Secret* from the `default` *namespace* to the `sqlweb-dev` *namespace*.
+For the Database (Names) Resolution, copy the wallet *Secret* from the `default` *namespace* to the `ords` *namespace*.
1. This can be done with a `kubectl` one-liner:
@@ -59,21 +59,21 @@ For the Database (Names) Resolution, copy the wallet *Secret* from the `default`
kubectl get secret adb-tns-admin -n default -o json |
jq 'del(.metadata | .ownerReferences, .namespace, .resourceVersion, .uid)' |
- kubectl apply -n sqldev-web -f -
+ kubectl apply -n ords -f -
```
- The above command will export the `adb-tns-admin` *Secret* from the `default` *namespace* to JSON, exclude some metadata fields, and load the *Secret* back into the Kubernetes `sqldev-web` *namespace*.
+    The above command will export the `adb-tns-admin` *Secret* from the `default` *namespace* to JSON, exclude some metadata fields, and load the *Secret* back into the Kubernetes `ords` *namespace*... a sort of `CREATE TABLE ... AS SELECT` operation.
2. Query the new *Secret*:
```bash
- kubectl get secrets -n sqldev-web
+ kubectl get secrets -n ords
```
- ![ADB Copy Secret](images/adb_sqldev.png "ADB Copy Secret")
+ ![ADB Copy Secret](images/adb_get_secrets.png "ADB Copy Secret")
### Authentication
@@ -95,7 +95,7 @@ Start a *manifest file* for the Application Deployment.
```bash
- cat > sqldev-web.yaml << EOF
+ cat > ords.yaml << EOF
---
apiVersion: v1
kind: Secret
@@ -119,35 +119,52 @@ A *ConfigMap* is like a *Secret* but to store non-confidential data. *Pods* can
### ORDS Configuration
-The ORDS configuration does not store any sensitive data, so build a *manifest file* to create a *ConfigMap* of its configuration file. The *ConfigMap* will be mounted as a file into the Container and used by the ORDS process to start the application.
+The ORDS configuration does not store any sensitive data, so append two *ConfigMap*s of its configuration to the *manifest file*. The *ConfigMap*s will be mounted as files into the Container and used by the ORDS process to start the application and connect to the database.
1. Append the `ords-config` *ConfigMap* to the Application Deployment *manifest file*:
```bash
- cat >> sqldev-web.yaml << EOF
+ cat >> ords.yaml << EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
- name: ords-config
+ name: ords-default-config
labels:
- name: ords-config
+ name: ords-default-config
+ data:
+ settings.xml: |-
+
+
+
+ true
+ true
+ 10
+ 100
+ true
+ X-Forwarded-Proto: https
+ /
+ 8080
+ /i
+
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: ords-pool-config
+ labels:
+ name: ords-pool-config
data:
pool.xml: |-
- true
- true
- tns
- /opt/oracle/ords/network/admin
- ${SERVICE_NAME}
- ORDS_PUBLIC_USER_K8
- proxied
- true
- 50
- 10
+ tns
+ ${SERVICE_NAME}
+ /opt/oracle/network/admin
+ ORDS_PUBLIC_USER_K8S
+ proxied
EOF
@@ -163,73 +180,79 @@ An *initContainer* is just like an regular application container, except it will
![initContainer](images/initContainer.png "initContainer")
-The below *ConfigMap* will create two new users in the ADB: `ORDS_PUBLIC_USER_K8` and `ORDS_PLSQL_GATEWAY_K8`. It will also grant the required permissions for them to run the SQL Developer Web application.
+The below *ConfigMap* will create two new users in the ADB: `ORDS_PUBLIC_USER_K8S` and `ORDS_PLSQL_GATEWAY_K8S`. It will also grant the required permissions for them to run the ORDS Microservice application.
1. Append the *ConfigMap* to your application manifest:
```yaml
- cat >> sqldev-web.yaml << EOF
+ cat >> ords.yaml << EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
- name: liquibase-changelog
+ name: liquibase
data:
liquibase.sql: "liquibase update -chf changelog.sql"
changelog.sql: |-
-- liquibase formatted sql
- -- changeset gotsysdba:1 endDelimiter:/
+ -- changeset gotsysdba:create_users endDelimiter:/ runAlways:true
DECLARE
- L_USER VARCHAR2(255);
- BEGIN
- BEGIN
- SELECT USERNAME INTO L_USER FROM DBA_USERS WHERE USERNAME='ORDS_PUBLIC_USER_K8';
- execute immediate 'ALTER USER "ORDS_PUBLIC_USER_K8" IDENTIFIED BY "\${ORDS_PWD}"';
- EXCEPTION WHEN NO_DATA_FOUND THEN
- execute immediate 'CREATE USER "ORDS_PUBLIC_USER_K8" IDENTIFIED BY "\${ORDS_PWD}"';
- END;
- BEGIN
- SELECT USERNAME INTO L_USER FROM DBA_USERS WHERE USERNAME='ORDS_PLSQL_GATEWAY_K8';
- execute immediate 'ALTER USER "ORDS_PLSQL_GATEWAY_K8" IDENTIFIED BY "\${ORDS_PWD}"';
- EXCEPTION WHEN NO_DATA_FOUND THEN
- execute immediate 'CREATE USER "ORDS_PLSQL_GATEWAY_K8" IDENTIFIED BY "\${ORDS_PWD}"';
- END;
- END;
- /
- --rollback drop user "ORDS_PUBLIC_USER_K8" cascade;
- --rollback drop user "ORDS_PLSQL_GATEWAY_K8" cascade;
-
- -- changeset gotsysdba:2
- GRANT CONNECT TO ORDS_PUBLIC_USER_K8;
- ALTER USER ORDS_PUBLIC_USER_K8 PROFILE ORA_APP_PROFILE;
- GRANT CONNECT TO ORDS_PLSQL_GATEWAY_K8;
- ALTER USER ORDS_PLSQL_GATEWAY_K8 PROFILE ORA_APP_PROFILE;
- ALTER USER ORDS_PLSQL_GATEWAY_K8 GRANT CONNECT THROUGH ORDS_PUBLIC_USER_K8;
-
- -- changeset gotsysdba:3 endDelimiter:/
+ l_user VARCHAR2(255);
+ l_cdn VARCHAR2(255);
BEGIN
- ORDS_ADMIN.PROVISION_RUNTIME_ROLE (
- p_user => 'ORDS_PUBLIC_USER_K8',
- p_proxy_enabled_schemas => TRUE
- );
- END;
- /
-
- -- changeset gotsysdba:4 endDelimiter:/
- BEGIN
- ORDS_ADMIN.CONFIG_PLSQL_GATEWAY (
- p_runtime_user => 'ORDS_PUBLIC_USER_K8',
- p_plsql_gateway_user => 'ORDS_PLSQL_GATEWAY_K8'
- );
+ BEGIN
+ SELECT USERNAME INTO l_user FROM DBA_USERS WHERE USERNAME='ORDS_PUBLIC_USER_K8S';
+ EXECUTE IMMEDIATE 'ALTER USER "ORDS_PUBLIC_USER_K8S" PROFILE ORA_APP_PROFILE';
+ EXECUTE IMMEDIATE 'ALTER USER "ORDS_PUBLIC_USER_K8S" IDENTIFIED BY "\${ORDS_PWD}"';
+ EXCEPTION
+ WHEN NO_DATA_FOUND THEN
+ EXECUTE IMMEDIATE 'CREATE USER "ORDS_PUBLIC_USER_K8S" IDENTIFIED BY "\${ORDS_PWD}" PROFILE ORA_APP_PROFILE';
+ END;
+ EXECUTE IMMEDIATE 'GRANT CONNECT TO "ORDS_PUBLIC_USER_K8S"';
+ BEGIN
+ SELECT USERNAME INTO l_user FROM DBA_USERS WHERE USERNAME='ORDS_PLSQL_GATEWAY_K8S';
+ EXECUTE IMMEDIATE 'ALTER USER "ORDS_PLSQL_GATEWAY_K8S" PROFILE DEFAULT';
+ EXECUTE IMMEDIATE 'ALTER USER "ORDS_PLSQL_GATEWAY_K8S" NO AUTHENTICATION';
+ EXCEPTION
+ WHEN NO_DATA_FOUND THEN
+ EXECUTE IMMEDIATE 'CREATE USER "ORDS_PLSQL_GATEWAY_K8S" NO AUTHENTICATION PROFILE DEFAULT';
+ END;
+ EXECUTE IMMEDIATE 'GRANT CONNECT TO "ORDS_PLSQL_GATEWAY_K8S"';
+ EXECUTE IMMEDIATE 'ALTER USER "ORDS_PLSQL_GATEWAY_K8S" GRANT CONNECT THROUGH "ORDS_PUBLIC_USER_K8S"';
+ ORDS_ADMIN.PROVISION_RUNTIME_ROLE (
+ p_user => 'ORDS_PUBLIC_USER_K8S'
+ ,p_proxy_enabled_schemas => TRUE
+ );
+ ORDS_ADMIN.CONFIG_PLSQL_GATEWAY (
+ p_runtime_user => 'ORDS_PUBLIC_USER_K8S'
+ ,p_plsql_gateway_user => 'ORDS_PLSQL_GATEWAY_K8S'
+ );
+
+ BEGIN
+ SELECT images_version INTO L_CDN
+ FROM APEX_PATCHES
+ where is_bundle_patch = 'Yes'
+ order by patch_version desc
+ fetch first 1 rows only;
+ EXCEPTION WHEN NO_DATA_FOUND THEN
+ select version_no INTO L_CDN
+ from APEX_RELEASE;
+ END;
+ apex_instance_admin.set_parameter(
+ p_parameter => 'IMAGE_PREFIX',
+ p_value => 'https://static.oracle.com/cdn/apex/'||L_CDN||'/'
+ );
END;
/
+ --rollback drop user "ORDS_PUBLIC_USER_K8S" cascade;
+ --rollback drop user "ORDS_PLSQL_GATEWAY_K8S" cascade;
EOF
```
-By using a variable for the passwords, you are not exposing any sensitive information in your code. The value for the variable will be set using environment variables in the Applications *Deployment* specification, which will pull the values from the *Secret* you createdenv.
+By using a variable for the passwords, you are not exposing any sensitive information in your code. The value for the variable will be set using environment variables in the Applications *Deployment* specification, which will pull the values from the *Secret* you created.
## Task 5: Create the Application
@@ -237,91 +260,94 @@ Finally, define the Application *Deployment* manifest itself. It looks like a l
1. Create a *StatefulSet*
- For Lab purposes only, define this Application as a *StatefulSet* to ensure the names of the *Pods* are predictable. Call this application `sqldev-web` with a single *Pod* as defined by *replicas*. Create *Volumes* of the *ConfigMaps* and *Secrets* so the application can mount them into its containers. The purpose of the other keys will be explored later in the Lab.
+ For Lab purposes only, define this Application as a *StatefulSet* to ensure the names of the *Pods* are predictable. Call this application `ords` with a single *Pod* as defined by *replicas*. Create *Volumes* of the *ConfigMaps* and *Secrets* so the application can mount them into its containers. The purpose of the other keys will be explored later in the Lab.
```bash
- cat >> sqldev-web.yaml << EOF
+ cat >> ords.yaml << EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
- name: sqldev-web
+ name: ords
spec:
replicas: 1
selector:
matchLabels:
- app: sqldev-web
+ app.kubernetes.io/name: ords
template:
metadata:
labels:
- app: sqldev-web
+ app.kubernetes.io/name: ords
spec:
volumes:
- - name: ords-config
+ - name: ords-default-config
configMap:
- name: ords-config
- - name: ords-wallet
- emptyDir: {}
- - name: liquibase-changelog
+ name: ords-default-config
+ - name: ords-pool-config
configMap:
- name: liquibase-changelog
+ name: ords-pool-config
+ - name: liquibase
+ configMap:
+ name: liquibase
- name: tns-admin
secret:
- secretName: adb-tns-admin
+ secretName: "adb-tns-admin"
+ - name: ords-wallet
+ emptyDir: {}
EOF
```
2. Add the *initContainers*.
- This is the **Liquibase** container that will startup before the the `containers` section. It will *VolumeMount* the `adb-tns-admin` *Secret* to the `/opt/oracle/network/admin` directory and `liquibase-changelog` *ConfigMap* to the `/opt/oracle/network/admin` inside the Container. It will then pull the `SQLcl` image from Oracle's Container Registry and run the `liquibase.sql` against the database defined in the `db-secret` *Secret*.
+    This is the **Liquibase** container that will start up before the `containers` section. It will *VolumeMount* the `adb-tns-admin` *Secret* to the `/opt/oracle/network/admin` directory and the `liquibase` *ConfigMap* to the `/opt/oracle/sql_scripts` directory inside the Container. It will then pull the `SQLcl` image from Oracle's Container Registry and run `liquibase.sql` against the database defined in the `db-secrets` *Secret*.
```bash
- cat >> sqldev-web.yaml << EOF
+ cat >> ords.yaml << EOF
initContainers:
- - name: liquibase
- image: container-registry.oracle.com/database/sqlcl:[](var:sqlcl_version)
- imagePullPolicy: IfNotPresent
- args: ["-L", "-nohistory", "\$(LB_COMMAND_USERNAME)/\$(LB_COMMAND_PASSWORD)@\$(LB_COMMAND_URL)", "@liquibase.sql"]
- env:
- - name: ORDS_PWD
- valueFrom:
- secretKeyRef:
- name: db-secrets
- key: ords.password
- - name: LB_COMMAND_SERVICE
- valueFrom:
- secretKeyRef:
- name: db-secrets
- key: db.service_name
- - name: LB_COMMAND_URL
- value: jdbc:oracle:thin:@\$(LB_COMMAND_SERVICE)?TNS_ADMIN=/opt/oracle/network/admin
- - name: LB_COMMAND_USERNAME
- valueFrom:
- secretKeyRef:
- name: db-secrets
- key: db.username
- - name: LB_COMMAND_PASSWORD
- valueFrom:
- secretKeyRef:
- name: db-secrets
- key: db.password
- volumeMounts:
- - mountPath: /opt/oracle/network/admin
- name: tns-admin
- readOnly: true
- - mountPath: /opt/oracle/sql_scripts
- name: liquibase-changelog
- readOnly: true
+ - name: liquibase
+ image: container-registry.oracle.com/database/sqlcl:latest
+ imagePullPolicy: IfNotPresent
+ args: ["-L", "-nohistory", "\$(LB_COMMAND_USERNAME)/\$(LB_COMMAND_PASSWORD)@\$(LB_COMMAND_URL)", "@liquibase.sql"]
+ env:
+ - name: LB_COMMAND_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: "db-secrets"
+ key: db.username
+ - name: LB_COMMAND_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: "db-secrets"
+ key: db.password
+ - name: DB_SERVICE
+ valueFrom:
+ secretKeyRef:
+ name: "db-secrets"
+ key: db.service_name
+ - name: LB_COMMAND_URL
+ value: jdbc:oracle:thin:@\$(DB_SERVICE)?TNS_ADMIN=/opt/oracle/network/admin
+ - name: ORDS_PWD
+ valueFrom:
+ secretKeyRef:
+ name: "db-secrets"
+ key: db.password
+ volumeMounts:
+ - mountPath: /opt/oracle/network/admin
+ name: tns-admin
+ readOnly: true
+ - mountPath: /opt/oracle/sql_scripts
+ name: liquibase
+ readOnly: true
EOF
```
3. Add the `container`, the application you are deploying.
- In addition to mounting the `adb-tns-admin` *Secret* to the `/opt/oracle/network/admin` directory for Names Resolution, it will also mount the `ords-config` *ConfigMap* to the `/home/oracle/ords/config` directory.
+ In addition to mounting the `adb-tns-admin` *Secret* to the `/opt/oracle/network/admin` directory for Names Resolution, it will also mount the *ConfigMap*s `ords-default-config` to the `/opt/oracle/standalone/config/global` and `ords-pool-config` to the `/opt/oracle/standalone/config/databases/default/` directories.
The *Pod* will download the `ORDS` image from Oracle's Container Registry, generate a wallet for the database password and startup the ORDS server in standalone mode.
@@ -329,11 +355,11 @@ Finally, define the Application *Deployment* manifest itself. It looks like a l
```bash
- cat >> sqldev-web.yaml << EOF
+ cat >> ords.yaml << EOF
containers:
- - image: "container-registry.oracle.com/database/ords:23.1.3"
+ - image: "container-registry.oracle.com/database/ords:23.3.0"
imagePullPolicy: IfNotPresent
- name: sqldev-web
+ name: ords
command:
- /bin/bash
- -c
@@ -341,37 +367,48 @@ Finally, define the Application *Deployment* manifest itself. It looks like a l
ords --config \$ORDS_CONFIG config secret --password-stdin db.password <<< \$ORDS_PWD;
ords --config \$ORDS_CONFIG serve
env:
- - name: IGNORE_APEX
- value: "TRUE"
- name: ORDS_CONFIG
- value: /home/oracle/ords/config
+ value: /opt/oracle/standalone/config
- name: ORACLE_HOME
- value: /opt/oracle/ords
- - name: ORDS_PWD
+ value: /opt/oracle
+ - name: TNS_ADMIN
+ value: /opt/oracle/network/admin
+ - name: DB_SERVICE
valueFrom:
secretKeyRef:
- name: db-secrets
- key: ords.password
- - name: LB_COMMAND_SERVICE
+ name: "db-secrets"
+ key: db.service_name
+ - name: ORDS_PWD
valueFrom:
secretKeyRef:
- name: db-secrets
- key: db.service_name
+ name: "db-secrets"
+ key: db.password
volumeMounts:
- - name: ords-config
- mountPath: "/home/oracle/ords/config/databases/default/"
+ - name: ords-default-config
+ mountPath: "/opt/oracle/standalone/config/global/"
+ readOnly: false
+ - name: ords-pool-config
+ mountPath: "/opt/oracle/standalone/config/databases/default/"
readOnly: true
- name: ords-wallet
- mountPath: "/home/oracle/ords/config/databases/default/wallet"
+ mountPath: "/opt/oracle/standalone/config/databases/default/wallet"
readOnly: false
- name: tns-admin
- mountPath: "/opt/oracle/ords/network/admin"
- readOnly: true
- - name: liquibase-changelog
- mountPath: "/opt/oracle/sql_scripts"
+ mountPath: "/opt/oracle/network/admin"
readOnly: true
+ readinessProbe:
+ tcpSocket:
+ port: 8080
+ initialDelaySeconds: 15
+ periodSeconds: 10
+ livenessProbe:
+ tcpSocket:
+ port: 8080
+ initialDelaySeconds: 15
+ periodSeconds: 10
ports:
- - containerPort: 8080
+ - name: ords-port
+ containerPort: 8080
securityContext:
capabilities:
drop:
@@ -380,20 +417,19 @@ Finally, define the Application *Deployment* manifest itself. It looks like a l
runAsUser: 54321
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
-
EOF
```
## Task 6: Deploy the Application
-You now have a single *manifest file* that will deploy everything you need for your application.
+You now have a single *manifest file* that will deploy everything you need for your application. You could take that file and deploy it into any Kubernetes cluster (such as UAT or PRD). Simply change the contents of the *Secrets* and *ConfigMaps* to make it environment-specific.
1. Apply the *manifest file*:
```bash
- kubectl apply -f sqldev-web.yaml
+ kubectl apply -f ords.yaml
```
@@ -401,7 +437,7 @@ You now have a single *manifest file* that will deploy everything you need for y
```bash
- kubectl get pod/sqldev-web-0 -w
+ kubectl get pod/ords-0 -w
```
@@ -420,10 +456,10 @@ spec:
replicas: 1
selector:
matchLabels:
- app: sqldev-web
+ app.kubernetes.io/name: ords
```
-Only one *replica* was created, which translates to the single *Pod* `sqldev-web-0` in the *Namespace*. If you think of *replica's* as an instance in a RAC database, when you only have one it is easy to route traffic to it. However, if you have multiple instances and they can go up and down independently, ensuring High Availability, then you need something to keep track of those "Endpoints" for routing traffic. In a RAC, this is the SCAN Listener, in a K8s cluster, this is a *Service*.
+Only one *replica* was created, which translates to the single *Pod* `ords-0` in the *Namespace*. If you think of *replicas* as instances in a RAC database, when you only have one, it is easy to route traffic to it. However, if you have multiple instances and they can go up and down independently, ensuring High Availability, then you need something to keep track of those "Endpoints" for routing traffic. In a RAC, this is the SCAN Listener; in a K8s cluster, this is a *Service*.
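The Endpoint tracking just described is driven by label selection: the *Service* continuously selects Ready *Pods* whose labels match its selector. A minimal sketch with hypothetical pod records (only the label key/value `app.kubernetes.io/name: ords` comes from this workshop):

```python
def select_endpoints(pods, selector):
    """Return the IPs of Ready pods whose labels contain every
    key/value pair of the selector -- roughly what the Endpoints
    controller computes for a Service."""
    return [
        p["ip"]
        for p in pods
        if p["ready"] and all(p["labels"].get(k) == v for k, v in selector.items())
    ]

# Hypothetical pods: only the matching, Ready one becomes an endpoint.
pods = [
    {"ip": "10.0.0.5", "ready": True,  "labels": {"app.kubernetes.io/name": "ords"}},
    {"ip": "10.0.0.6", "ready": False, "labels": {"app.kubernetes.io/name": "ords"}},
    {"ip": "10.0.0.7", "ready": True,  "labels": {"app.kubernetes.io/name": "other"}},
]
```

Scaling the *StatefulSet* up or down simply changes which Pods pass this selection, and the *Service* endpoints follow automatically.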
1. Define the *Service* for your application, routing all traffic from port 80 to 8080 (the port the application is listening on).
@@ -431,20 +467,20 @@ Only one *replica* was created, which translates to the single *Pod* `sqldev-web
```bash
- cat > sqldev-web-service.yaml << EOF
+ cat > ords-service.yaml << EOF
---
apiVersion: v1
kind: Service
metadata:
- name: sqldev-web
+ name: ords-svc
spec:
selector:
- app: sqldev-web
+ app.kubernetes.io/name: ords
ports:
- - name: http
- port: 80
- targetPort: 8080
+ - name: ords-svc-port
protocol: TCP
+ port: 80
+ targetPort: ords-port
EOF
```
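Note that `targetPort: ords-port` in the *Service* above is a port *name*, not a number; for each Pod it resolves against the container's named ports. A small sketch of that lookup, using values mirrored from the manifests (the function itself is illustrative, not a Kubernetes API):

```python
def resolve_target_port(service_port, container_ports):
    """Resolve a Service targetPort: a number is used as-is, a name is
    looked up against the Pod's named containerPorts."""
    tp = service_port["targetPort"]
    if isinstance(tp, int):
        return tp
    return next(p["containerPort"] for p in container_ports if p["name"] == tp)

svc_port = {"name": "ords-svc-port", "port": 80, "targetPort": "ords-port"}
pod_ports = [{"name": "ords-port", "containerPort": 8080}]
```

Using the name instead of `8080` means the container port can change in the *StatefulSet* without touching the *Service*.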
@@ -453,7 +489,7 @@ Only one *replica* was created, which translates to the single *Pod* `sqldev-web
```bash
- kubectl apply -f sqldev-web-service.yaml
+ kubectl apply -f ords-service.yaml
```
@@ -467,32 +503,49 @@ Only one *replica* was created, which translates to the single *Pod* `sqldev-web
![Application Service](images/app_service.png "Application Service")
+4. Query your *Service* to ensure it has picked up your application as an endpoint:
+
+ ```bash
+
+ kubectl get pods -o wide
+ kubectl describe service
+
+ ```
+
+ ![Application Service](images/service_endpoint.png "Application Service")
+
+**Bonus**: Scale up your *StatefulSet*. What happens to the Endpoints of the *Service*?
+
+
## Task 8: Create the Ingress
-The *Service* exposed the application to the Kubernetes Cluster, for you to access it from a Web Browser, it needs to be exposed outside the cluster. During the provisioning of the Stack, the **Ansible** portion deployed a Microservice Application called `ingress-nginx`. That service interacted with Oracle Cloud Infrastructure, via the *cloud-controller-manager* and spun up a LoadBalancer. To expose the application to the LoadBalancer, create an `Ingress` resource that will interact with the `ingress-nginx` Microservice to allow your application to be accessed from outside the cluster:
+The *Service* exposed the application to the Kubernetes Cluster; for you to access it from a Web Browser, it needs to be exposed outside the cluster. During the provisioning of the Stack, the **Ansible** portion deployed a Microservice Application called `ingress-nginx`. That Microservice interacted with Oracle Cloud Infrastructure, via the *cloud-controller-manager*, and spun up a LoadBalancer. To expose the application to the LoadBalancer, create an `Ingress` resource that will interact with the `ingress-nginx` Microservice to allow your application to be accessed from outside the cluster:
1. Create the Ingress *manifest file*:
```bash
- cat > sqldev-web-ingress.yaml << EOF
+ cat > ords-ingress.yaml << EOF
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
- name: sqldev-web
+ name: ords-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/upstream-vhost: \$host
+ nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
ingressClassName: nginx
rules:
- - http:
- paths:
- - path: /
- pathType: Prefix
- backend:
- service:
- name: sqldev-web
- port:
- name: http
+ - http:
+ paths:
+ - path: /
+ pathType: ImplementationSpecific
+ backend:
+ service:
+ name: ords-svc
+ port:
+ name: ords-svc-port
EOF
```
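Behind the scenes, the controller matches each request path against the Ingress rules and forwards traffic to the named *Service* port. A simplified sketch of prefix matching (with `pathType: ImplementationSpecific` the real semantics are delegated to the nginx controller, so treat this as an approximation; the rule data mirrors the manifest above):

```python
def route(path, rules):
    """Pick the backend for a request path: among rules whose path is a
    prefix of the request, the longest prefix wins."""
    matches = [r for r in rules if path.startswith(r["path"])]
    best = max(matches, key=lambda r: len(r["path"]))
    return (best["service"], best["port"])

rules = [{"path": "/", "service": "ords-svc", "port": "ords-svc-port"}]
```

With a single `/` rule, every path lands on `ords-svc`, which is why the whole ORDS landing page tree is reachable through one Ingress.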
@@ -501,7 +554,7 @@ The *Service* exposed the application to the Kubernetes Cluster, for you to acce
```bash
- kubectl apply -f sqldev-web-ingress.yaml
+ kubectl apply -f ords-ingress.yaml
```
@@ -519,21 +572,30 @@ The *Service* exposed the application to the Kubernetes Cluster, for you to acce
## Task 9: Access the Microservice Application
-In the output from the Ingress, copy the IP and visit: `http:///ords/sql-developer`:
+In the output from the Ingress, copy the IP and visit: `http://`. You will see a warning page about a self-signed TLS Certificate, accept the risk to view the ORDS landing page:
![Application Login](images/app_login.png "Application Login")
-Log into your Application and Explore!
+From here you can log into SQL Developer Web to explore your database, or log into your APEX and start designing a Low-Code application!
+
+Use the username `ADMIN` and the password retrieved from the Kubernetes secret:
+
+```bash
+
+kubectl get secrets/adb-admin-password -n default --template="{{index .data \"adb-admin-password\" | base64decode}}"
+
+```
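The `base64decode` in the template above reflects how Kubernetes stores *Secret* values: the `.data` map holds base64-encoded strings. The same decoding in Python, with a made-up payload (not the real workshop password):

```python
import base64

def decode_secret_value(secret_data: dict, key: str) -> str:
    """Mirror the `index .data <key> | base64decode` template: look up a
    key in a Secret's .data map and base64-decode its value."""
    return base64.b64decode(secret_data[key]).decode("utf-8")

# Hypothetical Secret .data payload -- not the real workshop password.
data = {"adb-admin-password": base64.b64encode(b"NotTheRealPassword1").decode()}
```

Remember this is encoding, not encryption: anyone who can read the *Secret* can decode it, which is why RBAC on *Secrets* matters.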
+
## Task 10: Delete the Microservice Application
-While you could delete the individual resources manually, or by using the *manifest file*, another way to delete this Microservice Application is to delete the *namespace* it is deployed in.
+While you could delete the individual resources manually, or by using the *manifest file*, another way to delete this Microservice Application is to delete the *Namespace* it is deployed in. This is the equivalent of dropping a schema from the database.
-1. Delete the `sqldev-web` *Namespace*:
+1. Delete the `ords` *Namespace*:
```bash
- kubectl delete namespace sqldev-web
+ kubectl delete namespace ords
```
diff --git a/kubernetes-for-oracledbas/deploy-application/images/adb_get_secrets.png b/kubernetes-for-oracledbas/deploy-application/images/adb_get_secrets.png
new file mode 100644
index 000000000..859c7b89f
Binary files /dev/null and b/kubernetes-for-oracledbas/deploy-application/images/adb_get_secrets.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/adb_sqldev.png b/kubernetes-for-oracledbas/deploy-application/images/adb_sqldev.png
deleted file mode 100644
index ae2fcd6fe..000000000
Binary files a/kubernetes-for-oracledbas/deploy-application/images/adb_sqldev.png and /dev/null differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/app_ingress.png b/kubernetes-for-oracledbas/deploy-application/images/app_ingress.png
index 0272b3ecc..9eb7b0b57 100644
Binary files a/kubernetes-for-oracledbas/deploy-application/images/app_ingress.png and b/kubernetes-for-oracledbas/deploy-application/images/app_ingress.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/app_login.png b/kubernetes-for-oracledbas/deploy-application/images/app_login.png
index 7fad39f09..293050bb2 100644
Binary files a/kubernetes-for-oracledbas/deploy-application/images/app_login.png and b/kubernetes-for-oracledbas/deploy-application/images/app_login.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/app_service.png b/kubernetes-for-oracledbas/deploy-application/images/app_service.png
index 05ac54a26..38da25e79 100644
Binary files a/kubernetes-for-oracledbas/deploy-application/images/app_service.png and b/kubernetes-for-oracledbas/deploy-application/images/app_service.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/initContainer.png b/kubernetes-for-oracledbas/deploy-application/images/initContainer.png
index 9e00a0ae8..b4fb0ed33 100644
Binary files a/kubernetes-for-oracledbas/deploy-application/images/initContainer.png and b/kubernetes-for-oracledbas/deploy-application/images/initContainer.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/launch_app.png b/kubernetes-for-oracledbas/deploy-application/images/launch_app.png
index aa338e22b..c965c1963 100644
Binary files a/kubernetes-for-oracledbas/deploy-application/images/launch_app.png and b/kubernetes-for-oracledbas/deploy-application/images/launch_app.png differ
diff --git a/kubernetes-for-oracledbas/deploy-application/images/service_endpoint.png b/kubernetes-for-oracledbas/deploy-application/images/service_endpoint.png
new file mode 100644
index 000000000..5e2e6db25
Binary files /dev/null and b/kubernetes-for-oracledbas/deploy-application/images/service_endpoint.png differ
diff --git a/kubernetes-for-oracledbas/deploy-oraoperator/deploy-oraoperator.md b/kubernetes-for-oracledbas/deploy-oraoperator/deploy-oraoperator.md
index 0ae72c0fe..5dfaf6ae5 100644
--- a/kubernetes-for-oracledbas/deploy-oraoperator/deploy-oraoperator.md
+++ b/kubernetes-for-oracledbas/deploy-oraoperator/deploy-oraoperator.md
@@ -62,7 +62,7 @@ To see a *Controller* in action, you will delete pods resulting in a *Deployment
1. Your cluster comes with a built-in DNS server, **coredns**. The **coredns** pods are tied to a *Deployment* that stipulates there should be two **coredns** pods running (i.e. two *Replicas*) at all times.
- *Note*: The number of *Pods* may vary depending on the number of *Worker Nodes* in your cluster.
+ **Note**: The number of *Pods* may vary depending on the number of *Worker Nodes* in your cluster.
Take a look at the **coredns** deployment, it should show **2/2** Pods are in the desired **READY** state:
@@ -82,7 +82,7 @@ To see a *Controller* in action, you will delete pods resulting in a *Deployment
```
- Note their names, specifically the suffixed hash and their AGE.
+ Note their names, specifically the suffixed hash and their AGE.
3. Delete the *Pods* and re-query them:
@@ -146,7 +146,7 @@ To install the OraOperator, you will first need to install a dependency, **cert-
![kubectl get all -n oracle-database-operator-system](images/kubectl_oraoper.png "kubectl get all -n oracle-database-operator-system")
- The output shows a *Deployment* named `oracle-database-operator-controller-manager`. This is the **Operator's Custom Controller** manager which will watch your cluster to ensure any Oracle Database *CRDs* are in their desired state.
+ The output shows a *Deployment* named `oracle-database-operator-controller-manager`. This is the **Operator's Custom Controller** manager which will watch your Kubernetes cluster for any Oracle Database *CRDs* and ensure that they are always running in their desired state.
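The watch-and-reconcile loop a *Controller* manager runs can be sketched as: compare desired state with actual state and emit the actions that close the gap. The state dictionaries here are hypothetical (resource name mapped to a replica count); the two-replica **coredns** example from Task 1 shows the idea:

```python
def reconcile(desired: dict, actual: dict) -> list:
    """One pass of a controller loop: return the actions needed to move
    the actual state toward the desired state."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    return actions

# e.g. two coredns replicas desired, but one was just deleted
plan = reconcile({"coredns": 2}, {"coredns": 1})
```

The OraOperator does the same thing, except its "actual state" is queried from Oracle resources (such as an ADB) rather than from Pods.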
## Task 5: OraOperator CRDs
diff --git a/kubernetes-for-oracledbas/deploy-stack/deploy-stack.md b/kubernetes-for-oracledbas/deploy-stack/deploy-stack.md
index e504019ef..af59f7e12 100644
--- a/kubernetes-for-oracledbas/deploy-stack/deploy-stack.md
+++ b/kubernetes-for-oracledbas/deploy-stack/deploy-stack.md
@@ -73,7 +73,7 @@ For example, the IaC in this particular stack is used in two different OCI Marke
but it has been slimmed down, via variables, specifically for this workshop. This demonstrates how easy it is to modify infrastructure configurations, as needed, without requiring any changes to the underlying code.
-1. Tick the "Show Database Options?" to see what can be customised, but **please do not change any values**.
+1. Scroll down to "Database Options" and see what can be customised, but **please do not change any values**.
![Configuration Variables](./images/configuration_variables.png "Configuration Variables")
@@ -112,9 +112,9 @@ For the DBA this is invaluable as it means you can define the ADB once, use vari
As Terraform is declarative, that IaC can also be used to modify existing ADBs that were created by it, by comparing the configuration in the "State" file with the real-world resources.
-During the ORM interview phase, when you ticked the "Show Database Options?" the `Autonomous Database CPU Core Count` was set to `1`. That value was assigned to `var.adb_cpu_core_count` during provisioning.
+During the ORM interview phase, when you viewed the "Database Options", the `Autonomous Database ECPU Core Count` was set to `2`. That value was assigned to `var.adb_cpu_core_count` during provisioning.
-After the Stack has provisioned, you could "Edit" the Stack, change the database's CPU Core Count to `2`, Apply, and your ADB will be modified accordingly. Alternatively, if the ADB was modified outside of the IaC (someone has increased the CPU to `3`), it has "drifted" from the configuration stored in the "State". Running an **Apply** will reconcile that drift and modify the ADB back to desired state as defined in the IaC.
+After the Stack has provisioned, you could "Edit" the Stack, change the database's CPU Core Count to `3`, Apply, and your ADB will be modified accordingly. Alternatively, if the ADB was modified outside of the IaC (someone has increased the CPU to `4`), it has "drifted" from the configuration stored in the "State". Running an **Apply** will reconcile that drift and modify the ADB back to desired state as defined in the IaC.
### Other benefits of IaC
diff --git a/kubernetes-for-oracledbas/deploy-stack/images/configuration_variables.png b/kubernetes-for-oracledbas/deploy-stack/images/configuration_variables.png
index 013c0afb9..fc719195d 100644
Binary files a/kubernetes-for-oracledbas/deploy-stack/images/configuration_variables.png and b/kubernetes-for-oracledbas/deploy-stack/images/configuration_variables.png differ
diff --git a/kubernetes-for-oracledbas/explore-cluster/explore-cluster.md b/kubernetes-for-oracledbas/explore-cluster/explore-cluster.md
index ebe37365c..8e6b4fe16 100644
--- a/kubernetes-for-oracledbas/explore-cluster/explore-cluster.md
+++ b/kubernetes-for-oracledbas/explore-cluster/explore-cluster.md
@@ -32,7 +32,7 @@ This lab assumes you have:
```bash
- kubectl run your-pod --image=nginx --restart=Never
+ kubectl run your-pod --image=docker.io/nginx --restart=Never
```
@@ -151,7 +151,7 @@ When you interacted with the *kube-apiserver* to create `your-pod`, the *kube-sc
![kube-scheduler](images/kube-scheduler.png "kube-scheduler")
-The *kube-apiserver* then stored the information in *etcd* that "`your-pod` should run on nodeX," following that decision made by the *kube-scheduler*. The *kube-apiserver* then instructs the *kubelet* on `nodeX` to execute the actions against the `nodeX` *container runtime* to ensure `your-pod` is running, as it was defined, with the containers described in that *Pod*Spec.
+The *kube-apiserver* then stored the information in *etcd* that "`your-pod` should run on nodeX," following that decision made by the *kube-scheduler*. The *kube-apiserver* then instructs the *kubelet* on `nodeX` to execute the actions against the `nodeX` *container runtime* to ensure `your-pod` is running, as it was defined, with the containers described in that *Pod* "Spec".
1. Create a *manifest file* for `your-pod`:
@@ -167,13 +167,15 @@ The *kube-apiserver* then stored the information in *etcd* that "`your-pod` shou
spec:
containers:
- name: nginx
- image: nginx:latest
+ image: docker.io/nginx:latest
EOF
```
The *manifest file* states that you are using the "core" API `v1` to define a *Pod* named `your-pod`. The *Pod* will have one *container* called `nginx` running the `nginx:latest` image.
+ The `nginx:latest` image is being pulled directly from a Container Registry. In this case, the registry is [docker.io](https://www.docker.com/) but there are other public Registries available. Often your organisation will have its own Registry with their own custom images.
+
2. Create `your-pod` using the *manifest file*:
```bash
@@ -205,7 +207,7 @@ The *kube-apiserver* then stored the information in *etcd* that "`your-pod` shou
spec:
containers:
- name: nginx
- image: nginx:latest
+ image: docker.io/nginx:latest
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -276,7 +278,7 @@ In *Task 1* when you caused an unrecoverable failure of `your-pod` the applicati
spec:
containers:
- name: nginx
- image: nginx:latest
+ image: docker.io/nginx:latest
EOF
```
@@ -394,7 +396,7 @@ While running *Pods* is at the heart of Kubernetes, it is uncommon to run them d
spec:
containers:
- name: nginx
- image: nginx:1.14.2
+ image: docker.io/nginx:1.14.2
EOF
```
@@ -428,14 +430,16 @@ While running *Pods* is at the heart of Kubernetes, it is uncommon to run them d
```
```bash
-
+
kubectl apply -f your-pod-deployment.yaml && watch -n 1 -d kubectl get pod -l "tier=frontend"
```
If the `watch` was quick enough, you would have seen that the *Deployment* caused the upgrade to be rolled out. It ensured that the specified number of `replica` were always available, replacing *Pods* with the older `nginx` with *Pods* running the newer version in a graceful manner.
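The graceful rollout just observed can be sketched as replacing one Pod at a time so the replica count never dips. This is a deliberate simplification of the real `RollingUpdate` strategy, which also supports surge and max-unavailable budgets:

```python
def rolling_update(pods: list, new_image: str) -> list:
    """Replace each pod in turn with one running the new image; the
    list length (replica count) is unchanged at every step."""
    target = len(pods)
    for i in range(target):
        pods = pods[:i] + [new_image] + pods[i + 1:]  # swap exactly one pod
        assert len(pods) == target                    # availability held
    return pods
```

Contrast this with deleting the *Deployment* and recreating it, which would take every replica down at once.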
-5. Delete your *Deployment*
+5. Use `Ctrl-C` to exit the watch loop
+
+6. Delete your *Deployment*
```bash
diff --git a/kubernetes-for-oracledbas/lifecycle-adb/lifecycle-adb.md b/kubernetes-for-oracledbas/lifecycle-adb/lifecycle-adb.md
index b6568161f..19afe8938 100644
--- a/kubernetes-for-oracledbas/lifecycle-adb/lifecycle-adb.md
+++ b/kubernetes-for-oracledbas/lifecycle-adb/lifecycle-adb.md
@@ -386,7 +386,15 @@ This is especially useful for Autonomous Databases as when the database is STOPP
`kubectl logs job/`
-6. Start your ADB for future Labs:
+6. Delete the CronJob
+
+ ```bash
+
+ kubectl delete -f adb_cron.yaml
+
+ ```
+
+7. Start your ADB for future Labs:
```bash
diff --git a/kubernetes-for-oracledbas/prepare-oci/prepare-oci.md b/kubernetes-for-oracledbas/prepare-oci/prepare-oci.md
index 770d31c24..d5146f235 100644
--- a/kubernetes-for-oracledbas/prepare-oci/prepare-oci.md
+++ b/kubernetes-for-oracledbas/prepare-oci/prepare-oci.md
@@ -50,6 +50,10 @@ In the *Cloud Shell*, run the following commands to create a sub-*Compartment* t
You can think of a *Compartment* much like a database schema: a collection of tables, indexes, and other objects isolated from other schemas. By default, a root *Compartment* (think SYSTEM schema) was created for you when your tenancy was established. It is possible to create everything in the root *Compartment*, but Oracle recommends that you create sub-*Compartments* to help manage your resources more efficiently.
+### Fun Fact?
+
+Kubernetes is often shortened to `K8s` (kay eights) with the 8 standing for the number of letters between the “K” and the “s”.
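That abbreviation rule (a "numeronym") is easy to write down; the extra example below, `i18n` for "internationalization", is a common one from outside this workshop:

```python
def numeronym(word: str) -> str:
    """First letter + count of middle letters + last letter,
    the rule behind K8s (and i18n, l10n)."""
    return word[0] + str(len(word) - 2) + word[-1]
```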
+
## Task 3: Create a Group
A *Group* is a collection of cloud users who all need the same type of access to a particular set of resources or compartment.
@@ -169,7 +173,7 @@ Assign the cloud *User* who will be carrying out the remaining Labs to the *Grou
### Troubleshooting
-Your user account maybe in an IDCS Federated identity domain, in which case you will not be able to assign your user account to the IAM Group. The `oci iam group add-user`command will fail with a `ServiceError` message. If this is the case, please follow the [IAM User Error](?lab=troubleshooting#Task1:IAMUserError) guide.
+Your user account may be in an IDCS Federated identity domain, in which case you will not be able to assign your user account to the IAM Group. The `oci iam group add-user` (Step 3) command will fail with a `ServiceError` message. If this is the case, please follow the [IAM User Error](?lab=troubleshooting#Task1:IAMUserError) guide.
You may now **proceed to the next lab**
diff --git a/kubernetes-for-oracledbas/troubleshooting/troubleshooting.md b/kubernetes-for-oracledbas/troubleshooting/troubleshooting.md
index 522bd11fd..5941842df 100644
--- a/kubernetes-for-oracledbas/troubleshooting/troubleshooting.md
+++ b/kubernetes-for-oracledbas/troubleshooting/troubleshooting.md
@@ -31,7 +31,7 @@ When [preparing the OCI Tenancy](?lab=prepare-oci#Task4:AssignUsertoGroup "Prepa
6. Log out of OCI and re-login as the new user to continue the rest of the Workshop
-The user can be delete after the workshop has been cleaned up.
+The user can be deleted after the workshop has been cleaned up.
## Task 2: Out of Capacity
diff --git a/microtx-xa-stock-broker-app/deploy-stock-trading-app/deploy-stock-trading-app.md b/microtx-xa-stock-broker-app/deploy-stock-trading-app/deploy-stock-trading-app.md
index 15bcd4b19..b1c7a12fc 100644
--- a/microtx-xa-stock-broker-app/deploy-stock-trading-app/deploy-stock-trading-app.md
+++ b/microtx-xa-stock-broker-app/deploy-stock-trading-app/deploy-stock-trading-app.md
@@ -4,9 +4,7 @@
The Bank and Stock-Trading application contains several microservices that interact with each other to complete a transaction. The Stock Broker microservice initiates the transactions to purchase and sell shares, so it is called a transaction initiator service. The Core Banking, Branch Banking, and User Banking services participate in the transactions related to the trade in stocks, so they are called participant services.
-To deploy the application, you must build each microservice as a container image and provide the deployment details in a YAML file.
-
-Estimated Time: 20 minutes
+Estimated Time: 12 minutes
### Objectives
@@ -14,7 +12,7 @@ In this lab, you will:
* Configure Minikube and start a Minikube tunnel
* Configure Keycloak
-* Build container images for each microservice in the Bank and Stock-Trading application. After building the container images, the images are available in your Minikube container registry.
+* Build a container image for the Stock Broker microservice.
* Update the `values.yaml` file, which contains the deployment configuration details for the Bank and Stock-Trading application.
* Install the Bank and Stock-Trading application. While installing the application, Helm uses the configuration details you provide in the `values.yaml` file.
* (Optional) Deploy Kiali and Jaeger in your Minikube cluster
@@ -29,7 +27,6 @@ This lab assumes you have:
* Lab 1: Prepare setup
* Lab 2: Set Up the Environment
* Lab 3: Integrate MicroTx Client Libraries with the Stock Broker Microservice
- * Lab 4: Provision Autonomous Databases for Use as Resource Manager
* Logged in using remote desktop URL as an `oracle` user. If you have connected to your instance as an `opc` user through an SSH terminal using auto-generated SSH Keys, then you must switch to the `oracle` user before proceeding with the next step.
```
@@ -42,15 +39,7 @@ This lab assumes you have:
Before you start a transaction, you must start a Minikube tunnel.
-1. Ensure that the minimum required memory and CPUs are available for Minikube.
-
- ```
-
- minikube config set memory 32768
-
- ```
-
-2. Start Minikube.
+1. Start Minikube.
```
@@ -76,7 +65,7 @@ Before you start a transaction, you must start a Minikube tunnel.
```
- From the output note down the value of `EXTERNAL-IP`, which is the external IP address of the Istio ingress gateway. You will provide this value in the next step.
+ From the output note down the value of `EXTERNAL-IP`, which is the external IP address of the Istio ingress gateway. You will provide this value in the next step. If the `EXTERNAL-IP` is in the `pending` state, ensure that the Minikube tunnel is running before proceeding with the next steps.
**Example output**
@@ -94,7 +83,261 @@ Before you start a transaction, you must start a Minikube tunnel.
Note that, if you don't do this, then you must explicitly specify the IP address in the commands when required.
-## Task 2: Configure Keycloak
+## Task 2: Know Details About the Resource Managers
+
+When you start Minikube, an instance of the Oracle Database 23c Free Release is deployed on Minikube. See [Oracle Database Free](https://www.oracle.com/database/free/get-started). The following three PDBs are already available in the Database instance.
+
+ * The Core Banking service uses `COREBNKPDB` as resource manager.
+ * The Branch Banking service uses `AZBRPDB1` as resource manager.
+ * The Stock Broker service uses `STOCKBROKERPDB` as resource manager.
+
+The required tables are already created in each PDB and are populated with sample values. This section provides details about the sample data in each table.
+
+### About the Resource Manager for the Core Banking Service
+
+The Core Banking service uses `COREBNKPDB` as resource manager. This PDB contains three tables: Branch, Account, and History. The following code snippet provides details about the tables. The sample code is provided only for your reference. The tables are already available in the PDB and populated with sample values.
+
+ ```SQL
+ CREATE TABLE BRANCH
+ (
+ BRANCH_ID NUMBER NOT NULL,
+ BRANCH_NAME VARCHAR2(20),
+ PHONE VARCHAR2(14),
+ ADDRESS VARCHAR2(60),
+ SERVICE_URL VARCHAR2(255),
+ LAST_ACCT INTEGER,
+ PRIMARY KEY (BRANCH_ID)
+ );
+
+ CREATE TABLE ACCOUNT
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ BRANCH_ID NUMBER NOT NULL,
+ SSN CHAR(12) NOT NULL,
+ FIRST_NAME VARCHAR2(20),
+ LAST_NAME VARCHAR2(20),
+ MID_NAME VARCHAR2(10),
+ PHONE VARCHAR2(14),
+ ADDRESS VARCHAR2(60),
+ PRIMARY KEY (ACCOUNT_ID)
+ );
+
+ CREATE TABLE HISTORY
+ (
+ TRANSACTION_CREATED TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ ACCOUNT_ID NUMBER NOT NULL,
+ BRANCH_ID NUMBER NOT NULL,
+ TRANSACTION_TYPE VARCHAR2(15) NOT NULL,
+ DESCRIPTION VARCHAR2(1024),
+ AMOUNT DECIMAL(20, 2) NOT NULL,
+ BALANCE DECIMAL(20, 2) NOT NULL
+ );
+ ```
+The following sample code shows the sample data that is available in the BRANCH and ACCOUNT tables.
+
+ ```SQL
+ -- Sample values in the BRANCH table
+ INSERT INTO BRANCH (BRANCH_ID, BRANCH_NAME, PHONE, ADDRESS, SERVICE_URL, LAST_ACCT)
+ VALUES (1111, 'Arizona', '123-456-7891', '6001 N 24th St, Phoenix, Arizona 85016, United States', 'http://arizona-branch-bank:9095', 10002);
+
+    -- Sample values in the ACCOUNT table
+ INSERT INTO ACCOUNT (ACCOUNT_ID, BRANCH_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10001, 1111, '873-61-1457', 'Adams', 'Lopez', 'D', '506-100-5886', '15311 Grove Ct. Arizona 95101');
+ INSERT INTO ACCOUNT (ACCOUNT_ID, BRANCH_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10002, 1111, '883-71-8538', 'Smith', 'Mason', 'N', '403-200-5890', '15322 Grove Ct. Arizona 95101');
+ INSERT INTO ACCOUNT (ACCOUNT_ID, BRANCH_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10003, 1111, '883-71-8538', 'Thomas', 'Dave', 'C', '603-700-5899', '15333 Grove Ct. Arizona 95101');
+ ```
+
+### About the Resource Manager for the Branch Banking Service
+
+The Branch Banking service uses `AZBRPDB1` as resource manager. This PDB contains the `SAVINGS_ACCOUNT` table. The following code snippet provides details about the `SAVINGS_ACCOUNT` table. The sample code is provided only for your reference. The tables are already available in the PDB and populated with sample values.
+
+ ```SQL
+ CREATE TABLE SAVINGS_ACCOUNT
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ BRANCH_ID NUMBER NOT NULL,
+ BALANCE DECIMAL(20, 2) NOT NULL,
+ PRIMARY KEY (ACCOUNT_ID)
+ );
+ ```
+
+The following sample code shows the sample data that is available in the SAVINGS_ACCOUNT table.
+
+ ```SQL
+ -- Branch - Arizona
+ INSERT INTO SAVINGS_ACCOUNT (ACCOUNT_ID, BRANCH_ID, BALANCE)
+ VALUES (10001, 1111, 50000.0);
+ INSERT INTO SAVINGS_ACCOUNT (ACCOUNT_ID, BRANCH_ID, BALANCE)
+ VALUES (10002, 1111, 50000.0);
+ INSERT INTO SAVINGS_ACCOUNT (ACCOUNT_ID, BRANCH_ID, BALANCE)
+ VALUES (10003, 1111, 50000.0);
+ ```
+
+### About the Resource Manager for the Stock Broker Service
+
+The Stock Broker service uses `STOCKBROKERPDB` as resource manager. This PDB contains six tables: CASH_ACCOUNT, STOCKS, USER_ACCOUNT, STOCK_BROKER_STOCKS, USER_STOCKS, and HISTORY. The following code snippet provides details about the tables. The sample code is provided only for your reference. The tables are already available in the PDB and populated with sample values.
+
+ ```SQL
+ -- Tables to be created
+ -- Display stock units
+ CREATE TABLE CASH_ACCOUNT
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ BALANCE DECIMAL,
+ STOCK_BROKER VARCHAR2(20) NOT NULL,
+ PRIMARY KEY (ACCOUNT_ID)
+ );
+ -- Common account for Stock Broker. This is inserted during the initialization of the application.
+ CREATE TABLE STOCKS
+ (
+ STOCK_SYMBOL VARCHAR2(6) NOT NULL,
+ COMPANY_NAME VARCHAR2(35) NOT NULL,
+ INDUSTRY VARCHAR2(35) NOT NULL,
+ STOCK_PRICE DECIMAL NOT NULL,
+ PRIMARY KEY (STOCK_SYMBOL)
+ );
+ CREATE TABLE USER_ACCOUNT
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ SSN CHAR(12) NOT NULL,
+ FIRST_NAME VARCHAR2(20),
+ LAST_NAME VARCHAR2(20),
+ MID_NAME VARCHAR2(10),
+ PHONE VARCHAR2(14),
+ ADDRESS VARCHAR2(60),
+ PRIMARY KEY (ACCOUNT_ID)
+ );
+ CREATE TABLE STOCK_BROKER_STOCKS
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ STOCK_SYMBOL VARCHAR2(6) NOT NULL,
+ STOCK_UNITS NUMBER NOT NULL,
+ PRIMARY KEY (ACCOUNT_ID, STOCK_SYMBOL),
+ CONSTRAINT FK_StockBroker_CashAccount
+ FOREIGN KEY (ACCOUNT_ID) REFERENCES CASH_ACCOUNT (ACCOUNT_ID) ON DELETE CASCADE,
+ CONSTRAINT FK_StockBrokerStocks_Stocks
+ FOREIGN KEY (STOCK_SYMBOL) REFERENCES STOCKS (STOCK_SYMBOL) ON DELETE CASCADE
+ );
+ CREATE TABLE USER_STOCKS
+ (
+ ACCOUNT_ID NUMBER NOT NULL,
+ STOCK_SYMBOL VARCHAR2(6) NOT NULL,
+ STOCK_UNITS NUMBER NOT NULL,
+ PRIMARY KEY (ACCOUNT_ID, STOCK_SYMBOL),
+ CONSTRAINT FK_UserStocks_UserAccount
+ FOREIGN KEY (ACCOUNT_ID) REFERENCES USER_ACCOUNT (ACCOUNT_ID) ON DELETE CASCADE,
+ CONSTRAINT FK_UserStocks_Stocks
+ FOREIGN KEY (STOCK_SYMBOL) REFERENCES STOCKS (STOCK_SYMBOL) ON DELETE CASCADE
+ );
+ CREATE TABLE HISTORY
+ (
+ TRANSACTION_TIME TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ ACCOUNT_ID NUMBER NOT NULL,
+ STOCK_OPERATION VARCHAR2(15) NOT NULL,
+ STOCK_UNITS NUMBER NOT NULL,
+ STOCK_SYMBOL VARCHAR2(6) NOT NULL,
+ DESCRIPTION VARCHAR2(1024)
+ );
+ ```
+The following sample code shows the sample data that is available in the CASH_ACCOUNT, STOCKS, USER_ACCOUNT, STOCK_BROKER_STOCKS, and USER_STOCKS tables.
+
+ ```SQL
+ -- Sample value in the STOCKS table
+ INSERT INTO STOCKS(STOCK_SYMBOL, COMPANY_NAME, INDUSTRY, STOCK_PRICE)
+ VALUES ('BLUSC', 'Blue Semiconductor', 'Semiconductor Industry', 87.28);
+ INSERT INTO STOCKS(STOCK_SYMBOL, COMPANY_NAME, INDUSTRY, STOCK_PRICE)
+ VALUES ('SPRFD', 'Spruce Street Foods', 'Food Products', 152.55);
+ INSERT INTO STOCKS(STOCK_SYMBOL, COMPANY_NAME, INDUSTRY, STOCK_PRICE)
+ VALUES ('SVNCRP', 'Seven Corporation', 'Software consultants', 97.20);
+ INSERT INTO STOCKS(STOCK_SYMBOL, COMPANY_NAME, INDUSTRY, STOCK_PRICE)
+ VALUES ('TALLMF', 'Tall Manufacturers', 'Tall Manufacturing', 142.24);
+ INSERT INTO STOCKS(STOCK_SYMBOL, COMPANY_NAME, INDUSTRY, STOCK_PRICE)
+ VALUES ('VSNSYS', 'Vision Systems', 'Medical Equipments', 94.35);
+
+ -- Sample value in the CASH_ACCOUNT table
+ INSERT INTO CASH_ACCOUNT(ACCOUNT_ID, BALANCE, STOCK_BROKER)
+ VALUES (9999999, 10000000, 'PENNYPACK');
+
+ -- Sample value in the STOCK_BROKER_STOCKS table
+ INSERT INTO STOCK_BROKER_STOCKS (ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (9999999, 'BLUSC', 100000);
+ INSERT INTO STOCK_BROKER_STOCKS (ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (9999999, 'SPRFD', 50000);
+ INSERT INTO STOCK_BROKER_STOCKS (ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (9999999, 'SVNCRP', 90000);
+ INSERT INTO STOCK_BROKER_STOCKS (ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (9999999, 'TALLMF', 80000);
+ INSERT INTO STOCK_BROKER_STOCKS (ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (9999999, 'VSNSYS', 100000);
+
+ -- Sample value in the USER_ACCOUNT table
+ INSERT INTO USER_ACCOUNT (ACCOUNT_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10001, '873-61-1457', 'Adams', 'Lopez', 'D', '506-100-5886', '15311 Grove Ct. New York 95101');
+ INSERT INTO USER_ACCOUNT (ACCOUNT_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10002, '883-71-8538', 'Smith', 'Mason', 'N', '403-200-5890', '15311 Grove Ct. New York 95101');
+ INSERT INTO USER_ACCOUNT (ACCOUNT_ID, SSN, FIRST_NAME, LAST_NAME, MID_NAME, PHONE, ADDRESS)
+ VALUES (10003, '993-71-8500', 'Thomas', 'Dave', 'C', '603-700-5899', '15333 Grove Ct. Arizona 95101');
+
+ -- Sample value in the USER_STOCKS table
+ INSERT INTO USER_STOCKS(ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (10001, 'BLUSC', 10);
+ INSERT INTO USER_STOCKS(ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (10001, 'SPRFD', 15);
+ INSERT INTO USER_STOCKS(ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (10001, 'SVNCRP', 20);
+ INSERT INTO USER_STOCKS(ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (10001, 'TALLMF', 30);
+ INSERT INTO USER_STOCKS(ACCOUNT_ID, STOCK_SYMBOL, STOCK_UNITS)
+ VALUES (10001, 'VSNSYS', 40);
+ ```
+
+When you start Minikube, the PDBs are created and populated with sample data.
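The seeded rows can be spot-checked from inside the database pod once it is up. A minimal sketch; the pod name and connect string below are placeholders, so substitute the values used in your environment:

```shell
# Count the rows seeded into the STOCKS table (one per sample stock symbol).
count_stocks() {
  pod="$1"          # placeholder: the database pod name in the oracledb namespace
  connect="$2"      # placeholder: a user/password@service connect string
  kubectl exec -n oracledb "$pod" -- bash -c \
    "echo 'SELECT COUNT(*) FROM STOCKS;' | sqlplus -s $connect"
}
```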
+
+## Task 3: Verify that All the Resources are Ready
+
+1. Verify that the application has been deployed successfully.
+
+ ```text
+
+ helm list -n otmm
+
+ ```
+
+ In the output, verify that the `STATUS` of the `bankapp` is `deployed`.
+
+ **Example output**
+
+ ![Helm install success](./images/app-deployed.png)
+
+2. Verify that all resources, such as pods and services, are ready. Run the following command to retrieve the list of pods in the namespace `otmm` and their status.
+
+ ```text
+
+ kubectl get pods -n otmm
+
+ ```
+
+ **Example output**
+
+ ![Status of pods in the otmm namespace](./images/get-pods-status.png)
+
+3. Verify that the database instance is running. The database instance is available in the `oracledb` namespace. Run the following command to retrieve the list of pods in the `oracledb` namespace and their status.
+
+ ```text
+
+ kubectl get pods -n oracledb
+
+ ```
+
+ **Example output**
+
+ ![Database instance details](./images/database-service.png)
+
+It usually takes some time for the database services to start in the Minikube environment. Proceed with the remaining tasks only after ensuring that all the resources, including the database service, are ready: the status is `Running` and the value of the **READY** field is `1/1`.
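The readiness check above can be scripted instead of re-running `kubectl get pods` by hand. A minimal sketch, assuming `kubectl` is already configured for the Minikube cluster:

```shell
# Return success only when every pod in the given namespace is Running
# and all of its containers are ready (the READY column reads n/n).
all_pods_ready() {
  ns="$1"
  kubectl get pods -n "$ns" --no-headers | awk '
    { split($2, r, "/"); if ($3 != "Running" || r[1] != r[2]) notready++ }
    END { exit (notready ? 1 : 0) }'
}

# Example: poll every 10 seconds until the database pods are ready.
# until all_pods_ready oracledb; do sleep 10; done
```

The same function works for the `otmm` namespace.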
+
+## Task 4: Configure Keycloak
The Bank and Stock-Trading Application console uses Keycloak to authenticate users.
@@ -106,23 +349,32 @@ The Bank and Stock-Trading Application console uses Keycloak to authenticate use
```
- From the output note down the value of `EXTERNAL-IP` and `PORT(S)`, which is the external IP address and port of Keycloak. You will provide this value in the next step.
+ From the output note down the values of `EXTERNAL-IP` and `PORT(S)`, which are the external IP address and port of Keycloak. You will provide these values later.
**Example output**
![Public IP address of Keycloak](./images/keycloak-ip-address.png)
- Let's consider that the external IP in the above example is 198.51.100.1 and the IP address is 8080.
+ Let's consider that the external IP in the above example is 198.51.100.1 and the port is 8080.
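Rather than reading the address off the console output, the two values can be captured in shell variables. A sketch, assuming the Keycloak service and its namespace are both named `keycloak` (adjust both names to match your environment):

```shell
# Build Keycloak's base URL from the service's LoadBalancer IP and first port.
keycloak_url() {
  ip=$(kubectl get svc keycloak -n keycloak \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  port=$(kubectl get svc keycloak -n keycloak \
    -o jsonpath='{.spec.ports[0].port}')
  echo "http://$ip:$port"
}
```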
-2. Sign in to Keycloak. In a browser, enter the IP address and port number that you have copied in the previous step. The following example provides sample values. Provide the values based on your environment.
+2. Run the following commands to execute the `reconfigure-keycloak.sh` script from the `$HOME` directory. The script configures Keycloak and updates the settings to suit the requirements of the application.
+
+ ```
+
+ cd $HOME
+ sh reconfigure-keycloak.sh
+
+ ```
+
+3. Sign in to Keycloak. In a browser, enter the IP address and port number of Keycloak that you noted down earlier. The following example provides sample values. Provide the values based on your environment.
```
http://198.51.100.1:8080
```
-3. Click **Administration Console**.
+4. Click **Administration Console**.
-4. Sign in to Keycloak with the initial administrator username `admin` and password `admin`. After logging in, reset the password for the `admin` user. For information about resetting the password, see the Keycloak documentation.
+5. Sign in to Keycloak with the initial administrator username `admin` and password `admin`. After logging in, reset the password for the `admin` user. For information about resetting the password, see the Keycloak documentation.
6. Select the **MicroTx-BankApp** realm, and then click **Users** to view the list of users in the `MicroTx-BankApp` realm. The `MicroTx-BankApp` realm is preconfigured with these default user names.
![Dialog box to view the list of Users](./images/keycloak-users.png)
@@ -134,22 +386,18 @@ The Bank and Stock-Trading Application console uses Keycloak to authenticate use
Details of the `microtx-bankapp` client are displayed.
-9. In the **Settings** tab, under **Access settings**, enter the external IP address of Istio ingress gateway for the **Root URL**, **Valid redirect URIs**, **Valid post logout redirect URIs**, and **Admin URL** fields. Provide the IP address of Istio ingress gateway that you have copied earlier.
- ![Access Settings group in the Settings tab](./images/keycloak-client-ip.png)
-
-10. Click **Save**.
+9. In the **Settings** tab, under **Access settings**, verify that the external IP address of Istio ingress gateway is available in the **Root URL**, **Valid redirect URIs**, **Valid post logout redirect URIs**, and **Admin URL** fields.
-11. Click the **Credentials** tab, and then note down the value of the **Client-secret**. You'll need to provide this value later.
+10. Click the **Credentials** tab, and then note down the value of the **Client-secret**. You'll need to provide this value later.
![Access Settings group in the Settings tab](./images/keycloak-client-secret.png)
-12. Click **Realm settings**, and then in the **Frontend URL** field of the **General** tab, enter the external IP address and port of the Keycloak server which you have copied in a previous step. For example, `http://198.51.100.1:8080`.
- ![General Realm Settings](./images/keycloak-url.png)
+11. Click **Realm settings**, and then in the **Frontend URL** field of the **General** tab, verify that the value matches the external IP address and port of the Keycloak server that you copied in a previous step. For example, `http://198.51.100.1:8080`.
-13. In the **Endpoints** field, click the **OpenID Endpoint Configuration** link. Configuration details are displayed in a new tab.
+12. In the **Endpoints** field, click the **OpenID Endpoint Configuration** link. Configuration details are displayed in a new tab.
-14. Note down the value of the **issuer** URL. It is in the format, `http://:/realms/`. For example, `http://198.51.100.1:8080/realms/MicroTx-Bankapp`. You'll need to provide this value later.
+13. Note down the value of the **issuer** URL. It is in the format `http://<host>:<port>/realms/<realm-name>`. For example, `http://198.51.100.1:8080/realms/MicroTx-BankApp`. You'll need to provide this value later.
-15. Click **Save**.
+14. Click **Save**.
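The issuer value noted in the steps above can also be read programmatically from Keycloak's standard OpenID discovery endpoint, `/realms/<realm-name>/.well-known/openid-configuration`. A sketch, assuming the example address and realm used in this lab:

```shell
# Extract the "issuer" field from the OpenID discovery document.
issuer_from_discovery() {
  curl -s "$1" | sed -n 's/.*"issuer" *: *"\([^"]*\)".*/\1/p'
}

# Example:
# issuer_from_discovery http://198.51.100.1:8080/realms/MicroTx-BankApp/.well-known/openid-configuration
```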
## Task 3: Provide Access Details in the values.yaml File
@@ -157,7 +405,7 @@ The folder that contains the Bank and Stock-Trading application code also contai
To provide the configuration and environment details in the `values.yaml` file:
-1. Open the `values.yaml` file, which is in the `/home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/Helmcharts` folder.
+1. Open the `values.yaml` file, which is in the `/home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp/Helmcharts` folder.
2. Enter values that you have noted down for the following fields under `security` in `UserBanking`.
@@ -169,52 +417,17 @@ To provide the configuration and environment details in the `values.yaml` file:
4. Save the changes you have made to the `values.yaml` file.
-## Task 4: Build Container Images for Each Microservice
-
-The code for the Bank and Stock-Trading application is available in the installation bundle in the `/home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp` folder. The container image for the User Banking service is pre-built and available for your use. Build container images for all the other microservices in the Bank and Stock-Trading application.
-
-To build container images for each microservice in the sample:
-
-1. Run the following commands to build the container image for the Branch Banking service.
-
- ```
-
- cd /home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/BranchBanking
-
- ```
-
- ```
-
- minikube image build -t branch-banking:1.0 .
- ```
-
- When the image is successfully built, the following message is displayed.
-
- **Successfully tagged branch-banking:1.0**
+## Task 4: Build Container Image for the Stock Broker service
-2. Run the following commands to build the container image for the Core Banking service.
-
- ```
-
- cd /home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/CoreBanking
-
- ```
-
- ```
-
- minikube image build -t core-banking:1.0 .
-
- ```
-
- When the image is successfully built, the following message is displayed.
+The code for the Bank and Stock-Trading application is available in the installation bundle in the `/home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp` folder. The container images for the User Banking, Branch Banking, and Core Banking services are pre-built and available for your use. Build the container image only for the Stock Broker service.
- **Successfully tagged core-banking:1.0**
+To build the container image for the Stock Broker service:
-3. Run the following commands to build the Docker image for the Stock Broker service.
+1. Run the following commands to build the container image.
```
- cd /home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/StockBroker
+ cd /home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp/StockBroker
```
@@ -228,7 +441,7 @@ To build container images for each microservice in the sample:
**Successfully tagged stockbroker:1.0**
-The container images that you have created are available in your Minikube container registry.
+The container image that you have built is available in your Minikube container registry.
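As a quick check, the presence of the image can be confirmed from the registry listing. A sketch, assuming the image was tagged `stockbroker:1.0` as in the build step:

```shell
# Succeed only if the stockbroker image is present in Minikube's registry.
stockbroker_image_present() {
  minikube image ls | grep -q 'stockbroker:1.0'
}
```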
## Task 5: Install the Bank and Stock-Trading application
@@ -238,7 +451,7 @@ Install the Bank and Stock-Trading application in the `otmm` namespace, where yo
```
- cd /home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/Helmcharts
+ cd /home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp/Helmcharts
```
@@ -256,7 +469,7 @@ Install the Bank and Stock-Trading application in the `otmm` namespace, where yo
```
NAME: bankapp
- LAST DEPLOYED: TUe May 23 10:52:14 2023
+ LAST DEPLOYED: Tue May 23 10:52:14 2023
NAMESPACE: otmm
STATUS: deployed
REVISION: 1
@@ -338,10 +551,10 @@ You may now **proceed to the next lab**.
## Learn More
-* [Develop Applications with XA](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/develop-xa-applications.html#GUID-D9681E76-3F37-4AC0-8914-F27B030A93F5)
+* [Develop Applications with XA](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/develop-xa-applications.html#GUID-D9681E76-3F37-4AC0-8914-F27B030A93F5)
## Acknowledgements
* **Author** - Sylaja Kannan
* **Contributors** - Brijesh Kumar Deo and Bharath MC
-* **Last Updated By/Date** - Sylaja, June 2023
+* **Last Updated By/Date** - Sylaja, November 2023
diff --git a/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/app-deployed.png b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/app-deployed.png
new file mode 100644
index 000000000..8a4aa1be5
Binary files /dev/null and b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/app-deployed.png differ
diff --git a/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/database-service.png b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/database-service.png
new file mode 100644
index 000000000..9ebe89376
Binary files /dev/null and b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/database-service.png differ
diff --git a/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/get-pods-status.png b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/get-pods-status.png
new file mode 100644
index 000000000..632601867
Binary files /dev/null and b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/get-pods-status.png differ
diff --git a/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/ingress-gateway-ip-address.png b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/ingress-gateway-ip-address.png
index 5f92f8a56..1c0d69eb1 100644
Binary files a/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/ingress-gateway-ip-address.png and b/microtx-xa-stock-broker-app/deploy-stock-trading-app/images/ingress-gateway-ip-address.png differ
diff --git a/microtx-xa-stock-broker-app/integrate-microtx-lib-files/integrate-microtx-lib-files.md b/microtx-xa-stock-broker-app/integrate-microtx-lib-files/integrate-microtx-lib-files.md
index d4d8d50f5..46e591a3b 100644
--- a/microtx-xa-stock-broker-app/integrate-microtx-lib-files/integrate-microtx-lib-files.md
+++ b/microtx-xa-stock-broker-app/integrate-microtx-lib-files/integrate-microtx-lib-files.md
@@ -12,8 +12,8 @@ Estimated Time: 5 minutes
In this lab, you will:
-* Configure the Stock Broker service as a Transaction initiator. A transaction initiator service starts and ends a transaction.
-* Configure the Stock Broker service as a Transaction participant. A transaction participant service joins the transaction. The Stock Broker service initiates the transaction, and then participates in it. After starting a transaction to buy or sell shares, the Stock Broker service also participates in the transaction to deposit or withdraw the shares from a user's account.
+* Configure the Stock Broker service as a transaction initiator. A transaction initiator service starts and ends a transaction.
+* Configure the Stock Broker service as a transaction participant. A transaction participant service joins the transaction. The Stock Broker service initiates the transaction, and then participates in it. After starting a transaction to buy or sell shares, the Stock Broker service also participates in the transaction to deposit or withdraw the shares from a user's account.
### Prerequisites
@@ -39,42 +39,44 @@ This lab assumes you have:
Uncomment all the lines of code in the following files to integrate the functionality provided by the MicroTx client libraries with the Stock Broker application.
-* `pom.xml` file located in the `/home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/StockBroker/` folder
+* `pom.xml` file located in the `/home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp/StockBroker/` folder
* `UserStockTransactionServiceImpl.java` file located in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application
The following section provides reference information about each line of code that you must uncomment and its purpose. You can skip reading this section if you only want to quickly uncomment the code and run the application. You can return to this section later to understand the purpose of each line of code that you uncomment.
-1. Include the MicroTx library as a maven dependency in the application's `pom.xml` file. Open the `pom.xml` file which is in the `/home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/StockBroker/` folder in any code editor, and then uncomment the following lines of code. The following sample code is for the 22.3.2 release. Provide the correct version, based on the release that you want to use.
+1. Include the MicroTx library as a Maven dependency in the application's `pom.xml` file. Open the `pom.xml` file, which is in the `/home/oracle/OTMM/otmm-23.4.1/samples/xa/java/bankapp/StockBroker/` folder, in any code editor, and then uncomment the following lines of code. The following sample code is for the 23.4.1 release. Provide the correct version, based on the release that you want to use.
```
<groupId>com.oracle.tmm.jta</groupId>
- <artifactId>TmmLib</artifactId>
- <version>22.3.2</version>
+ <artifactId>microtx-spring-boot-starter</artifactId>
+ <version>23.4.1</version>
```
2. Open the `UserStockTransactionServiceImpl.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
-3. Uncomment the following line of code to import the `oracle.tmm.jta.TrmUserTransaction` package.
+3. Uncomment the following lines of code to import the required packages.
**Sample command**
```java
- import oracle.tmm.jta.TrmUserTransaction;
+ import jakarta.transaction.*;
+ import com.oracle.microtx.xa.rm.MicroTxUserTransactionService;
```
-4. Uncomment the following line of code to initialize an object of the `TrmUserTransaction` class in the application code for every new transaction. This object demarcates the transaction boundaries, which are begin, commit, or roll back. In your application code you must create this object before you begin a transaction.
+4. Uncomment the following lines of code to autowire an object of the `MicroTxUserTransactionService` class in the application code. This object demarcates the transaction boundaries: begin, commit, and rollback. Autowire the `MicroTxUserTransactionService` class before your application logic begins a transaction.
**Sample command**
```java
- TrmUserTransaction transaction = new TrmUserTransaction();
+ @Autowired
+ MicroTxUserTransactionService microTxUserTransaction;
```
@@ -84,7 +86,7 @@ The following section provides reference information about each line of code tha
```java
- transaction.begin(true);
+ microTxUserTransaction.begin(true);
```
@@ -94,18 +96,19 @@ The following section provides reference information about each line of code tha
```java
- transaction.rollback();
- transaction.commit();
+ microTxUserTransaction.rollback();
+ microTxUserTransaction.commit();
```
-7. Uncomment the following line of code under `sell()` to create an instance of the `TrmUserTransaction` object to sell stocks.
+7. Uncomment the following lines of code under `sell()` to autowire an instance of the `MicroTxUserTransactionService` class, which demarcates the transaction to sell stocks. Autowire the `MicroTxUserTransactionService` class before your application logic begins a transaction.
**Sample command**
```java
- TrmUserTransaction transaction = new TrmUserTransaction();
+ @Autowired
+ MicroTxUserTransactionService microTxUserTransaction;
```
@@ -115,7 +118,7 @@ The following section provides reference information about each line of code tha
```java
- transaction.begin(true);
+ microTxUserTransaction.begin(true);
```
@@ -125,11 +128,25 @@ The following section provides reference information about each line of code tha
```java
- transaction.rollback();
- transaction.commit();
+ microTxUserTransaction.rollback();
+ microTxUserTransaction.commit();
```
+10. Uncomment the catch blocks in the `buy()` and `sell()` methods.
+
+11. Uncomment the following lines of code in the `BankUtility.java` file, located in the `/com/oracle/tmm/stockbroker/utils/` package of the `StockBroker` application, to inject the Spring Boot REST template provided by MicroTx.
+
+ **Sample command**
+
+ ```java
+
+ @Autowired
+ @Qualifier("MicroTxXaRestTemplate")
+ RestTemplate restTemplate;
+
+ ```
+
## Task 2: Configure the Stock Broker Application as a Transaction Participant
Since the Stock broker application participates in the transaction in addition to initiating the transaction, you must make additional configurations for the application to participate in the transaction and communicate with its resource manager.
@@ -137,13 +154,12 @@ Since the Stock broker application participates in the transaction in addition t
When you integrate the MicroTx client library for Java with the Stock broker application, the library performs the following functions:
* Enlists the participant service with the transaction coordinator.
-* Injects an `XADataSource` object for the participant application code to use through dependency injection. The MicroTx libraries automatically inject the configured data source into the participant services, so you must add the `@Inject` or `@Context` annotation to the application code. The application code runs the DML using this connection.
+* Injects an `XADataSource` object for the participant application code to use through dependency injection. The MicroTx libraries automatically inject the configured data source into the participant services, so you must autowire the connection object with the `microTxSqlConnection` bean qualifier. The application code runs the DML using this connection.
* Calls the resource manager to perform operations.
Uncomment all the lines of code in the following files:
* `DatasourceConfigurations.java` file located in the `/com/oracle/tmm/stockbroker` package of the `StockBroker` application.
-* `TMMConfigurations.java` file located in the `/com/oracle/tmm/stockbroker` package of the `StockBroker` application.
* `AccountServiceImpl.java` file located in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
* `StockBrokerTransactionServiceImpl.java` file located in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
@@ -153,6 +169,14 @@ To configure the Stock Broker application as a transaction participant:
1. Open the `DatasourceConfigurations.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker` package of the `StockBroker` application.
+2. Uncomment the following line of code to import the `com.oracle.microtx.common.MicroTxConfig` class.
+
+ ```java
+
+ import com.oracle.microtx.common.MicroTxConfig;
+
+ ```
+
2. Uncomment the following lines of code in the transaction participant function or block to create a `PoolXADataSource` object and provide credentials and other details to connect to the resource manager. This object is used by the MicroTx client library.
```java
@@ -170,6 +194,8 @@ To configure the Stock Broker application as a transaction participant:
xapds.setMinPoolSize(Integer.valueOf(minPoolSize));
xapds.setInitialPoolSize(Integer.valueOf(initialPoolSize));
xapds.setMaxPoolSize(Integer.valueOf(maxPoolSize));
+ //Initialize the XA data source object
+ MicroTxConfig.initXaDataSource(xapds);
} catch (SQLException ea) {
log.severe("Error connecting to the database: " + ea.getMessage());
}
@@ -179,204 +205,48 @@ To configure the Stock Broker application as a transaction participant:
```
- It is your responsibility as an application developer to ensure that an XA-compliant JDBC driver and required parameters are set up while creating the `PoolXADataSource` object.
+ It is your responsibility as an application developer to ensure that an XA-compliant JDBC driver and required parameters are set up while creating the `PoolXADataSource` object. The MicroTx client library uses the `XADataSource` object to create database connections.
-3. Open the `TMMConfigurations.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker` package of the `StockBroker` application.
-
-4. Uncomment the following lines of code to import the following packages.
-
- ```java
-
- import oracle.tmm.common.TrmConfig;
- import oracle.tmm.jta.XAResourceCallbacks;
- import oracle.tmm.jta.common.TrmConnectionFactory;
- import oracle.tmm.jta.common.TrmSQLConnection;
- import oracle.tmm.jta.common.TrmXAConnection;
- import oracle.tmm.jta.common.TrmXAConnectionFactory;
- import oracle.tmm.jta.common.TrmXASQLStatementFactory;
- import oracle.tmm.jta.filter.TrmTransactionRequestFilter;
- import oracle.tmm.jta.filter.TrmTransactionResponseFilter;
- import oracle.ucp.jdbc.PoolXADataSource;
- import org.glassfish.jersey.internal.inject.AbstractBinder;
- import org.springframework.beans.factory.annotation.Autowired;
- import org.springframework.context.annotation.Bean;
- import org.springframework.context.annotation.Lazy;
- import org.springframework.web.context.annotation.RequestScope;
- import javax.sql.XAConnection;
- import java.sql.Connection;
- import java.sql.Statement;
-
- ```
+10. Open the `AccountServiceImpl.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
-4. Uncomment the following lines of code to create a `PoolXADatasource` object. `PoolXADatasource` is an interface defined in JTA whose implementation is provided by the JDBC driver. The MicroTx client library uses this object to connect to database to start XA transactions and perform various operations such as prepare, commit, and rollback. The MicroTx library also provides a SQL connection object to the application code to execute DML using dependency injection.
+12. Uncomment the following lines of code so that the application uses the connection passed by the MicroTx client library. The following code in the participant application autowires the connection object bean `microTxSqlConnection` that is managed by the MicroTx client library.
```java
@Autowired
- private PoolXADataSource poolXADataSource;
-
- ```
-
-5. Register the listeners, XA resource callback, filters for MicroTx libraries, and MicroTx XA connection bindings.
-
- ```java
-
- //Register the MicroTx XA Resource callback that coordinates with the transaction coordinator
- register(XAResourceCallbacks.class);
-
- // filters for the MicroTx libraries that intercept the JAX-RS calls and manages the XA Transactions
- register(TrmTransactionRequestFilter.class);
- register(TrmTransactionResponseFilter.class);
-
- // MicroTx XA connection Bindings
- register(new AbstractBinder() {
- @Override
- protected void configure() {
- bindFactory(TrmConnectionFactory.class).to(Connection.class);
- bindFactory(TrmXASQLStatementFactory.class).to(Statement.class);
- }
- });
-
- ```
-
-6. Uncomment the following line of code in the `init()` method to initialize an XA data source object.
- ```java
-
- initializeOracleXADataSource();
-
- ```
-
-7. Uncomment the following line of code to call the XA data source object that you have initialized.
-
- ```java
-
- private void initializeOracleXADataSource() {
- TrmConfig.initXaDataSource(this.poolXADataSource);
- }
-
- ```
-
-8. Initialize a Bean for the `TrmSQLConnection` object and `TrmXAConnection` object.
-
- ```java
-
- // Register the MicroTx TrmSQLConnection object bean
- @Bean
- @TrmSQLConnection
+ @Qualifier("microTxSqlConnection")
@Lazy
- @RequestScope
- public Connection tmmSqlConnectionBean(){
- return new TrmConnectionFactory().get();
- }
-
- // Register the MicroTx TrmXaConnection object bean
- @Bean
- @TrmXAConnection
- @Lazy
- @RequestScope
- public XAConnection tmmSqlXaConnectionBean(){
- return new TrmXAConnectionFactory().get();
- }
-
- ```
-
-9. Save the changes.
-
-10. Open the `AccountServiceImpl.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
-
-11. Uncomment the following lines of code to import the required packages.
-
- ```java
-
- import javax.inject.Inject;
- import oracle.tmm.jta.common.TrmSQLConnection;
-
- ```
-
-12. Uncomment the following lines of code so that the application uses the connection passed by the MicroTx client library. The following code in the participant application injects the `connection` object that is created by the MicroTx client library.
-
- ```java
-
- @Inject
- @TrmSQLConnection
private Connection connection;
```
-13. Delete all the occurrences of the following line of code as the connection is managed by the MicroTx client library.
-
- ```java
-
- Connection connection = poolDataSource.getConnection();
-
- ```
-
14. Save the changes.
15. Open the `StockBrokerTransactionServiceImpl.java` file in any code editor. This file is in the `/com/oracle/tmm/stockbroker/service/impl/` package of the `StockBroker` application.
-16. Uncomment the following lines of code to import the required packages.
+17. Uncomment the following lines of code so that the application uses the connection passed by the MicroTx client library. The following code in the participant application autowires the connection object bean `microTxSqlConnection` that is managed by the MicroTx client library.
```java
- import javax.inject.Inject;
- import oracle.tmm.jta.common.TrmSQLConnection;
-
- ```
-
-17. Uncomment the following lines of code so that the application uses the connection passed by the MicroTx client library. The following code in the participant application injects the `connection` object that is created by the MicroTx client library.
-
- ```java
-
- @Inject
- @TrmSQLConnection
+ @Autowired
+ @Qualifier("microTxSqlConnection")
+ @Lazy
private Connection connection;
```
-18. Delete all the occurrences of the following line of code as the connection is managed by the MicroTx client library.
-
- ```java
-
- Connection connection = poolDataSource.getConnection();
-
- ```
-
19. Save the changes.
## Task 3: Enable Transaction History (Optional)
You can register your initiator and participant services to receive notifications when an event occurs. To achieve this you must perform the additional steps described in this task.
-1. Uncomment the `BuyStockEventListenerResource.java` and `SellStockEventListenerResource.java` classes, located in the `/com/oracle/tmm/stockbroker/listeners/` package of the `StockBroker` application. The `StockBroker` application files are available in the `/home/oracle/microtx/otmm-22.3.2/samples/xa/java/bankapp/StockBroker/` folder.
-
-2. Uncomment the `TransactionEventsUtility.java` class, located in the `/com/oracle/tmm/stockbroker/utils/` package of the `StockBroker` application.
-
-3. Update the `TMMConfigurations.java` file, located in the `/com/oracle/tmm/stockbroker` package of the `StockBroker` application.
-
- 1. Add the following lines of code to import the listeners that you have uncommented.
-
- ```java
-
- import com.oracle.tmm.stockbroker.listeners.BuyStockEventListenerResource;
- import com.oracle.tmm.stockbroker.listeners.SellStockEventListenerResource;
-
- ```
-
- 2. Add the following lines of code within the `TMMConfigurations()` method to register the `BuyStockEventListenerResource.java` and `SellStockEventListenerResource.java` classes.
-
- ```java
-
- ...
- register(BuyStockEventListenerResource.class);
- register(SellStockEventListenerResource.class);
- ...
-
- ```
+1. Uncomment the `TransactionEventsUtility.java` class, located in the `/com/oracle/tmm/stockbroker/utils/` package of the `StockBroker` application.
+The `TransactionEventsUtility.java` class registers the events and you can use the `BuyStockEventListenerResource.java` and `SellStockEventListenerResource.java` classes to listen to the transaction events.
-4. Update the `UserStockTransactionServiceImpl.java` class, located in the `/com/oracle/tmm/stockbroker/service/impl` package of the `StockBroker` application. Add the following lines of code to register the transaction events within the transaction boundary. Note that you must register the transaction event after the transaction begins.
+2. Update the `UserStockTransactionServiceImpl.java` class, located in the `/com/oracle/tmm/stockbroker/service/impl` package of the `StockBroker` application. Add the following lines of code to register the transaction events within the transaction boundary. Note that you must register the transaction event after the transaction begins.
- 1. Add the following line of code to import the `TransactionEventsUtility` package.
+ 1. Add the following lines of code to import the required packages.
```java
@@ -402,10 +272,10 @@ You can register your initiator and participant services to receive notification
-    TrmUserTransaction transaction = new TrmUserTransaction();
BuyResponse buyResponse = new BuyResponse();
try {
- transaction.begin(true);
+ microTxUserTransaction.begin(true);
// Add the following line of code after the transaction begins.
transactionEventsUtility.registerStockTransactionEvents(buyStock);
- buyResponse.setTransactionId(transaction.getTransactionID());
+ buyResponse.setTransactionId(microTxUserTransaction.getTransactionID());
...
}
@@ -420,10 +290,10 @@ You can register your initiator and participant services to receive notification
-    TrmUserTransaction transaction = new TrmUserTransaction();
SellResponse sellResponse = new SellResponse();
try {
- transaction.begin(true);
+ microTxUserTransaction.begin(true);
// Add the following line of code after the transaction begins.
transactionEventsUtility.registerStockTransactionEvents(sellStock);
- sellResponse.setTransactionId(transaction.getTransactionID());
+ sellResponse.setTransactionId(microTxUserTransaction.getTransactionID());
...
}
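The begin-then-register ordering that the snippets above require can be sketched with stand-in classes. These stubs are illustrative only, not the real MicroTx `TrmUserTransaction` or `TransactionEventsUtility` API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the transaction object: records whether begin() has run,
// so the ordering rule (register events only after begin) can be checked.
class StubUserTransaction {
    boolean active = false;
    void begin(boolean tryJoin) { active = true; }
    String getTransactionID() { return "txn-1"; }
    void commit() { active = false; }
}

// Stand-in for the events utility: refuses to register events
// before the transaction has begun.
class StubEventsUtility {
    final List<String> registered = new ArrayList<>();
    void registerStockTransactionEvents(String stockOrder, StubUserTransaction txn) {
        if (!txn.active) {
            throw new IllegalStateException("register events only after the transaction begins");
        }
        registered.add(stockOrder);
    }
}

public class EventOrderingSketch {
    public static void main(String[] args) {
        StubUserTransaction txn = new StubUserTransaction();
        StubEventsUtility events = new StubEventsUtility();
        txn.begin(true);                                            // 1. begin the transaction
        events.registerStockTransactionEvents("BUY ORCL x10", txn); // 2. then register events
        String response = txn.getTransactionID();                   // 3. return the id in the response
        txn.commit();
        System.out.println(response + " events=" + events.registered.size());
    }
}
```

Registering before `begin()` throws in this sketch, mirroring the note in the step above that the event must be registered inside the transaction boundary.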
diff --git a/microtx-xa-stock-broker-app/introduction/images/stock_broker_xa_app.png b/microtx-xa-stock-broker-app/introduction/images/stock_broker_xa_app.png
index d7f19e45e..9a268a9a5 100644
Binary files a/microtx-xa-stock-broker-app/introduction/images/stock_broker_xa_app.png and b/microtx-xa-stock-broker-app/introduction/images/stock_broker_xa_app.png differ
diff --git a/microtx-xa-stock-broker-app/introduction/introduction.md b/microtx-xa-stock-broker-app/introduction/introduction.md
index 177206e53..9d37ada43 100644
--- a/microtx-xa-stock-broker-app/introduction/introduction.md
+++ b/microtx-xa-stock-broker-app/introduction/introduction.md
@@ -6,7 +6,7 @@ As organizations rush to adopt microservices architecture, they often run into p
In this workshop, you will learn how to use MicroTx to maintain data consistency across several microservices by deploying and running a Bank and Stock-Trading application. This application contains several microservices and it uses distributed, two-phase commit transaction (XA). It is very simple to use MicroTx. After installing MicroTx, you only need to integrate the MicroTx libraries with your application code to manage transactions. In this workshop, you will learn how you can integrate the MicroTx client libraries with the Bank and Stock-Trading application. During the transaction, each microservice also makes updates to a resource manager to track the change in the amount and stocks. When you run the Bank and Stock-Trading application, you will be able to see how MicroTx ensures consistency of transactions across the distributed microservices and their resource managers. You will also integrate MicroTx with the Kubernetes ecosystem by using tools, such as Kiali and Jaeger, to visualize the flow of requests between MicroTx and the microservices.
-### About the Bank and Stock-Trading application
+### About the Bank and Stock-Trading Application
The Bank and Stock-Trading application demonstrates how you can develop microservices that participate in a distributed transaction while using MicroTx to coordinate the requests. You can use the application to withdraw or deposit an amount, as well as buy and sell stocks. Since financial applications that move funds require strong global consistency, the application uses XA transaction protocol.
@@ -14,24 +14,23 @@ When a user purchases stocks using the Stock Broker service, the application wit
Participant microservices must use the MicroTx client libraries which registers callbacks and provides implementation of the callbacks for the resource manager. As shown in the following image, MicroTx communicates with the resource managers to commit or roll back the transaction. MicroTx connects with each resource manager involved in the transaction to prepare, commit, or rollback the transaction. The participant service provides the credentials to the coordinator to access the resource manager.
-The following figure shows the various microservices in the Bank and Stock-Trading application. Some microservices connect to an Autonomous Transaction Processing Serverless (ATP-S) instance or resource manager. Resource managers manage stateful resources such as databases, queuing or messaging systems, and caches.
+The following figure shows the various microservices in the Bank and Stock-Trading application. Some microservices connect to a resource manager. Resource managers manage stateful resources such as databases, queuing or messaging systems, and caches.
![Microservices in Bank and Stock-Trading application](./images/stock_broker_xa_app.png)
* The MicroTx coordinator manages transactions amongst the participant services.
-* The Stock Broker microservice initiates the transactions, so it is called a transaction initiator service. The user interacts with this microservice to buy and sell shares. When a new request is created, the helper method that is exposed in the MicroTx library runs the begin() method to start the transaction. This microservice also contains the business logic to issue the commit and roll back calls. After initiating the transaction, the Stock Broker service also participates in the transaction. In this lab, you will learn to configure the Stock Broker microservice as an initiator and as a participant service. It uses resources from the Stock Broker Service ATP instance.
+* The Stock Broker microservice initiates the transactions, so it is called a transaction initiator service. The user interacts with this microservice to buy and sell shares. When a new request is created, the helper method that is exposed in the MicroTx library runs the `begin()` method to start the transaction. This microservice also contains the business logic to issue the commit and rollback calls. After starting a transaction to buy or sell shares, the Stock Broker service also participates in the transaction to deposit or withdraw the shares from a user's account. In this lab, you will learn to configure the Stock Broker microservice as an initiator and as a participant service.
-* The Core Banking, Branch Banking, and User Banking services participate in the transactions related to the trade in stocks, so they are called participant services. They do not initiate the transaction to buy or sell stocks. The MicroTx library includes headers that enable the participant services to automatically enlist in the transaction. These microservices expose REST APIs to get the account balance and to withdraw or deposit money from a specified account. Core Banking and Branch Banking services also use resources from the Banking Service ATP instance. The MicroTx client library files are already integrated with the Core Banking, Branch Banking, and User Banking services.
+* The Core Banking, Branch Banking, and User Banking services participate in the transactions related to the trade in stocks, so they are called participant services. They do not initiate the transaction to buy or sell stocks. The MicroTx library includes headers that enable the participant services to automatically enlist in the transaction. These microservices expose REST APIs to get the account balance and to withdraw or deposit money from a specified account. The MicroTx client library files are already integrated with the Core Banking, Branch Banking, and User Banking services.
The service must meet ACID requirements, so withdraw amount, transfer amount, deposit stocks, sell stocks, debit amount, or credit amount are called in the context of an XA transaction.
-Estimated Workshop Time: 1 hours 30 minutes
+Estimated Workshop Time: 62 minutes
### Objectives
In this workshop, you will learn how to:
-* Provision Oracle Autonomous Database instances and use them as resource managers for microservices.
* Configure the required properties so that MicroTx can connect to the resource manager and microservices.
* Include the MicroTx client libraries in your application to configure your Java application as a transaction initiator service. A transaction initiator service starts and ends a transaction.
* Include the MicroTx client libraries in your application to configure your Java application as a transaction participant. A transaction participant service only joins the transaction. They do not initiate a transaction.
@@ -42,16 +41,17 @@ In this workshop, you will learn how to:
This lab assumes you have:
- An Oracle Cloud account
+- At least 4 OCPUs, 24 GB of memory, and 128 GB of boot volume storage available in your Oracle Cloud Infrastructure tenancy to run the Bank and Stock-Trading application
Let's begin! If you need to create an Oracle Cloud account, click **Get Started** in the **Contents** menu on the left. Otherwise, if you have an existing account, click **Lab 1**.
## Learn More
-* [Oracle® Transaction Manager for Microservices Developer Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/index.html)
-* [Oracle® Transaction Manager for Microservices Quick Start Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmqs/index.html)
+* [Oracle® Transaction Manager for Microservices Developer Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/index.html)
+* [Oracle® Transaction Manager for Microservices Quick Start Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmqs/index.html)
## Acknowledgements
-* **Author** - Sylaja Kannan, Principal User Assistance Developer
-* **Contributors** - Brijesh Kumar Deo
-* **Last Updated By/Date** - Sylaja Kannan, July 2023
+* **Author** - Sylaja Kannan, Consulting User Assistance Developer
+* **Contributors** - Brijesh Kumar Deo and Bharath MC
+* **Last Updated By/Date** - Sylaja Kannan, November 2023
diff --git a/microtx-xa-stock-broker-app/run-xa-app/images/stock-broker-xa-app.png b/microtx-xa-stock-broker-app/run-xa-app/images/stock-broker-xa-app.png
index d7f19e45e..9a268a9a5 100644
Binary files a/microtx-xa-stock-broker-app/run-xa-app/images/stock-broker-xa-app.png and b/microtx-xa-stock-broker-app/run-xa-app/images/stock-broker-xa-app.png differ
diff --git a/microtx-xa-stock-broker-app/run-xa-app/run-xa-app.md b/microtx-xa-stock-broker-app/run-xa-app/run-xa-app.md
index 54dee4008..a7391cf7c 100644
--- a/microtx-xa-stock-broker-app/run-xa-app/run-xa-app.md
+++ b/microtx-xa-stock-broker-app/run-xa-app/run-xa-app.md
@@ -25,8 +25,7 @@ This lab assumes you have:
* Lab 1: Prepare setup
* Lab 2: Set Up the Environment
* Lab 3: Integrate MicroTx Client Libraries with the Stock Broker Microservice
- * Lab 4: Provision Autonomous Databases for Use as Resource Manager
- * Lab 5: Deploy the Bank and Stock-Trading Application
+ * Lab 4: Deploy the Bank and Stock-Trading Application
* Logged in using remote desktop URL as an `oracle` user. If you have connected to your instance as an `opc` user through an SSH terminal using auto-generated SSH Keys, then you must switch to the `oracle` user before proceeding with the next step.
```
@@ -122,11 +121,11 @@ When you send a request to sell stocks, the Stock Broker service sells the stock
## Task 4: View Service Mesh graph and Distributed Traces (Optional)
Perform this task only if you have deployed Kiali and Jaeger in your cluster.
-To visualize what happens behind the scenes and how a trip booking request is processed by the distributed services, you can use the Kiali and Jaeger Dashboards that you started in Lab 5.
+To visualize what happens behind the scenes and how a request to purchase or sell stocks is processed by the distributed services, you can use the Kiali and Jaeger Dashboards that you started in the previous lab.
-1. Open a new browser tab and navigate to the Kiali dashboard URL -
+1. Open a new browser tab and navigate to the Kiali dashboard URL. For example, `http://localhost:20001/kiali`.
2. Select Graph for the otmm namespace.
-3. Open a new browser tab and navigate to the Jaeger dashboard URL -
+3. Open a new browser tab and navigate to the Jaeger dashboard URL. For example, `http://localhost:16686`.
4. In the **Service** drop-down list, select **istio-ingressgateway**. A list of traces is displayed where each trace represents a request.
5. Select a trace to view it.
diff --git a/microtx-xa-stock-broker-app/setup-compute/images/main-config-compute.png b/microtx-xa-stock-broker-app/setup-compute/images/main-config-compute.png
index d67a4055a..b25bbb22b 100644
Binary files a/microtx-xa-stock-broker-app/setup-compute/images/main-config-compute.png and b/microtx-xa-stock-broker-app/setup-compute/images/main-config-compute.png differ
diff --git a/microtx-xa-stock-broker-app/setup-compute/setup-compute-novnc-ssh.md b/microtx-xa-stock-broker-app/setup-compute/setup-compute-novnc-ssh.md
index 84e04167e..afe56d6ea 100644
--- a/microtx-xa-stock-broker-app/setup-compute/setup-compute-novnc-ssh.md
+++ b/microtx-xa-stock-broker-app/setup-compute/setup-compute-novnc-ssh.md
@@ -19,6 +19,7 @@ For more information about Terraform and Resource Manager, please see the append
This lab assumes you have:
- An Oracle Cloud account
- SSH Keys (optional)
+- At least 4 OCPUs, 24 GB of memory, and 128 GB of boot volume storage available in your Oracle Cloud Infrastructure tenancy to run the Bank and Stock-Trading application
- You have completed:
- Lab: Prepare Setup
@@ -32,7 +33,7 @@ Your options are:
## Task 1A: Create Stack: Compute + Networking
1. Identify the ORM stack zip file downloaded in *Lab: Prepare Setup*
2. Log in to Oracle Cloud
-3. Open up the hamburger menu in the top left corner. Click **Developer Services**, and choose **Resource Manager > Stacks**. Choose the compartment in which you would like to install the stack. Click **Create Stack**.
+3. Open up the hamburger menu in the top left corner. Click **Developer Services**, and choose **Resource Manager > Stacks**. Choose the compartment in which you would like to install the stack. Click **Create Stack**.
![Select Stacks](https://oracle-livelabs.github.io/common/images/console/developer-resmgr-stacks.png " ")
@@ -72,16 +73,9 @@ Your options are:
Depending on the quota you have in your tenancy you can choose from standard Compute shapes or Flex shapes. Please visit the Appendix: Troubleshooting Tips for instructions on checking your quota
- - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked (unless you plan on using a fixed shape)
- - **Instance Shape:** Keep the default or select from the list of Flex shapes in the dropdown menu (e.g *VM.Standard.E4.Flex*).
- - **Instance OCPUS:** Enter 3 to provision an instance with 3 OCPUs.
-
- If don't have the required quota for Flex Shapes or you prefer to use fixed shapes, follow the instructions below. Otherwise, skip to the next step.
-
- - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Unchecked
- - **Instance Shape:** Accept the default shown or select from the dropdown. e.g. VM.Standard2.2
-
- ![Use fixed shapes](./images/fixed-shape.png " ")
+ - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked.
+ - **Instance Shape:** Select VM.Standard.E4.Flex.
+ - **Instance OCPUS:** Enter 4. This provisions a VM with 4 OCPUs and 24 GB of memory.
7. For this section we will provision a new VCN with all the appropriate ingress and egress rules needed to run this workshop. If you already have a VCN, make sure it has all of the correct ingress and egress rules and skip to the next section.
- **Use Existing VCN?:** Accept the default by leaving this unchecked. This will create a **new VCN**.
@@ -151,16 +145,9 @@ If you just completed Task 1A, please proceed to Task 2. If you have an existin
Depending on the quota you have in your tenancy you can choose from standard Compute shapes or Flex shapes. Please visit the Appendix: Troubleshooting Tips for instructions on checking your quota
- - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked (unless you plan on using a fixed shape)
- - **Instance Shape:** Keep the default or select from the list of Flex shapes in the dropdown menu (e.g *VM.Standard.E4.Flex*).
- - **Instance OCPUS:** Accept the default shown. e.g. (**4**) will provision 4 OCPUs and 64GB of memory. You may also elect to reduce or increase the count by selecting from the dropdown. e.g. `[2-24]`. Please ensure you have the capacity available before increasing.
-
- If don't have the required quota for Flex Shapes or you prefer to use fixed shapes, follow the instructions below. Otherwise, skip to the next step.
-
- - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Unchecked
- - **Instance Shape:** Accept the default shown or select from the dropdown. e.g. VM.StandardE2.2
-
- ![Use fixed shapes](./images/fixed-shape.png " ")
+ - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked.
+ - **Instance Shape:** Select VM.Standard.E4.Flex.
+ - **Instance OCPUS:** Enter 4. This provisions a VM with 4 OCPUs and 24 GB of memory.
7. For this section we will an existing VNC. Please make sure it has all of the correct ingress and egress rules otherwise go back to *Task 1A* and deploy with a self-contained VCN.
- **Use Existing VCN?:** Check to select.
diff --git a/microtx-xa-stock-broker-app/workshops/desktop/manifest.json b/microtx-xa-stock-broker-app/workshops/desktop/manifest.json
index 8c2843c51..ff2457150 100644
--- a/microtx-xa-stock-broker-app/workshops/desktop/manifest.json
+++ b/microtx-xa-stock-broker-app/workshops/desktop/manifest.json
@@ -22,15 +22,11 @@
"filename": "../../integrate-microtx-lib-files/integrate-microtx-lib-files.md"
},
{
- "title": "Lab 2: Provision Autonomous Databases for Use as Resource Manager",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 3: Deploy the Bank and Stock-Trading Application",
+ "title": "Lab 2: Deploy the Bank and Stock-Trading Application",
"filename": "../../deploy-stock-trading-app/deploy-stock-trading-app.md"
},
{
- "title": "Lab 4: Trade Stocks with the Bank and Stock-Trading Application",
+ "title": "Lab 3: Trade Stocks with the Bank and Stock-Trading Application",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
diff --git a/microtx-xa-stock-broker-app/workshops/sandbox/manifest.json b/microtx-xa-stock-broker-app/workshops/sandbox/manifest.json
index 4ff66de99..2391f00ae 100644
--- a/microtx-xa-stock-broker-app/workshops/sandbox/manifest.json
+++ b/microtx-xa-stock-broker-app/workshops/sandbox/manifest.json
@@ -22,16 +22,11 @@
"filename": "../../integrate-microtx-lib-files/integrate-microtx-lib-files.md"
},
{
- "title": "Lab 3: Provision Autonomous Databases for Use as Resource Manager",
- "type": "sandbox",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 4: Deploy the Bank and Stock-Trading Application",
+ "title": "Lab 3: Deploy the Bank and Stock-Trading Application",
"filename": "../../deploy-stock-trading-app/deploy-stock-trading-app.md"
},
{
- "title": "Lab 5: Trade Stocks with the Bank and Stock-Trading Application",
+ "title": "Lab 4: Trade Stocks with the Bank and Stock-Trading Application",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
diff --git a/microtx-xa-stock-broker-app/workshops/tenancy/manifest.json b/microtx-xa-stock-broker-app/workshops/tenancy/manifest.json
index 87bf0972f..5fb88031c 100644
--- a/microtx-xa-stock-broker-app/workshops/tenancy/manifest.json
+++ b/microtx-xa-stock-broker-app/workshops/tenancy/manifest.json
@@ -27,22 +27,17 @@
"filename": "../../integrate-microtx-lib-files/integrate-microtx-lib-files.md"
},
{
- "title": "Lab 4: Provision Autonomous Databases for use as Resource Manager",
- "type": "tenancy",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 5: Deploy the Bank and Stock-Trading Application",
+ "title": "Lab 4: Deploy the Bank and Stock-Trading Application",
"filename": "../../deploy-stock-trading-app/deploy-stock-trading-app.md"
},
{
- "title": "Lab 6: Trade Stocks with the Bank and Stock-Trading Application",
+ "title": "Lab 5: Trade Stocks with the Bank and Stock-Trading Application",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
"description": "Cleanly dispose all OCI resources created by ORM for the workshop, and delete the stack",
"filename": "https://oracle-livelabs.github.io/common/labs/cleanup-stack/cleanup-stack.md",
- "title": "Lab 7: Clean Up"
+ "title": "Lab 6: Clean Up"
},
{
"title": "Need help?",
diff --git a/multitenant-tde/introduction/introduction.md b/multitenant-tde/introduction/introduction.md
index 95cc385bf..e2090d700 100644
--- a/multitenant-tde/introduction/introduction.md
+++ b/multitenant-tde/introduction/introduction.md
@@ -29,5 +29,5 @@ You may now proceed to the next lab.
## Acknowledgements
- **Authors** - Sean Provost, Enterprise Architect
-- **Contributors** - Mike Sweeney, Bryan Grenn, Bill Pritchett, Rene Fontcha
+- **Contributors** - Mike Sweeney, Bryan Grenn, Bill Pritchett, Divit Gupta, Rene Fontcha
- **Last Updated By/Date** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, August 2023
diff --git a/multitenant-tde/mt-key-mgmt/mt-key-mgmt.md b/multitenant-tde/mt-key-mgmt/mt-key-mgmt.md
index c464b8b49..1b538e65c 100644
--- a/multitenant-tde/mt-key-mgmt/mt-key-mgmt.md
+++ b/multitenant-tde/mt-key-mgmt/mt-key-mgmt.md
@@ -103,7 +103,7 @@ We start off with an unencrypted database and will be validating that state in t
- You can see the default location of the wallet file.
- The wallet status will be given.
- You can see there is no wallet that has been created yet.
-
+
At this point CDB1 does not know about a wallet or encryption.
@@ -727,5 +727,5 @@ At this point neither database knows about encryption and there is no wallet set
## Acknowledgements
- **Authors** - Sean Provost, Enterprise Architect
-- **Contributors** - Mike Sweeney, Bryan Grenn, Bill Pritchett, Rene Fontcha
+- **Contributors** - Mike Sweeney, Bryan Grenn, Bill Pritchett, Divit Gupta, Rene Fontcha
- **Last Updated By/Date** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, August 2023
diff --git a/patch-me-if-you-can/00-prepare-setup/00-prepare-setup.md b/patch-me-if-you-can/00-prepare-setup/00-prepare-setup.md
new file mode 100644
index 000000000..836d6dfa9
--- /dev/null
+++ b/patch-me-if-you-can/00-prepare-setup/00-prepare-setup.md
@@ -0,0 +1,65 @@
+# Prepare Setup
+
+## Introduction
+
+In this lab, you will download the Oracle Resource Manager (ORM) stack zip file needed to set up the resources for this workshop. This workshop requires a compute instance and a Virtual Cloud Network (VCN).
+
+Estimated Time: 15 minutes
+
+### Objectives
+
+- Download ORM stack
+- Configure an existing Virtual Cloud Network (VCN)
+
+### Prerequisites
+
+This lab assumes you have:
+
+- An Oracle Cloud account
+
+## Task 1: Download Oracle Resource Manager (ORM) stack zip file
+
+1. Click on the link below to download the Resource Manager zip file you need to build your environment: [patch-me-if-you-can.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/upgrade-and-patching/patch-me-if-you-can.zip)
+
+2. Save it in your downloads folder.
+
+We strongly recommend using this stack to create a self-contained, dedicated VCN with your instance(s). Skip to *Task 3* to follow our recommendation. If you would rather use an existing VCN, proceed to the next task to update it with the required ingress rules.
+
+## Task 2: Adding security rules to an existing VCN
+
+This workshop requires a certain number of ports to be available, a requirement that can be met by using the default ORM stack execution that creates a dedicated VCN. To use an existing VCN, add ingress rules for the following ports:
+
+| Port |Description |
+| :------------- | :------------------------------------ |
+| 22 | SSH |
+| 6080 | Remote Desktop (noVNC) |
+
+1. Go to *Networking >> Virtual Cloud Networks*
+
+2. Choose your network
+
+3. Under Resources, select Security Lists
+
+4. Click the default security list shown under the Create Security List button
+
+5. Click Add Ingress Rule button
+
+6. Enter the following:
+ - Source CIDR: 0.0.0.0/0
+ - Destination Port Range: *Refer to above table*
+
+7. Click the Add Ingress Rules button
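The two table rows above can also be expressed programmatically. This sketch builds each rule in the shape of an OCI ingress security rule; the key names mirror the OCI networking API, but treat it as an illustration rather than a ready-to-send payload:

```java
import java.util.List;
import java.util.Map;

public class IngressRules {
    // One row of the port table above: allow TCP from anywhere
    // to a single destination port ("6" is the protocol number for TCP).
    static Map<String, Object> ingressRule(int port, String description) {
        return Map.of(
            "protocol", "6",
            "source", "0.0.0.0/0",
            "tcpOptions", Map.of("destinationPortRange", Map.of("min", port, "max", port)),
            "description", description);
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rules = List.of(
            ingressRule(22, "SSH"),
            ingressRule(6080, "Remote Desktop (noVNC)"));
        rules.forEach(System.out::println);
    }
}
```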
+
+## Task 3: Setup compute
+
+Using the details from the two tasks above, proceed to the lab *Environment Setup* to set up your workshop environment using Oracle Resource Manager (ORM) and one of the following options:
+ - Create Stack: *Compute + Networking*
+ - Create Stack: *Compute only* with an existing VCN where security lists have been updated as per *Task 2* above
+
+You may now *proceed to the next lab*.
+
+## Acknowledgements
+
+* **Author** - Rene Fontcha, LiveLabs Platform Lead, NA Technology
+* **Contributors** - Meghana Banka, Rene Fontcha, Narayanan Ramakrishnan
+* **Last Updated By/Date** - Rene Fontcha, LiveLabs Platform Lead, NA Technology, January 2021
diff --git a/patch-me-if-you-can/01-installation-patching/01-installation-and-patching.md b/patch-me-if-you-can/01-installation-patching/01-installation-and-patching.md
index 1b04759b9..75d2678ef 100644
--- a/patch-me-if-you-can/01-installation-patching/01-installation-and-patching.md
+++ b/patch-me-if-you-can/01-installation-patching/01-installation-and-patching.md
@@ -138,113 +138,112 @@ You can either copy & paste the entire command (first option) or call a script (
NOTE: *While the installation is ongoing, please switch to the 19.18 tab and continue with the next lab. You will execute the "root.sh" script in one of the next labs.*
-1. Option - Shell Script. For simplicity, you can run the following shell script which does the installation. Otherwise, in option 2 you can run the command yourself.
+1. Option - Shell Script
+
+   *Run this shell script only if you do not want to copy and paste the complete runInstaller command from option 2.*
+
   First, examine the script.
-
-   ```
+
+   ```
   cat /home/oracle/patch/install_patch.sh
-   ```
+   ```
-
+
   Then execute the script.
-   ```
+
+   ```
   sh /home/oracle/patch/install_patch.sh
+   ```
- ```
+```
*Click to see the output*
+```text
+   [CDB2] oracle@hol:/u01/app/oracle/product/1919
+ $ ./runInstaller -applyRU /home/oracle/stage/ru/35042068 \
+ > -applyOneOffs /home/oracle/stage/ojvm/35050341,/home/oracle/stage/dpbp/35261302,/home/oracle/stage/mrp/35333937/34340632,/home/oracle/stage/mrp/35333937/35012562,/home/oracle/stage/mrp/35333937/35037877,/home/oracle/stage/mrp/35333937/35116995,/home/oracle/stage/mrp/35333937/35225526 \
+ > -silent -ignorePrereqFailure -waitforcompletion \
+ > oracle.install.option=INSTALL_DB_SWONLY \
+ > UNIX_GROUP_NAME=oinstall \
+ > INVENTORY_LOCATION=/u01/app/oraInventory \
+ > ORACLE_HOME=/u01/app/oracle/product/1919 \
+ > ORACLE_BASE=/u01/app/oracle \
+ > oracle.install.db.InstallEdition=EE \
+ > oracle.install.db.OSDBA_GROUP=dba \
+ > oracle.install.db.OSOPER_GROUP=dba \
+ > oracle.install.db.OSBACKUPDBA_GROUP=dba \
+ > oracle.install.db.OSDGDBA_GROUP=dba \
+ > oracle.install.db.OSKMDBA_GROUP=dba \
+ > oracle.install.db.OSRACDBA_GROUP=dba \
+ > SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
+ > DECLINE_SECURITY_UPDATES=true
+
+ Preparing the home to patch...
+ Applying the patch /home/oracle/stage/ru/35042068...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/ojvm/35050341...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/dpbp/35261302...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/mrp/35333937/34340632...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/mrp/35333937/35012562...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/mrp/35333937/35037877...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/mrp/35333937/35116995...
+ Successfully applied the patch.
+ Applying the patch /home/oracle/stage/mrp/35333937/35225526...
+ Successfully applied the patch.
+ The log can be found at: /u01/app/oraInventory/logs/InstallActions2023-06-29_12-40-26PM/installerPatchActions_2023-06-29_12-40-26PM.log
+ Launching Oracle Database Setup Wizard...
+
+ The response file for this session can be found at:
+ /u01/app/oracle/product/1919/install/response/db_2023-06-29_12-40-26PM.rsp
+
+ You can find the log of this install session at:
+ /u01/app/oraInventory/logs/InstallActions2023-06-29_12-40-26PM/installActions2023-06-29_12-40-26PM.log
+
+ As a root user, execute the following script(s):
+ 1. /u01/app/oracle/product/1919/root.sh
+
+ Execute /u01/app/oracle/product/1919/root.sh on the following nodes:
+ [hol]
+
+
+ Successfully Setup Software.
+ [CDB2] oracle@hol:/u01/app/oracle/product/1919
+ $
+
- ``` text
-
-The installation will take approximately 10 minutes.
-
-
-[CDB2] oracle@hol:/u01/app/oracle/product/1919
-$ ./runInstaller -applyRU /home/oracle/stage/ru/35042068 \
-> -applyOneOffs /home/oracle/stage/ojvm/35050341,/home/oracle/stage/dpbp/35261302,/home/oracle/stage/mrp/35333937/34340632,/home/oracle/stage/mrp/35333937/35012562,/home/oracle/stage/mrp/35333937/35037877,/home/oracle/stage/mrp/35333937/35116995,/home/oracle/stage/mrp/35333937/35225526 \
-> -silent -ignorePrereqFailure -waitforcompletion \
-> oracle.install.option=INSTALL_DB_SWONLY \
-> UNIX_GROUP_NAME=oinstall \
-> INVENTORY_LOCATION=/u01/app/oraInventory \
-> ORACLE_HOME=/u01/app/oracle/product/1919 \
-> ORACLE_BASE=/u01/app/oracle \
-> oracle.install.db.InstallEdition=EE \
-> oracle.install.db.OSDBA_GROUP=dba \
-> oracle.install.db.OSOPER_GROUP=dba \
-> oracle.install.db.OSBACKUPDBA_GROUP=dba \
-> oracle.install.db.OSDGDBA_GROUP=dba \
-> oracle.install.db.OSKMDBA_GROUP=dba \
-> oracle.install.db.OSRACDBA_GROUP=dba \
-> SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
-> DECLINE_SECURITY_UPDATES=true
-
-Preparing the home to patch...
-Applying the patch /home/oracle/stage/ru/35042068...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/ojvm/35050341...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/dpbp/35261302...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/mrp/35333937/34340632...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/mrp/35333937/35012562...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/mrp/35333937/35037877...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/mrp/35333937/35116995...
-Successfully applied the patch.
-Applying the patch /home/oracle/stage/mrp/35333937/35225526...
-Successfully applied the patch.
-The log can be found at: /u01/app/oraInventory/logs/InstallActions2023-06-29_12-40-26PM/installerPatchActions_2023-06-29_12-40-26PM.log
-Launching Oracle Database Setup Wizard...
-
-The response file for this session can be found at:
- /u01/app/oracle/product/1919/install/response/db_2023-06-29_12-40-26PM.rsp
-
-You can find the log of this install session at:
- /u01/app/oraInventory/logs/InstallActions2023-06-29_12-40-26PM/installActions2023-06-29_12-40-26PM.log
-
-As a root user, execute the following script(s):
- 1. /u01/app/oracle/product/1919/root.sh
-
-Execute /u01/app/oracle/product/1919/root.sh on the following nodes:
-[hol]
-
-
-Successfully Setup Software.
-[CDB2] oracle@hol:/u01/app/oracle/product/1919
-$
- ```
+```
+
+
2. Option - use runInstaller (only execute runInstaller if you didn't execute the shell script)
- ```
-
- ./runInstaller -applyRU /home/oracle/stage/ru/35042068 \
- -applyOneOffs /home/oracle/stage/ojvm/35050341,/home/oracle/stage/dpbp/35261302,/home/oracle/stage/mrp/35333937/34340632,/home/oracle/stage/mrp/35333937/35012562,/home/oracle/stage/mrp/35333937/35037877,/home/oracle/stage/mrp/35333937/35116995,/home/oracle/stage/mrp/35333937/35225526 \
- -silent -ignorePrereqFailure -waitforcompletion \
- oracle.install.option=INSTALL_DB_SWONLY \
- UNIX_GROUP_NAME=oinstall \
- INVENTORY_LOCATION=/u01/app/oraInventory \
- ORACLE_HOME=/u01/app/oracle/product/1919 \
- ORACLE_BASE=/u01/app/oracle \
- oracle.install.db.InstallEdition=EE \
- oracle.install.db.OSDBA_GROUP=dba \
- oracle.install.db.OSOPER_GROUP=dba \
- oracle.install.db.OSBACKUPDBA_GROUP=dba \
- oracle.install.db.OSDGDBA_GROUP=dba \
- oracle.install.db.OSKMDBA_GROUP=dba \
- oracle.install.db.OSRACDBA_GROUP=dba \
- SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
- DECLINE_SECURITY_UPDATES=true
-
- ```
- ![runInstaller output ](./images/run-installer-output.png " ")
+```
+
+./runInstaller -applyRU /home/oracle/stage/ru/35042068 \
+-applyOneOffs /home/oracle/stage/ojvm/35050341,/home/oracle/stage/dpbp/35261302,/home/oracle/stage/mrp/35333937/34340632,/home/oracle/stage/mrp/35333937/35012562,/home/oracle/stage/mrp/35333937/35037877,/home/oracle/stage/mrp/35333937/35116995,/home/oracle/stage/mrp/35333937/35225526 \
+ -silent -ignorePrereqFailure -waitforcompletion \
+ oracle.install.option=INSTALL_DB_SWONLY \
+ UNIX_GROUP_NAME=oinstall \
+ INVENTORY_LOCATION=/u01/app/oraInventory \
+ ORACLE_HOME=/u01/app/oracle/product/1919 \
+ ORACLE_BASE=/u01/app/oracle \
+ oracle.install.db.InstallEdition=EE \
+ oracle.install.db.OSDBA_GROUP=dba \
+ oracle.install.db.OSOPER_GROUP=dba \
+ oracle.install.db.OSBACKUPDBA_GROUP=dba \
+ oracle.install.db.OSDGDBA_GROUP=dba \
+ oracle.install.db.OSKMDBA_GROUP=dba \
+ oracle.install.db.OSRACDBA_GROUP=dba \
+ SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
+ DECLINE_SECURITY_UPDATES=true
+
+```
+![runInstaller output ](./images/run-installer-output.png " ")
Installing the patches takes about ten minutes. While the patch install is ongoing, *proceed to the next lab*. You will come back to this session at the end of the following lab.
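While the install runs, you can optionally watch its progress from a second terminal. A minimal sketch, assuming the timestamped log path printed by the installer output above (yours will carry a different timestamp):

```shell
# Peek at the installer log; the path below is the example from the output
# above and must be adjusted to your own session's timestamp.
LOG=/u01/app/oraInventory/logs/InstallActions2023-06-29_12-40-26PM/installActions2023-06-29_12-40-26PM.log
if [ -f "$LOG" ]; then
  tail -n 20 "$LOG"                  # last lines of the install log
else
  echo "log not found: $LOG"         # expected outside the lab VM
fi
```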
diff --git a/patch-me-if-you-can/02-db-19-18-checks/02-db-19-18-checks.md b/patch-me-if-you-can/02-db-19-18-checks/02-db-19-18-checks.md
index 9428b7fd7..f237a8cfb 100644
--- a/patch-me-if-you-can/02-db-19-18-checks/02-db-19-18-checks.md
+++ b/patch-me-if-you-can/02-db-19-18-checks/02-db-19-18-checks.md
@@ -304,7 +304,9 @@ $
Normally, you would now execute the root.sh script. In the lab environment, switching to root is forbidden, so you're not going to execute the next steps; instead, we post the output you would get:
```
- su root
+
+ su root
+
```
![su root](./images/sudo-root.png " ")
@@ -321,8 +323,10 @@ $
```
+
/u01/app/oracle/product/1919/root.sh
- ```
+
+ ```
![executing root.sh](./images/root-sh.png " ")
@@ -334,7 +338,9 @@ $
```
+
exit
+
```
![exiting root](./images/exit-root.png " ")
@@ -345,7 +351,9 @@ $
Confirm that you are `oracle` again:
```
+
whoami
+
```
![after logon](./images/whoami-oracle.png " ")
diff --git a/patch-me-if-you-can/05-final-checks/05-final-checks.md b/patch-me-if-you-can/05-final-checks/05-final-checks.md
index 75ad90a05..ccbb7031c 100644
--- a/patch-me-if-you-can/05-final-checks/05-final-checks.md
+++ b/patch-me-if-you-can/05-final-checks/05-final-checks.md
@@ -64,7 +64,63 @@ Execute in 19.18 and 19.19 tab:
COMMENT: Even though the upgraded 19.18 database isn't a CDB/PDB, you can use the same statement in both environments.
-## Task 2: Check Time Zone Version
+## Task 2: Check Database Directories
+Check the database directory settings in the 19.18 and 19.19 databases.
+
+```
+
+ set line 200
+ set pages 999
+ col owner format a10
+ col directory_name format a25
+ col directory_path format a50
+ select owner, directory_name , directory_path from dba_directories;
+
+
+ Hit ENTER/RETURN to execute ALL commands.
+```
+
+| 19.18.0 Home | 19.19.0 Home |
+| :------------: | :------------: |
+| ![check db directories in 18](./images/db-directories-18.png " ") | ![check db directories in 19](./images/db-directories-19.png " ") |
+{: title="19.18 and 19.19 Database Directories "}
+
+A few directories (for example **SDO\_DIR\_ADMIN, DBMS\_OPTIM\_LOGDIR...**) in the 19.19 database home do not match; they still refer to the old ORACLE_HOME directory "/u01/app/oracle/product/19/". This can be fixed by calling "**utlfixdirs.sql**":
+
+```
+
+ @$ORACLE_HOME/rdbms/admin/utlfixdirs.sql;
+
+
+ Hit ENTER/RETURN to execute ALL commands.
+```
+
+| 19.18.0 Home | 19.19.0 Home |
+| :------------: | :------------: |
+| n/a | ![run utlfixdirs.sql](./images/utlfixdirs-19.png " ") |
+{: title="Fixing Database Directories in 19.19"}
+
+```
+
+ set line 200
+ set pages 999
+ col owner format a10
+ col directory_name format a25
+ col directory_path format a50
+ select owner, directory_name , directory_path from dba_directories;
+
+
+ Hit ENTER/RETURN to execute ALL commands.
+```
+
+| 19.18.0 Home | 19.19.0 Home |
+| :------------: | :------------: |
+| ![check db directories](./images/db-directories-18.png " ") | ![check for invalid objects](./images/db-directories-fixed-19.png " ") |
+{: title="19.18 and 19.19 Database Directories "}
+
+Now they match.
+
+## Task 3: Check Time Zone Version
1. Latest available Time Zone Version
```
@@ -112,7 +168,7 @@ Execute in 19.18 and 19.19 tab:
-## Task 3: Check JDK version
+## Task 4: Check JDK version
Please check whether the Release Update also included an update for JDK.
```
@@ -133,7 +189,7 @@ This is intended. You will always get the n-1 version of JDK, i.e., the version
-## Task 4: Check PERL version
+## Task 5: Check PERL version
Please check whether the Release Update also included an update for PERL. The version before patching was v5.36.0.
```
@@ -152,7 +208,7 @@ Please check whether the Release Update also included an update for PERL. The ve
Now you see no difference. PERL updates have been delivered with Release Updates since January 2023; hence, in this case, there was no update for 19.19.0.
-## Task 5: Opatch Checks
+## Task 6: Opatch Checks
1. lspatches
```
@@ -185,7 +241,7 @@ Now you see no difference. But PERL updates get delivered with Release Updates s
-## Task 6: You are done!
+## Task 7: You are done!
Congratulations from the entire Oracle Database Upgrade, Migration and Patching team. You completed the Hands-On Lab "Patch me if you can" successfully. Next time, we'll approach the Grid Infrastructure patching together.
diff --git a/patch-me-if-you-can/05-final-checks/images/db-directories-18.png b/patch-me-if-you-can/05-final-checks/images/db-directories-18.png
new file mode 100644
index 000000000..37ad5882b
Binary files /dev/null and b/patch-me-if-you-can/05-final-checks/images/db-directories-18.png differ
diff --git a/patch-me-if-you-can/05-final-checks/images/db-directories-19.png b/patch-me-if-you-can/05-final-checks/images/db-directories-19.png
new file mode 100644
index 000000000..67e37523f
Binary files /dev/null and b/patch-me-if-you-can/05-final-checks/images/db-directories-19.png differ
diff --git a/patch-me-if-you-can/05-final-checks/images/db-directories-fixed-19.png b/patch-me-if-you-can/05-final-checks/images/db-directories-fixed-19.png
new file mode 100644
index 000000000..e37d7a398
Binary files /dev/null and b/patch-me-if-you-can/05-final-checks/images/db-directories-fixed-19.png differ
diff --git a/patch-me-if-you-can/05-final-checks/images/utlfixdirs-19.png b/patch-me-if-you-can/05-final-checks/images/utlfixdirs-19.png
new file mode 100644
index 000000000..9a5189959
Binary files /dev/null and b/patch-me-if-you-can/05-final-checks/images/utlfixdirs-19.png differ
diff --git a/patch-me-if-you-can/workshops/freetier/manifest.json b/patch-me-if-you-can/workshops/freetier/manifest.json
index e1b887ea9..391bf57b7 100644
--- a/patch-me-if-you-can/workshops/freetier/manifest.json
+++ b/patch-me-if-you-can/workshops/freetier/manifest.json
@@ -13,6 +13,18 @@
"description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
"filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
},
+ {
+ "title": "Prepare Setup",
+ "description": "How to download your ORM stack and update security rules for an existing VCN",
+ "publisheddate": "09/28/2020",
+ "filename": "../../00-prepare-setup/00-prepare-setup.md"
+ },
+ {
+ "title": "Environment Setup",
+ "description": "How to provision the workshop environment and connect to it",
+ "publisheddate": "06/30/2020",
+ "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc.md"
+ },
{
"title": "Lab 1: Installation and Patching",
"description": "Install Oracle database into a separate OH and patch it unattended.",
diff --git a/patch-me-if-you-can/workshops/ocw23-freetier/manifest.json b/patch-me-if-you-can/workshops/ocw23-freetier/manifest.json
index 2b0c8e367..4e4700947 100644
--- a/patch-me-if-you-can/workshops/ocw23-freetier/manifest.json
+++ b/patch-me-if-you-can/workshops/ocw23-freetier/manifest.json
@@ -24,8 +24,8 @@
"filename": "../../02-db-19-18-checks/02-db-19-18-checks.md"
},
{
- "title": "Lab 3: AutoUpgrade for patching",
- "description": "Use AutoUpgrade for patching",
+ "title": "Lab 3: AutoUpgrade for Patching",
+ "description": "Use AutoUpgrade for Patching",
"filename": "../../03-autoupgrade-4-patching/03-autoupgrade-4-patching.md"
},
{
diff --git a/patch-me-if-you-can/workshops/ocw23-livelabs/manifest.json b/patch-me-if-you-can/workshops/ocw23-livelabs/manifest.json
index fbb8f5c3a..ad0833d22 100644
--- a/patch-me-if-you-can/workshops/ocw23-livelabs/manifest.json
+++ b/patch-me-if-you-can/workshops/ocw23-livelabs/manifest.json
@@ -24,8 +24,8 @@
"filename": "../../02-db-19-18-checks/02-db-19-18-checks.md"
},
{
- "title": "Lab 3: AutoUpgrade for patching",
- "description": "Use AutoUpgrade for patching",
+ "title": "Lab 3: AutoUpgrade for Patching",
+ "description": "Use AutoUpgrade for Patching",
"filename": "../../03-autoupgrade-4-patching/03-autoupgrade-4-patching.md"
},
{
diff --git a/sharding/compact/eshop/eshop.md b/sharding/compact/eshop/eshop.md
index adea363b6..d4fdc7bef 100644
--- a/sharding/compact/eshop/eshop.md
+++ b/sharding/compact/eshop/eshop.md
@@ -257,5 +257,5 @@ If you selected the **Green Button** for this workshop and still have an active
## Acknowledgements
* **Authors** - Shailesh Dwivedi, Database Sharding PM , Vice President
-* **Contributors** - Balasubramanian Ramamoorthy , Alex Kovuru, Nishant Kaushik, Ashish Kumar, Priya Dhuriya, Richard Delval, Param Saini,Jyoti Verma, Virginia Beecher, Rodrigo Fuentes
+* **Contributors** - Balasubramanian Ramamoorthy , Alex Kovuru, Nishant Kaushik, Ashish Kumar, Priya Dhuriya, Richard Delval, Param Saini,Jyoti Verma, Virginia Beecher, Rodrigo Fuentes, Divit Gupta
* **Last Updated By/Date** - Priya Dhuriya, Staff Solution Engineer - July 2021
diff --git a/sharding/compact/intro/intro.md b/sharding/compact/intro/intro.md
index a7d7ef991..a535d8e3d 100644
--- a/sharding/compact/intro/intro.md
+++ b/sharding/compact/intro/intro.md
@@ -39,5 +39,5 @@ You may now proceed to the next lab.
## Acknowledgements
* **Authors** - Shailesh Dwivedi, Database Sharding PM , Vice President
-* **Contributors** - Balasubramanian Ramamoorthy, Alex Kovuru, Nishant Kaushik, Ashish Kumar, Priya Dhuriya, Richard Delval, Param Saini,Jyoti Verma, Virginia Beecher, Rodrigo Fuentes
+* **Contributors** - Balasubramanian Ramamoorthy, Alex Kovuru, Nishant Kaushik, Ashish Kumar, Priya Dhuriya, Richard Delval, Param Saini,Jyoti Verma, Virginia Beecher, Rodrigo Fuentes, Divit Gupta
* **Last Updated By/Date** - Priya Dhuriya, Staff Solution Engineer - July 2021
diff --git a/sharding/uds19c/cleanup/images/stack.png b/sharding/uds19c/cleanup/images/stack.png
new file mode 100644
index 000000000..ca572f448
Binary files /dev/null and b/sharding/uds19c/cleanup/images/stack.png differ
diff --git a/sharding/uds19c/cleanup/uds19c-cleanup.md b/sharding/uds19c/cleanup/uds19c-cleanup.md
new file mode 100644
index 000000000..5625a62f6
--- /dev/null
+++ b/sharding/uds19c/cleanup/uds19c-cleanup.md
@@ -0,0 +1,56 @@
+# Clean-up ORM Stack and Instances
+
+## Introduction
+
+You can permanently delete (terminate) instances that you no longer need by running a destroy job on the Resource Manager stack that you created in the Environment Setup lab. The destroy job tears down the resources/instances and cleans up your tenancy.
+
+We recommend running a destroy job before deleting a stack. When you delete a stack, its associated state file is also deleted, so you lose track of the state of its associated resources; cleaning up resources associated with a deleted stack can be difficult without the state file, especially when those resources are spread across multiple compartments. To avoid a difficult cleanup later, release the associated resources first by running a destroy job.
+Data cannot be recovered from destroyed resources.
+
+This lab walks you through the steps to run a destroy job.
+
+Estimated Time - 5 minutes
+
+### Objectives
+
+- Terminate and tear down all resources/instances used in the Oracle Sharding Lab.
+
+### Prerequisites
+
+- You should have provisioned the **Achieving Data Sovereignty with Oracle Sharding** workshop using a Docker container
+- To provision this workshop, there are detailed instructions in Lab 1 of [Achieving Data Sovereignty with Oracle Sharding](https://apexapps.oracle.com/pls/apex/r/dbpm/livelabs/view-workshop?wid=866) workshop.
+
+## Task 1: Terminate the provisioned instances with a destroy job.
+
+1. Login to Oracle Cloud
+
+2. Open the navigation menu and click **Developer Services**. Under **Resource Manager**, click **Stacks**.
+ ![stack](images/stack.png " ")
+
+3. Choose the compartment that you chose in Lab 1 to install your stack (on the left side of the page).
+
+4. Click the name of the stack that you created in Lab 1. The Stack details page opens.
+
+5. Click **Destroy**.
+
+6. In the Destroy panel that is presented, fill in the **Name** field with a name for the destroy job.
+
+7. Click **Destroy**.
+
+8. The destroy job is created. The new job is listed under **Jobs**. Your instance and all resources used by it will begin to terminate.
+
+9. After a few minutes, once the instance is terminated, the Lifecycle state will change from Terminating to Terminated.
+
+ You have successfully cleaned up your instance.
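The console steps above can also be scripted with the OCI CLI. A hedged sketch, assuming the CLI is installed and configured; the stack OCID below is a placeholder you must replace with your own, and the command is only echoed here as a preview:

```shell
# Preview of the equivalent OCI CLI call; the OCID below is a placeholder.
# Remove the leading echo to actually submit the destroy job.
STACK_OCID="ocid1.ormstack.oc1..exampleuniqueID"
echo oci resource-manager job create-destroy-job \
     --stack-id "$STACK_OCID" \
     --execution-plan-strategy AUTO_APPROVED
```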
+
+## Learn More
+
+- **Oracle Sharding - User-Defined Method**
+[Oracle Sharding - User-Defined Method documentation](https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/oracle-sharding-architecture-and-concepts1.html#GUID-37F20817-EFD5-400B-A082-41171C0B6D1C)
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
\ No newline at end of file
diff --git a/sharding/uds19c/initialize-environment/images/uds19c-init-env-docker-containers-status.png b/sharding/uds19c/initialize-environment/images/uds19c-init-env-docker-containers-status.png
new file mode 100644
index 000000000..3a4d90b44
Binary files /dev/null and b/sharding/uds19c/initialize-environment/images/uds19c-init-env-docker-containers-status.png differ
diff --git a/sharding/uds19c/initialize-environment/uds19c-initialize-environment-green-box.md b/sharding/uds19c/initialize-environment/uds19c-initialize-environment-green-box.md
new file mode 100644
index 000000000..0e2f0ad2a
--- /dev/null
+++ b/sharding/uds19c/initialize-environment/uds19c-initialize-environment-green-box.md
@@ -0,0 +1,66 @@
+# Initialize the Environment
+
+## Introduction
+
+In this lab, we will review and start up all components required to successfully run this workshop.
+
+*Estimated Time:* 10 Minutes.
+
+Watch the video for a quick walk through of the Initialize Environment lab.
+
+[Initialize Environment lab](youtube:e3EXx3BMhec)
+
+### Objectives
+- Initialize the workshop environment.
+
+### Prerequisites
+This lab assumes you have requested a LiveLabs instance and have its access details.
+
+## Task 1: Validate that required processes are up and running
+1. With access to your remote desktop session, proceed as indicated below to validate your environment before you start running the subsequent labs. The following processes should be up and running:
+
+ - Oracle Sharding GSM1 Container
+ - Oracle Sharding GSM2 Container
+ - Oracle Sharding Catalog container
+ - Three Oracle shard Database containers
+ - Appclient Container
+
+2. Open a terminal session and proceed as indicated below to validate the services.
+
+ - Oracle Sharding container Details
+
+ ```
+
+ sudo docker ps -a
+
+ ```
+ ![sharding docker](images/uds19c-init-env-docker-containers-status.png " ")
+
+    - If a container is stopped and not in a running state, restart it with the following Docker commands, replacing `<container-name>` with the name shown by `sudo docker ps -a`.
+
+    ```
+
+    sudo docker stop <container-name>
+
+
+    sudo docker start <container-name>
+
+    ```
+ - For multiple containers, run the following commands to restart all of them at once:
+
+ ```
+
+ sudo docker container stop $(sudo docker container list -qa)
+
+
+ sudo docker container start $(sudo docker container list -qa)
+
+ ```
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
\ No newline at end of file
diff --git a/sharding/uds19c/initialize-environment/uds19c-initialize-environment.md b/sharding/uds19c/initialize-environment/uds19c-initialize-environment.md
new file mode 100644
index 000000000..df458473b
--- /dev/null
+++ b/sharding/uds19c/initialize-environment/uds19c-initialize-environment.md
@@ -0,0 +1,70 @@
+# Initialize the Environment
+
+## Introduction
+
+In this lab, we will review and start up all components required to successfully run this workshop.
+
+*Estimated Time:* 10 Minutes.
+
+Watch the video for a quick walk through of the Initialize Environment lab.
+
+[Initialize Environment lab](youtube:e3EXx3BMhec)
+
+### Objectives
+- Initialize the workshop environment.
+
+### Prerequisites
+This lab assumes you have:
+- An Oracle Cloud account
+- You have completed:
+ - Lab: Prepare Setup
+ - Lab: Environment Setup
+
+## Task 1: Validate that required processes are up and running
+1. With access to your remote desktop session, proceed as indicated below to validate your environment before you start running the subsequent labs. The following processes should be up and running:
+
+ - Oracle Sharding GSM1 Container
+ - Oracle Sharding GSM2 Container
+ - Oracle Sharding Catalog container
+ - Three Oracle shard Database containers
+ - Appclient Container
+
+2. Open a terminal session and proceed as indicated below to validate the services.
+
+ - Oracle Sharding container Details
+
+ ```
+
+ sudo docker ps -a
+
+ ```
+ ![sharding docker](images/uds19c-init-env-docker-containers-status.png " ")
+
+    - If a container is stopped and not in a running state, restart it with the following Docker commands, replacing `<container-name>` with the name shown by `sudo docker ps -a`.
+
+    ```
+
+    sudo docker stop <container-name>
+
+
+    sudo docker start <container-name>
+
+    ```
+ - For multiple containers, run the following commands to restart all of them at once:
+
+ ```
+
+ sudo docker container stop $(sudo docker container list -qa)
+
+
+ sudo docker container start $(sudo docker container list -qa)
+
+ ```
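The stop/start commands above can be wrapped in a small helper. This sketch defaults to a dry run (it only prints the docker commands); set DRYRUN to the empty string to execute them, and replace the example container names with the ones listed by `sudo docker ps -a`:

```shell
# Dry-run by default: DRYRUN=echo prints each command instead of running it.
# Set DRYRUN="" to really stop/start the containers.
DRYRUN="${DRYRUN:-echo}"

restart_container() {
  $DRYRUN sudo docker stop "$1"
  $DRYRUN sudo docker start "$1"
}

# Example container names; substitute the ones from `sudo docker ps -a`.
for c in pcatalog appclient; do
  restart_container "$c"
done
```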
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
\ No newline at end of file
diff --git a/sharding/uds19c/prepare-setup/uds19c-prepare-setup.md b/sharding/uds19c/prepare-setup/uds19c-prepare-setup.md
new file mode 100644
index 000000000..cbd3f911d
--- /dev/null
+++ b/sharding/uds19c/prepare-setup/uds19c-prepare-setup.md
@@ -0,0 +1,58 @@
+# Prepare Setup
+
+## Introduction
+This lab shows you how to download the Oracle Resource Manager (ORM) stack zip file needed to set up the resources required to run this workshop. This workshop requires a compute instance running the Oracle Database Sharding Marketplace image and a Virtual Cloud Network (VCN).
+
+*Estimated Time:* 10 minutes
+
+Watch the video for a quick walk through of the Prepare Setup lab.
+
+[Prepare Lab Setup](youtube:DTIGmlj7Y3I)
+
+### Objectives
+- Download ORM stack
+- Configure an existing Virtual Cloud Network (VCN)
+
+### Prerequisites
+This lab assumes you have:
+- An Oracle Cloud account
+
+## Task 1: Download Oracle Resource Manager (ORM) stack zip file
+
+1. Click on the link below to download the Resource Manager zip file you need to build your environment: [ll-orm-mkplc-freetier-uds-19c-ds-image-marketplace-publish.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/ll-orm-mkplc-freetier-uds-19c-ds-image-marketplace-publish.zip)
+
+2. Save in your downloads folder.
+
+We strongly recommend using this stack to create a self-contained/dedicated VCN with your instance(s). Skip to *Task 3* to follow our recommendation. If you would rather use an existing VCN, then proceed to the next task to update it with the required ingress rules.
+
+## Task 2: Adding Security Rules to an Existing VCN
+This workshop requires a certain number of ports to be available, a requirement that is met by the default ORM stack execution, which creates a dedicated VCN. To use an existing VCN, add the following ports to its ingress rules:
+
+Table 1: Learn how to Achieve Data Sovereignty with Oracle Sharding
+ToDo Add Table
+1. Go to *Networking >> Virtual Cloud Networks*
+2. Choose your network
+3. Under Resources, select Security Lists
+4. Click on Default Security Lists under the Create Security List button
+5. Click Add Ingress Rule button
+6. Enter the following:
+ - Source CIDR: 0.0.0.0/0
+ - Destination Port Range: *Refer to above table*
+7. Click the Add Ingress Rules button
+
+## Task 3: Setup Compute
+Using the details from the two tasks above, proceed to the lab *Environment Setup* to set up your workshop environment using Oracle Resource Manager (ORM) and one of the following options:
+- Create Stack: *Compute + Networking*
+- Create Stack: *Compute only* with an existing VCN where security lists have been updated as per *Task 2* above
+
+Please note for Data Sovereignty with Oracle Sharding Lab:
+- Recommended memory: 48G
+- Recommended CPU: 6 OCPU
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
diff --git a/sharding/uds19c/uds19c-ddl-dml/images/uds19c-connect-catalog-docker-image.png b/sharding/uds19c/uds19c-ddl-dml/images/uds19c-connect-catalog-docker-image.png
new file mode 100644
index 000000000..7ab96bcd1
Binary files /dev/null and b/sharding/uds19c/uds19c-ddl-dml/images/uds19c-connect-catalog-docker-image.png differ
diff --git a/sharding/uds19c/uds19c-ddl-dml/images/uds19c-init-env-docker-containers-status.png b/sharding/uds19c/uds19c-ddl-dml/images/uds19c-init-env-docker-containers-status.png
new file mode 100644
index 000000000..3a4d90b44
Binary files /dev/null and b/sharding/uds19c/uds19c-ddl-dml/images/uds19c-init-env-docker-containers-status.png differ
diff --git a/sharding/uds19c/uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md b/sharding/uds19c/uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md
new file mode 100644
index 000000000..7cccc78cc
--- /dev/null
+++ b/sharding/uds19c/uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md
@@ -0,0 +1,203 @@
+# Sample User-Defined Sharded Schema
+
+## Introduction
+
+The sharded database schema can be created once you have finished configuring the user-defined sharding environment and the GDSCTL VALIDATE command completes without errors. In this lab, all DDL steps are for informational purposes only and have already been executed in the lab environment, so you can query the sharded database and verify how data sovereignty is achieved with Oracle's user-defined sharding method in distributed databases.
+
+*Estimated Time*: 30 minutes
+
+### Objectives
+
+In this lab, you will:
+
+* Learn how to create a sharded database schema in the user-defined sharding environment, create sharded and duplicated tables, and run a few DML statements.
+* Test the use cases
+
+### Prerequisites
+
+This lab assumes you have:
+
+* An Oracle Cloud account
+* You have completed:
+ * Lab: Prepare Setup
+ * Lab: Environment Setup
+ * Lab: Initialize Environment
+ * Lab: Explore User-Defined Sharding Topology
+
+## Task 1: Check the containers in your VM and connect to the catalog database.
+
+1. Open a terminal window and execute the following as the **opc** user.
+
+ ```
+
+ sudo docker ps -a
+
+ ```
+
+    ![docker containers status](images/uds19c-init-env-docker-containers-status.png " ")
+
+2. Connect to the catalog container (pcatalog).
+
+ ```
+
+ sudo docker exec -it pcatalog /bin/bash
+
+ ```
+
+    ![connect to catalog container](images/uds19c-connect-catalog-docker-image.png " ")
+
+## Task 2: Connect as SYSDBA user to create a sharded database schema user
+
+1. Create a sharded database schema user.
+
+ ```
+ sqlplus / as sysdba
+
+ show pdbs
+ alter session set container=PCAT1PDB;
+ alter session enable shard ddl;
+
+    -- If the sharded user (transactions) already exists, drop it before re-creating it
+ -- drop user transactions cascade;
+ CREATE USER transactions IDENTIFIED BY WElcomeHome123##;
+ ```
+
+2. Grant roles to the user.
+
+ ```
+ GRANT CONNECT, RESOURCE, alter session TO transactions;
+ GRANT SELECT_CATALOG_ROLE TO transactions;
+ GRANT UNLIMITED TABLESPACE TO transactions;
+ GRANT CREATE DATABASE LINK TO transactions;
+ GRANT EXECUTE ON DBMS_CRYPTO TO transactions;
+ GRANT CREATE MATERIALIZED VIEW TO transactions;
+ ```
+
+3. Create tablespaces for shard1 and shard2 in the respective shardspaces for each shard. Also create a tablespace for duplicated tables.
+
+ ```
+ CREATE TABLESPACE tbs_shardspace1 IN SHARDSPACE shardspace1;
+ CREATE TABLESPACE tbs_shardspace2 IN SHARDSPACE shardspace2;
+ CREATE TABLESPACE tbs_dup;
+ ```
+
+4. Connect as the schema user to create sharded tables and a duplicated table, and populate them with data.
+
+ ```
+ sqlplus transactions/WElcomeHome123##@PCAT1PDB;
+ ```
+
+5. If the sharded tables (payments and accounts) already exist, drop them before you re-create the tables.
+
+ ```
+ drop table payments cascade constraints;
+ drop table accounts cascade constraints;
+ ```
+
+6. Create the root (parent) sharded table in the user-defined sharding table family for Data Sovereignty.
+
+ ```
+ CREATE SHARDED TABLE accounts
+ (
+ country_cd VARCHAR2(10) NOT NULL
+ ,account_id NUMBER(38,0) NOT NULL
+ ,user_id NUMBER(38,0) NOT NULL
+ ,balance NUMBER NOT NULL
+ ,last_modified_utc TIMESTAMP NOT NULL
+ )
+ PARTITION BY LIST (country_cd)
+ (
+ PARTITION p_shard1 VALUES
+ ('USA','CAN','BRA','MEX') TABLESPACE tbs_shardspace1
+ ,PARTITION p_shard2 VALUES
+ ('IND','DEU','FRA','CHN','AUS','ZAF','JPN') TABLESPACE tbs_shardspace2
+ );
+ ```
+
+7. Create a unique index explicitly while adding the primary key in the parent table, Accounts.
+
+ ```
+
+ create unique index accounts_pk_idx ON accounts (account_id, country_cd) local;
+ alter table transactions.accounts add constraint accounts_pk primary key (account_id, country_cd) using index accounts_pk_idx;
+ ```
+
+8. Create a child sharded table in the user-defined sharding table family for Data Sovereignty.
+
+ ```
+ CREATE SHARDED TABLE payments
+ (
+ country_cd VARCHAR2(10) NOT NULL
+ ,account_id NUMBER(38,0) NOT NULL
+ ,payment_id NUMBER(38,0) NOT NULL
+ ,amount NUMBER(28,3) NOT NULL
+ ,payment_type VARCHAR2(10) NOT NULL
+ ,created_utc TIMESTAMP NOT NULL
+ )
+ PARENT accounts
+ PARTITION BY LIST (country_cd)
+ (
+ PARTITION p_shard1 VALUES
+ ('USA','CAN','BRA','MEX') TABLESPACE tbs_shardspace1
+ ,PARTITION p_shard2 VALUES
+ ('IND','DEU','FRA','CHN','AUS','ZAF','JPN') TABLESPACE tbs_shardspace2
+ );
+ ```
+
+9. Create a unique index explicitly while adding the primary key in the child table, Payments.
+
+ ```
+ create unique index payments_pk_idx ON transactions.payments (payment_id, account_id, country_cd) local;
+ alter table transactions.payments add constraint payments_pk primary key (payment_id, account_id, country_cd) using index payments_pk_idx;
+ ```
+
+10. Add a foreign key in the Payments table that references the Accounts table.
+
+ ```
+ alter table transactions.payments add constraint payments_fk foreign key (account_id, country_cd) references accounts(account_id, country_cd);
+ ```
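To sanity-check the DDLs above, you can confirm the partition-to-tablespace mapping from the data dictionary. A reference query (run as the transactions user; output varies by environment):

```

    select table_name, partition_name, tablespace_name
      from user_tab_partitions
     where table_name in ('ACCOUNTS','PAYMENTS')
     order by table_name, partition_name;

```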
+
+## Task 3: Insert data into the parent sharded table, Accounts.
+1. From the same database connection (already connected as the transactions user from the shard catalog database), insert a few sample records for each sharding key (country_cd) defined in the CREATE DDL statement for the Accounts table. The data is already inserted; the following DML statements are for reference.
+
+ ```
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('USA',1,1,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('CAN',2,2,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('IND',3,3,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('DEU',4,4,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('BRA',5,5,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('CHN',6,6,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('MEX',7,7,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('FRA',8,8,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('AUS',9,9,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('ZAF',10,10,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('USA',11,11,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('IND',12,12,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('JPN',13,13,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('JPN',14,14,10000,sysdate);
+ insert into accounts(COUNTRY_CD, ACCOUNT_ID, USER_ID, BALANCE, LAST_MODIFIED_UTC) values ('JPN',15,15,10000,sysdate);
+ commit;
+ ```
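A quick way to see how the sample rows above map onto the two list partitions is to count accounts per sharding key. A reference query:

```

    select country_cd, count(*) as num_accounts
      from accounts
     group by country_cd
     order by country_cd;

```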
+
+## Task 4: Create a duplicated table.
+1. Duplicated tables are created when your sharded database needs the same data on the shard catalog and all of the shards. Create a duplicated table, Account_type:
+
+ ```
+ create duplicated table account_type(id number(2) primary key, account_type_cd varchar2(10), account_desc varchar2(100)) tablespace tbs_dup;
+ ```
+
+## Task 5: Insert sample data in a duplicated table.
+1. The data is already inserted in the table; the following DML statements are for reference.
+ ```
+ insert into account_type(id, account_type_cd, account_desc) values (1,'checking','Checking Account Type');
+ insert into account_type(id, account_type_cd, account_desc) values (2,'savings','Savings Account Type');
+ commit;
+ ```
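Because account_type is duplicated, it can be queried from the catalog or any shard to verify its content. A reference query:

```

    select id, account_type_cd, account_desc
      from account_type
     order by id;

```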
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
diff --git a/sharding/uds19c/uds19c-intro/images/uds_intro_request_flow.png b/sharding/uds19c/uds19c-intro/images/uds_intro_request_flow.png
new file mode 100644
index 000000000..b6c4fbb99
Binary files /dev/null and b/sharding/uds19c/uds19c-intro/images/uds_intro_request_flow.png differ
diff --git a/sharding/uds19c/uds19c-intro/uds19c-intro.md b/sharding/uds19c/uds19c-intro/uds19c-intro.md
new file mode 100644
index 000000000..f66939c8a
--- /dev/null
+++ b/sharding/uds19c/uds19c-intro/uds19c-intro.md
@@ -0,0 +1,47 @@
+# Introduction
+
+## About the Oracle Globally Distributed Database 19c Solution for Data Sovereignty
+
+Data sovereignty generally refers to how data is governed by regulations specific to the region in which it originated. These types of regulations can specify where data is stored, how it is accessed, how it is processed, and the life-cycle of the data.
+
+**Data Sovereignty**: Understanding Its Significance
+
+Data sovereignty is a country- or region-specific requirement that data is subject to the laws of the country or region in which it is collected or processed, and that it must remain within those borders and data centers. Organizations must therefore pay close attention to how they manage their data with data localization.
+Data sovereignty is also referred to by other terms such as data residency, data locality, or data localization, and can be implemented across one or multiple regions based on the specific needs of the organization, while still adhering to the regulations set forth by monitoring authorities within that country or region.
+
+**Oracle Sharding**: A Solution for Globally Distributed Systems
+
+Oracle Sharding distributes segments of a data set across many databases (shards) on different computers, on-premises, or in the cloud. It enables globally distributed, linearly scalable, multi-model databases. It requires no specialized hardware or software.
+Oracle Sharding does all of this while maintaining strong consistency, the full power of SQL, support for structured and unstructured data, and the Oracle Database ecosystem. It meets data sovereignty requirements and it supports applications that require low latency and high availability.
+
+*Estimated Workshop Time:* 2 hours
+
+![Data Sovereignty with Oracle Sharding introduction](images/uds_intro_request_flow.png " ")
+
+### Objectives
+
+In this workshop, you will gain first-hand experience implementing Data Sovereignty use cases with Oracle's user-defined data distribution method, enabling effective Data Localization for robust distributed database solutions.
+
+Once you complete your setup, the next lab will cover:
+
+- Exploring the user-defined sharding method implementation for Data Sovereignty
+- Testing the use cases
+
+We will use Docker containers and demonstrate multiple use cases.
+
+### Prerequisites
+
+- An Oracle Cloud Account - Please view this workshop's LiveLabs landing page to see which environments are supported
+
+You may now **proceed to the next lab**.
+
+## Learn More
+
+- [Achieving Data Sovereignty with Oracle Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/achieving-data-sovereignty-oracle-sharding1.html#GUID-4AA1D64A-F89B-462A-BA4E-F04038665999)
+
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-docker-image.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-docker-image.png
new file mode 100644
index 000000000..7ab96bcd1
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-docker-image.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-duplicated-table-count.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-duplicated-table-count.png
new file mode 100644
index 000000000..c48cec38e
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-duplicated-table-count.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-sharded-table-queries.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-sharded-table-queries.png
new file mode 100644
index 000000000..51cd8e377
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-catalog-sharded-table-queries.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-directRoutingApp.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-directRoutingApp.png
new file mode 100644
index 000000000..93a437082
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-directRoutingApp.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-1.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-1.png
new file mode 100644
index 000000000..f5fc2b087
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-1.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-2.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-2.png
new file mode 100644
index 000000000..8fe6ffaac
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-gsm-service-shard-2.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-docker-image.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-docker-image.png
new file mode 100644
index 000000000..bb7ef87bb
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-docker-image.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-duplicated-table-count.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-duplicated-table-count.png
new file mode 100644
index 000000000..6c2ce07a0
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-duplicated-table-count.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-sharded-table-queries.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-sharded-table-queries.png
new file mode 100644
index 000000000..4fa53b4b2
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard1-sharded-table-queries.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-docker-image.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-docker-image.png
new file mode 100644
index 000000000..651a7e333
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-docker-image.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-duplicated-table-count.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-duplicated-table-count.png
new file mode 100644
index 000000000..764de0042
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-duplicated-table-count.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-sharded-table-queries.png b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-sharded-table-queries.png
new file mode 100644
index 000000000..91e8ec2fe
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-connect-shard2-sharded-table-queries.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-init-env-docker-containers-status.png b/sharding/uds19c/uds19c-queries/images/uds19c-init-env-docker-containers-status.png
new file mode 100644
index 000000000..3a4d90b44
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-init-env-docker-containers-status.png differ
diff --git a/sharding/uds19c/uds19c-queries/images/uds19c-query-gds-catalog-local-service.png b/sharding/uds19c/uds19c-queries/images/uds19c-query-gds-catalog-local-service.png
new file mode 100644
index 000000000..01b062712
Binary files /dev/null and b/sharding/uds19c/uds19c-queries/images/uds19c-query-gds-catalog-local-service.png differ
diff --git a/sharding/uds19c/uds19c-queries/uds19c-queries.md b/sharding/uds19c/uds19c-queries/uds19c-queries.md
new file mode 100644
index 000000000..4ee9ba802
--- /dev/null
+++ b/sharding/uds19c/uds19c-queries/uds19c-queries.md
@@ -0,0 +1,204 @@
+# Sample Queries to Validate for Sharded Databases
+
+## Introduction
+
+When the user-defined sharded database schema is ready, you can query the sharded tables on the shard catalog and on each shard to validate the data.
+
+*Estimated Time*: 30 minutes
+
+### Objectives
+
+In this lab, you will:
+
+* Learn how to validate a sharded database schema (already created) in a user-defined sharding environment, query sharded tables, and query duplicated tables.
+* Test the use cases
+
+### Prerequisites
+
+This lab assumes you have:
+
+* An Oracle Cloud account
+* You have completed:
+ * Lab: Prepare Setup
+ * Lab: Environment Setup
+ * Lab: Initialize Environment
+ * Lab: Explore User-Defined Sharding Topology
+ * Lab: Sample User-Defined Sharding Schema and Data insertion
+
+
+## Task 1: Connect as the sharded database schema user to query sharded tables.
+
+1. Check for containers in your VM. To do this, open a terminal window and execute the following as the **opc** user.
+
+ ```
+
+ sudo docker ps -a
+
+ ```
+
+ ![](images/uds19c-init-env-docker-containers-status.png " ")
+
+
+2. The user-defined sharded database schema and tables are created, and data is inserted for this lab. Connect to the Shard1, Shard2, and Catalog Databases and compare query results from sharded table Accounts on each database.
+ ```
+
+ -- Run a query to count accounts for all countries in Shard1 DB, Shard2 DB, Catalog DB and compare results.
+
+ select count(account_id) from accounts;
+
+ --Run a query to count accounts group by country in Shard1 DB, Shard2 DB, Catalog DB and compare results.
+
+ select COUNTRY_CD, count(account_id) from accounts group by COUNTRY_CD order by COUNTRY_CD;
+
+ ```
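
The comparison these queries perform can be sanity-checked with a small Python sketch. The per-country counts below are hypothetical, chosen only to match this lab's totals (5 accounts across 4 countries on Shard1, 10 accounts across 7 countries on Shard2):

```python
# Hypothetical per-country account counts, consistent with this lab's totals.
shard1 = {"USA": 2, "CAN": 1, "BRA": 1, "MEX": 1}
shard2 = {"CHN": 1, "FRA": 1, "AUS": 1, "ZAF": 1, "IND": 2, "JPN": 3, "KOR": 1}

# The catalog's cross-shard query aggregates over all shards; with
# user-defined sharding each country lives on exactly one shard.
catalog = {**shard1, **shard2}

assert sum(catalog.values()) == sum(shard1.values()) + sum(shard2.values())
assert len(catalog) == len(shard1) + len(shard2)
```

If the catalog totals ever differed from the sum of the shard totals, it would indicate a data-placement or replication problem, which is exactly what the SQL comparison above is meant to catch.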
+
+3. Connect to Shard1 and run queries on the sharded table Accounts. A total of 5 accounts across 4 countries are in Shard1.
+
+ ```
+
+ sudo docker exec -it shard1 /bin/bash
+
+ ```
+ ![](images/uds19c-connect-shard1-docker-image.png " ")
+
+
+ ```
+
+ sqlplus transactions/WElcomeHome123##@PORCL1PDB;
+ select count(account_id) from accounts;
+ select COUNTRY_CD, count(account_id) from accounts group by COUNTRY_CD order by COUNTRY_CD;
+
+ ```
+
+ ![](images/uds19c-connect-shard1-sharded-table-queries.png " ")
+
+
+4. Connect to Shard2 and run queries on the sharded table Accounts. A total of 10 accounts across 7 countries are in Shard2.
+
+
+ ```
+
+ sudo docker exec -it shard2 /bin/bash
+
+ ```
+ ![](images/uds19c-connect-shard2-docker-image.png " ")
+
+
+ ```
+
+ sqlplus transactions/WElcomeHome123##@PORCL2PDB;
+ select count(account_id) from accounts;
+ select COUNTRY_CD, count(account_id) from accounts group by COUNTRY_CD order by COUNTRY_CD;
+
+ ```
+
+ ![](images/uds19c-connect-shard2-sharded-table-queries.png " ")
+
+
+
+5. Connect to the Catalog and run cross-shard queries on the sharded table Accounts. A total of 15 accounts across 11 countries are in the Catalog, which matches the sums of accounts (5+10=15) and countries (4+7=11) from both shards. This exercise confirms that Oracle Sharding with user-defined sharding lets you implement Data Sovereignty use cases.
+
+ ```
+
+ sudo docker exec -it pcatalog /bin/bash
+
+ ```
+ ![](images/uds19c-connect-catalog-docker-image.png " ")
+
+
+ ```
+
+ sqlplus transactions/WElcomeHome123##@PCAT1PDB;
+ select count(account_id) from accounts;
+ select COUNTRY_CD, count(account_id) from accounts group by COUNTRY_CD order by COUNTRY_CD;
+
+ ```
+
+ ![](images/uds19c-connect-catalog-sharded-table-queries.png " ")
+
+
+## Task 3: Validate a duplicated table query on each shard and on the catalog database.
+
+1. Connect to each shard database and run the same query on the duplicated table; the results are the same on every shard. All DDL and DML operations on a duplicated table are recommended to be performed on the catalog database.
+
+ ```
+
+ select * from account_type;
+
+ ```
+
+2. Connect to Shard1 and run a query on the duplicated table to select its rows.
+
+
+ ![](images/uds19c-connect-shard1-duplicated-table-count.png " ")
+
+
+3. Connect to Shard2 and run a query on the duplicated table to select its rows.
+
+
+ ![](images/uds19c-connect-shard2-duplicated-table-count.png " ")
+
+
+4. Connect to the Catalog and run a query on the duplicated table to select its rows.
+
+
+ ![](images/uds19c-connect-catalog-duplicated-table-count.png " ")
+
+
+## Task 4: Connect to the Catalog DB using the GSM local service GDS$CATALOG.
+
+1. Connect to the Catalog using the GSM service for proxy routing and run a cross-shard query.
+ ```
+
+ sqlplus transactions/WElcomeHome123##@oshard-gsm1.example.com:1522/GDS\$CATALOG.oradbcloud
+
+ ```
+
+ ![](images/uds19c-query-gds-catalog-local-service.png " ")
+
+
+## Task 5: Connect and query using the global services created by the gdsctl add service command.
+
+1. Applications use this kind of connection to provide a sharding key at runtime for the database connection.
+
+ ```
+
+ -- Connect oltp_rw_svc service used with direct-routing by applications: randomly connect to a shard
+ sqlplus transactions/WElcomeHome123##@'(DESCRIPTION=(ADDRESS=(HOST=oshard-gsm1.example.com)(PORT=1522)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=oltp_rw_svc.shardcatalog1.oradbcloud)))'
+
+ ```
+
+ ![](images/uds19c-connect-gsm-service-directRoutingApp.png " ")
+
+
+## Task 6: Connect to Shard1 using the GSM service for direct routing and run a query.
+
+1. Applications use this kind of connection to provide a sharding key that belongs to Shard1, so the database connection is routed to Shard1.
+
+ ```
+
+ -- Connect using the oltp_rw_svc service with direct routing: connects to Shard1 using SHARDING_KEY=USA
+ sqlplus transactions/WElcomeHome123##@'(DESCRIPTION=(ADDRESS=(HOST=oshard-gsm1.example.com)(PORT=1522)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=oltp_rw_svc.shardcatalog1.oradbcloud)(SHARDING_KEY=USA)))'
+
+ ```
+![](images/uds19c-connect-gsm-service-shard-1.png " ")
+
+
+## Task 7: Connect to Shard2 using the GSM service for direct routing and run a query.
+
+1. Applications use this kind of connection to provide a sharding key that belongs to Shard2, so the database connection is routed to Shard2.
+
+ ```
+
+ -- Connect using the oltp_rw_svc service with direct routing: connects to Shard2 using SHARDING_KEY=IND
+ sqlplus transactions/WElcomeHome123##@'(DESCRIPTION=(ADDRESS=(HOST=oshard-gsm1.example.com)(PORT=1522)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=oltp_rw_svc.shardcatalog1.oradbcloud)(SHARDING_KEY=IND)))'
+
+ ```
+![](images/uds19c-connect-gsm-service-shard-2.png " ")
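
The connect descriptors used in Tasks 5 through 7 all follow one pattern; only the optional SHARDING_KEY entry in CONNECT_DATA changes whether the shard director routes randomly or directly. A hypothetical Python helper (`connect_descriptor` is not an Oracle API, just an illustration of the string format) makes the pattern explicit:

```python
def connect_descriptor(host, port, service, sharding_key=None):
    """Build an Oracle Net connect descriptor string.

    Without SHARDING_KEY the GSM picks a shard for the service; with a
    SHARDING_KEY it routes the session directly to the shard that owns
    that key (e.g. USA -> Shard1, IND -> Shard2 in this workshop).
    """
    connect_data = f"(SERVICE_NAME={service})"
    if sharding_key is not None:
        connect_data += f"(SHARDING_KEY={sharding_key})"
    return (f"(DESCRIPTION=(ADDRESS=(HOST={host})(PORT={port})(PROTOCOL=tcp))"
            f"(CONNECT_DATA={connect_data}))")

dsn = connect_descriptor("oshard-gsm1.example.com", 1522,
                         "oltp_rw_svc.shardcatalog1.oradbcloud", "IND")
```

The resulting string is what the quoted argument to sqlplus contains in the tasks above; any client driver that accepts a full connect descriptor could use the same value.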
+
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
\ No newline at end of file
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-docker-gsm1.png b/sharding/uds19c/uds19c-topology/images/uds19c-docker-gsm1.png
new file mode 100644
index 000000000..f469d0200
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-docker-gsm1.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-chunks.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-chunks.png
new file mode 100644
index 000000000..63b9be636
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-chunks.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-service.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-service.png
new file mode 100644
index 000000000..539ad8cc8
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-service.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-shard.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-shard.png
new file mode 100644
index 000000000..56a8d37b4
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-shard.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-table-family.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-table-family.png
new file mode 100644
index 000000000..ff851f1bb
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config-table-family.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config.png
new file mode 100644
index 000000000..6fc14b4f2
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-config.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-by-count.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-by-count.png
new file mode 100644
index 000000000..d718ea150
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-by-count.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-failed_only.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-failed_only.png
new file mode 100644
index 000000000..33ba41f62
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl-failed_only.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl.png
new file mode 100644
index 000000000..55380a5b1
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-show-ddl.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-status-gsm.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-status-gsm.png
new file mode 100644
index 000000000..e7d63b7e7
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-status-gsm.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-validate.png b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-validate.png
new file mode 100644
index 000000000..9cc50bb4c
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-gdsctl-validate.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c-init-env-docker-containers-status.png b/sharding/uds19c/uds19c-topology/images/uds19c-init-env-docker-containers-status.png
new file mode 100644
index 000000000..3a4d90b44
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c-init-env-docker-containers-status.png differ
diff --git a/sharding/uds19c/uds19c-topology/images/uds19c_gdsctl-config_sdb_replication_type_dg.png b/sharding/uds19c/uds19c-topology/images/uds19c_gdsctl-config_sdb_replication_type_dg.png
new file mode 100644
index 000000000..fbddc3758
Binary files /dev/null and b/sharding/uds19c/uds19c-topology/images/uds19c_gdsctl-config_sdb_replication_type_dg.png differ
diff --git a/sharding/uds19c/uds19c-topology/uds19c-topology.md b/sharding/uds19c/uds19c-topology/uds19c-topology.md
new file mode 100644
index 000000000..f6cf28cf8
--- /dev/null
+++ b/sharding/uds19c/uds19c-topology/uds19c-topology.md
@@ -0,0 +1,194 @@
+# Explore Oracle's User-Defined Sharding Method Topology to Achieve Data Sovereignty
+
+## Introduction
+
+User-defined sharding lets you explicitly specify the mapping of data to individual shards. It is used when, because of performance, regulatory, or other reasons, certain data needs to be stored on a particular shard, and the administrator needs to have full control over moving data between shards.
+
+Oracle Sharding is a scalability and availability feature for custom-designed OLTP and OLAP applications that enables the distribution and replication of data across a pool of Oracle databases that do not share hardware or software. The pool of databases is presented to the application as a single logical database.
+
+For a user-defined sharded database, two replication schemes are supported: Oracle Data Guard or Oracle Active Data Guard. Oracle GoldenGate can be used as the incremental replication method. Oracle Data Guard with Oracle GoldenGate enables fast automatic failover with zero data loss.
+
+This workshop is configured with a custom image that has all of the required Docker containers for Oracle Sharding using release 19c GSM and Database Images.
+
+In this workshop, we attempt to use minimal resources to demonstrate the use cases, so you need only a single compute instance to install all of the Oracle Sharding components.
+
+*Estimated Time*: 30 minutes
+
+### Objectives
+
+In this lab, you will:
+
+* Explore user-defined Sharding configuration steps.
+* Test the use cases
+
+### Prerequisites
+
+This lab assumes you have:
+* An Oracle Cloud account
+* You have completed:
+ * Lab: Prepare Setup
+ * Lab: Environment Setup
+ * Lab: Initialize Environment
+
+
+## Task 1: Explore the user-defined sharding
+
+1. Check for containers in your VM. To do this, open a terminal window and execute the following as the **opc** user.
+
+ ```
+
+ sudo docker ps -a
+
+ ```
+
+ ![](images/uds19c-init-env-docker-containers-status.png " ")
+
+2. The user-defined sharding method provides a means to achieve regulatory compliance by enabling user-defined data placement. It allows you to use a range or list of countries to partition data among the shards by letting you explicitly specify the mapping of data to individual shards.
+
+    User-defined sharding definitions:
+    * Partition by list defines lists of sharding key values mapped to specific shards.
+    * Partition by range creates ranges of sharding key values mapped to specific shards.
+
+    In user-defined sharding, a shardspace consists of a shard or a set of fully replicated shards. See Shard-Level High Availability for details about replication with user-defined sharding. For simplicity, assume that each shardspace consists of a single shard.
+
+    For an overview and detailed sections of Oracle Sharding methods, visit Oracle Sharding Methods.
+
+    For more details, check [Configure the Sharded Database Topology] ()
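
The partition-by-list and partition-by-range placement described above can be sketched in Python. This is a hypothetical model, not Oracle code, and the country lists are assumptions for illustration:

```python
# Hypothetical user-defined data placement: BY LIST maps explicit
# sharding-key values to shardspaces; BY RANGE maps key ranges.
LIST_MAP = {  # shardspace -> list of COUNTRY_CD values (illustrative)
    "shardspace1": ["USA", "CAN", "BRA", "MEX"],
    "shardspace2": ["IND", "JPN", "CHN", "FRA", "AUS", "ZAF", "KOR"],
}

def route_by_list(country_cd):
    """Return the shardspace whose value list contains the key."""
    for shardspace, countries in LIST_MAP.items():
        if country_cd in countries:
            return shardspace
    raise KeyError(f"no shardspace defined for {country_cd}")

def route_by_range(key, boundaries):
    """boundaries: sorted list of (upper_bound_exclusive, shardspace)."""
    for upper, shardspace in boundaries:
        if key < upper:
            return shardspace
    raise KeyError(f"key {key} above highest boundary")
```

Because the administrator writes this mapping explicitly, data for a given country can be pinned to a shard hosted in that country's region, which is the mechanism behind the data-sovereignty use cases in this workshop.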
+
+3. Run the following in the terminal as the **oracle** user to connect to the shard director server.
+
+ ```
+
+ sudo docker exec -i -t gsm1 /bin/bash
+
+ ```
+
+ ![](images/uds19c-docker-gsm1.png " ")
+
+4. Verify sharding topology using the **CONFIG** command.
+
+ ```
+
+ gdsctl config shard
+
+ ```
+
+ ![](images/uds19c-gdsctl-config-shard.png " ")
+
+5. Display the sharding topology configuration.
+
+ ```
+
+ gdsctl config
+
+ ```
+
+ ![](images/uds19c-gdsctl-config.png " ")
+
+6. Display the GSM status.
+
+ ```
+
+ gdsctl status gsm
+
+ ```
+
+ ![](images/uds19c-gdsctl-status-gsm.png " ")
+
+7. Display the global services configured.
+
+ ```
+
+ gdsctl config service
+
+ ```
+
+ ![](images/uds19c-gdsctl-config-service.png " ")
+
+8. Display the recent 10 DDLs.
+
+ ```
+
+ gdsctl show ddl
+
+ ```
+
+ ![](images/uds19c-gdsctl-show-ddl.png " ")
+
+9. Display the DDLs by count.
+
+ ```
+
+ gdsctl show ddl -count 20
+
+ ```
+
+ ![](images/uds19c-gdsctl-show-ddl-by-count.png " ")
+
+10. Display the failed DDLs only.
+
+ ```
+
+ gdsctl show ddl -failed_only
+
+ ```
+
+ ![](images/uds19c-gdsctl-show-ddl-failed_only.png " ")
+
+11. List all of the database shards and the chunks that they contain.
+
+ ```
+
+ gdsctl config chunks
+
+ ```
+
+ ![](images/uds19c-gdsctl-config-chunks.png " ")
+
+12. Display the sharded database configuration stored in the GDS catalog.
+
+ ```
+
+ gdsctl config sdb
+
+ ```
+
+ ![](images/uds19c_gdsctl-config_sdb_replication_type_dg.png " ")
+
+
+13. Display the user-defined sharding table family's root table. All sharded tables are children of this root table. Child tables can themselves have child tables, forming a hierarchy.
+
+ ```
+
+ gdsctl config table family
+
+ ```
+
+ ![](images/uds19c-gdsctl-config-table-family.png " ")
+
+14. Display the sharding configuration validation result.
+
+ ```
+
+ gdsctl validate
+
+ ```
+
+ ![](images/uds19c-gdsctl-validate.png " ")
+
+15. Exit from gsm1.
+
+16. Visit Lab 5: Sample User-Defined Sharding Schema, Data insertion and Queries.
+
+### Appendix 1
+
+* [Configure the Oracle Sharded Database Topology] ()
+
+* [Oracle Sharding Overview] ()
+
+* [Oracle Sharding Architecture and Concepts] ()
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+
+* **Authors** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff
+* **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Param Saini, Jyoti Verma
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database Product Management, Consulting Member of Technical Staff, October 2023
\ No newline at end of file
diff --git a/sharding/uds19c/workshops/desktop/index.html b/sharding/uds19c/workshops/desktop/index.html
new file mode 100644
index 000000000..aaac634be
--- /dev/null
+++ b/sharding/uds19c/workshops/desktop/index.html
@@ -0,0 +1,62 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/sharding/uds19c/workshops/desktop/manifest.json b/sharding/uds19c/workshops/desktop/manifest.json
new file mode 100644
index 000000000..3e915dfb1
--- /dev/null
+++ b/sharding/uds19c/workshops/desktop/manifest.json
@@ -0,0 +1,58 @@
+{
+ "workshoptitle": "Learn how to achieve Data Sovereignty with Oracle Globally distributed database 19c",
+ "help": "livelabs-help-db_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "Introduction",
+ "filename": "../../uds19c-intro/uds19c-intro.md"
+ },
+ {
+ "title": "Get Started",
+ "description": "Login to Oracle Cloud",
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+ },
+ {
+ "title": "Lab 1: Prepare Setup",
+ "description": "How to download your ORM stack and update security rules for an existing VCN",
+ "publisheddate": "09/28/2020",
+ "filename": "../../prepare-setup/uds19c-prepare-setup.md"
+ },
+ {
+ "title": "Lab 2: Environment Setup",
+ "description": "How to provision the workshop environment and connect to it",
+ "publisheddate": "06/30/2020",
+ "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc.md"
+ },
+ {
+ "title": "Lab 3: Initialize Environment",
+ "description": "Initialize Environment",
+ "filename": "../../initialize-environment/uds19c-initialize-environment.md"
+ },
+ {
+ "title": "Lab 4: Explore User-Defined Sharding Topology",
+ "description": "Explore User-Defined Sharding Topology",
+ "filename": "../../uds19c-topology/uds19c-topology.md"
+ },
+ {
+ "title": "Lab 5: Sample User-Defined Sharding Schema and Data insertion",
+ "description": "Sample User-Defined Sharding Schema and Data insertion",
+ "filename": "../../uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md"
+ },
+ {
+ "title": "Lab 6: Sample Queries to validate User-Defined Sharding Schema",
+ "description": "Sample Queries to validate User-Defined Sharding Schema",
+ "filename": "../../uds19c-queries/uds19c-queries.md"
+ },
+ {
+ "title": "Lab 7: Clean up Stack and Instances",
+ "description": "Clean up ORM Stack and instances",
+ "filename": "../../cleanup/uds19c-cleanup.md"
+ },
+ {
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/sharding/uds19c/workshops/livelabs/index.html b/sharding/uds19c/workshops/livelabs/index.html
new file mode 100644
index 000000000..aaac634be
--- /dev/null
+++ b/sharding/uds19c/workshops/livelabs/index.html
@@ -0,0 +1,62 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/sharding/uds19c/workshops/livelabs/manifest.json b/sharding/uds19c/workshops/livelabs/manifest.json
new file mode 100644
index 000000000..45c271a09
--- /dev/null
+++ b/sharding/uds19c/workshops/livelabs/manifest.json
@@ -0,0 +1,36 @@
+{
+ "workshoptitle": "Learn how to achieve Data Sovereignty with Oracle Globally distributed database 19c",
+ "help": "livelabs-help-db_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "Introduction",
+ "filename": "../../uds19c-intro/uds19c-intro.md"
+ },
+ {
+ "title": "Lab 1: Verify Environment",
+ "description": "Verify Environment",
+ "filename": "../../initialize-environment/uds19c-initialize-environment-green-box.md"
+ },
+ {
+ "title": "Lab 2: Explore User-Defined Sharding Topology",
+ "description": "Explore User-Defined Sharding Topology",
+ "filename": "../../uds19c-topology/uds19c-topology.md"
+ },
+ {
+ "title": "Lab 3: Sample User-Defined Sharding Schema and Data insertion",
+ "description": "Sample User-Defined Sharding Schema and Data insertion",
+ "filename": "../../uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md"
+ },
+ {
+ "title": "Lab 4: Sample Queries to validate User-Defined Sharding Schema",
+ "description": "Sample Queries to validate User-Defined Sharding Schema",
+ "filename": "../../uds19c-queries/uds19c-queries.md"
+ },
+ {
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/sharding/uds19c/workshops/sandbox/index.html b/sharding/uds19c/workshops/sandbox/index.html
new file mode 100644
index 000000000..aaac634be
--- /dev/null
+++ b/sharding/uds19c/workshops/sandbox/index.html
@@ -0,0 +1,62 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/sharding/uds19c/workshops/sandbox/manifest.json b/sharding/uds19c/workshops/sandbox/manifest.json
new file mode 100644
index 000000000..96bf49085
--- /dev/null
+++ b/sharding/uds19c/workshops/sandbox/manifest.json
@@ -0,0 +1,58 @@
+{
+ "workshoptitle": "Learn how to achieve Data Sovereignty with Oracle Globally distributed database 19c",
+ "help": "livelabs-help-db_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "Introduction",
+ "filename": "../../intro/uds19c-intro.md"
+ },
+ {
+ "title": "Get Started",
+ "description": "Login to Oracle Cloud",
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+ },
+ {
+ "title": "Lab 1: Prepare Setup",
+ "description": "How to download your ORM stack and update security rules for an existing VCN",
+ "publisheddate": "09/28/2020",
+ "filename": "../../prepare-setup/uds19c-prepare-setup.md"
+ },
+ {
+ "title": "Lab 2: Environment Setup",
+ "description": "How to provision the workshop environment and connect to it",
+ "publisheddate": "06/30/2020",
+ "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc.md"
+ },
+ {
+ "title": "Lab 3: Initialize Environment",
+ "description": "Initialize Environment",
+ "filename": "../../initialize-environment/uds19c-initialize-environment.md"
+ },
+ {
+ "title": "Lab 4: Explore User-Defined Sharding Topology",
+ "description": "Explore User-Defined Sharding Topology",
+ "filename": "../../topology/uds19c-topology.md"
+ },
+ {
+ "title": "Lab 5: Sample User-Defined Sharding Schema and Data insertion",
+ "description": "Sample User-Defined Sharding Schema and Data insertion",
+ "filename": "../../uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md"
+ },
+ {
+ "title": "Lab 6: Sample Queries to validate User-Defined Sharding Schema",
+ "description": "Sample Queries to validate User-Defined Sharding Schema",
+ "filename": "../../uds19c-queries/uds19c-queries.md"
+ },
+ {
+ "title": "Lab 7: Clean up Stack and Instances",
+ "description": "Clean up ORM Stack and instances",
+ "filename": "../../cleanup/uds19c-cleanup.md"
+ },
+ {
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/sharding/uds19c/workshops/tenancy/index.html b/sharding/uds19c/workshops/tenancy/index.html
new file mode 100644
index 000000000..aaac634be
--- /dev/null
+++ b/sharding/uds19c/workshops/tenancy/index.html
@@ -0,0 +1,62 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/sharding/uds19c/workshops/tenancy/manifest.json b/sharding/uds19c/workshops/tenancy/manifest.json
new file mode 100644
index 000000000..3e915dfb1
--- /dev/null
+++ b/sharding/uds19c/workshops/tenancy/manifest.json
@@ -0,0 +1,58 @@
+{
+ "workshoptitle": "Learn how to achieve Data Sovereignty with Oracle Globally distributed database 19c",
+ "help": "livelabs-help-db_us@oracle.com",
+ "tutorials": [
+ {
+ "title": "Introduction",
+ "description": "Introduction",
+ "filename": "../../uds19c-intro/uds19c-intro.md"
+ },
+ {
+ "title": "Get Started",
+ "description": "Login to Oracle Cloud",
+ "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+ },
+ {
+ "title": "Lab 1: Prepare Setup",
+ "description": "How to download your ORM stack and update security rules for an existing VCN",
+ "publisheddate": "09/28/2020",
+ "filename": "../../prepare-setup/uds19c-prepare-setup.md"
+ },
+ {
+ "title": "Lab 2: Environment Setup",
+ "description": "How to provision the workshop environment and connect to it",
+ "publisheddate": "06/30/2020",
+ "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc.md"
+ },
+ {
+ "title": "Lab 3: Initialize Environment",
+ "description": "Initialize Environment",
+ "filename": "../../initialize-environment/uds19c-initialize-environment.md"
+ },
+ {
+ "title": "Lab 4: Explore User-Defined Sharding Topology",
+ "description": "Explore User-Defined Sharding Topology",
+ "filename": "../../uds19c-topology/uds19c-topology.md"
+ },
+ {
+ "title": "Lab 5: Sample User-Defined Sharding Schema and Data insertion",
+ "description": "Sample User-Defined Sharding Schema and Data insertion",
+ "filename": "../../uds19c-ddl-dml/uds19c-sharded-table-ddls-dmls.md"
+ },
+ {
+ "title": "Lab 6: Sample Queries to validate User-Defined Sharding Schema",
+ "description": "Sample Queries to validate User-Defined Sharding Schema",
+ "filename": "../../uds19c-queries/uds19c-queries.md"
+ },
+ {
+ "title": "Lab 7: Clean up Stack and Instances",
+ "description": "Clean up ORM Stack and instances",
+ "filename": "../../cleanup/uds19c-cleanup.md"
+ },
+ {
+ "title": "Need Help?",
+ "description": "Solutions to Common Problems and Directions for Receiving Live Help",
+ "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-freetier.md"
+ }
+ ]
+}
\ No newline at end of file
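As an aside, manifests like the two added above are easy to lint before committing. A minimal sketch in Python follows; the key names (`workshoptitle`, `tutorials`, `title`, `description`, `filename`) come from the JSON shown, but the real LiveLabs tooling may enforce additional rules:

```python
import json

# Minimal sanity check for a LiveLabs manifest: every tutorial entry
# needs a title, a description, and a filename. Illustrative only.
def check_manifest(text):
    data = json.loads(text)
    problems = []
    if "workshoptitle" not in data:
        problems.append("missing workshoptitle")
    for i, t in enumerate(data.get("tutorials", [])):
        for key in ("title", "description", "filename"):
            if key not in t:
                problems.append(f"tutorial {i} missing {key}")
    return problems

sample = """{
  "workshoptitle": "Example workshop",
  "tutorials": [
    {"title": "Introduction", "description": "Introduction",
     "filename": "../../intro/intro.md"}
  ]
}"""
print(check_manifest(sample))  # []
```

Running it against a manifest with a missing `description` reports the offending entry index, which is quicker than eyeballing a 58-line JSON file.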
diff --git a/timesten/README.md b/timesten/README.md
index d3a8185fc..8596a0715 100644
--- a/timesten/README.md
+++ b/timesten/README.md
@@ -13,13 +13,13 @@ _TimesTen Classic_
A single node database for applications that require the lowest and most consistent response time. High availability is provided via active-standby pair replication to another node, and also supports multiple read-only subscribers for scaling read heavy workloads.
-TimesTen Classic can also be deployed as a cache for Oracle Database. By caching a subset of your Oracle Database data in a TimesTen cache, you can dramatically improve the performance of data access. TimesTen provides a declarative caching mechanism which suports both readonly caching and read-write caching. Data change synchronistion, a standard feature of TimesTen cache, ensures that the cache and the backend database are always in sync.
+TimesTen Classic can also be deployed as a cache for Oracle Database. By caching a subset of your Oracle Database data in a TimesTen cache, you can dramatically improve the performance of data access. TimesTen provides a declarative caching mechanism which supports both read-only caching and read-write caching. Data change synchronization, a standard feature of TimesTen cache, ensures that the cache and the backend database are always in sync.
_TimesTen Scaleout_
A shared nothing distributed database based on the existing TimesTen in-memory technology. TimesTen Scaleout allows databases to transparently scale across dozens of hosts, reach hundreds of terabytes in size and support hundreds of millions of transactions per second without the need for manual database sharding or workload partitioning. Scaleout features include concurrent parallel cross-node processing, transparent data distribution (with single database image) and elastic scaleout and scalein. High availability and fault tolerance are automatically provided through use of Scaleout's K-safety feature. TimesTen Scaleout supports most of the same features and APIs as TimesTen Classic.
-TimesTen Scaleout can also be deployed as a cache for Oracle Database, supporting a subset of the cache features of Timesten Classic.
+TimesTen Scaleout can also be deployed as a cache for Oracle Database, supporting a subset of the cache features of TimesTen Classic.
## How do I get started with TimesTen LiveLabs?
@@ -44,7 +44,7 @@ The TimesTen workshops have the following pre-requisites:
- [Accelerate your Applications: Achieve Blazing Fast SQL With an Oracle TimesTen Cache](https://apexapps.oracle.com/pls/apex/dbpm/r/livelabs/view-workshop?wid=3282)
## TimesTen Related Pages
-- [TimesTen Product Home](https://www.oracle.com/au/application-development/)
+- [TimesTen Product Home](https://www.oracle.com/database/technologies/related/timesten.html)
- [TimesTen Samples on GitHub](https://github.com/oracle-samples/oracle-timesten-samples)
- [TimesTen Blogs](https://blogs.oracle.com/timesten/)
diff --git a/timesten/cache-introduction/02-prepare-setup/prepare-setup.md b/timesten/cache-introduction/02-prepare-setup/prepare-setup.md
index ab3f67082..452a7e656 100644
--- a/timesten/cache-introduction/02-prepare-setup/prepare-setup.md
+++ b/timesten/cache-introduction/02-prepare-setup/prepare-setup.md
@@ -21,7 +21,7 @@ This lab assumes you have:
1. Click on the link below to download the Resource Manager zip file you need to build your environment:
- [ll-timesten-cache-intro.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/zXF3WR--V6CG3ZmB1vgQcEcYYidDhuejeplM9oBUwiYGs-7BnN4YI2_TLVY82_-b/n/natdsecurity/b/stack/o/ll-timesten-cache-intro.zip)
+ [ll-timesten-cache-intro.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/ll-timesten-cache-intro.zip)
2. Save in your downloads folder.
@@ -84,4 +84,4 @@ This workshop requires a certain number of ports to be available, a requirement
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Jenny Bloom, March 2023
+* **Last Updated By/Date** - Jenny Bloom, November 2023
diff --git a/timesten/cache-introduction/04-initialize-environment/initialize-environment.md b/timesten/cache-introduction/04-initialize-environment/initialize-environment.md
index e0a8a27e2..5597b4d71 100644
--- a/timesten/cache-introduction/04-initialize-environment/initialize-environment.md
+++ b/timesten/cache-introduction/04-initialize-environment/initialize-environment.md
@@ -17,7 +17,7 @@ The workshop uses an Oracle database which runs in its own container (**dbhost**
This lab assumes that you:
- Have completed all the previous labs in this workshop, in sequence.
-- Have an open terminal session in the workshop compute instance, either via NoVNC or SSH.
+- Have an open terminal session in the workshop compute instance, either via NoVNC or SSH. Use **oracle** (in lowercase) as the user.
### Start over from the beginning
@@ -58,5 +58,5 @@ Keep your terminal session open ready for the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/05-create-instance/create-instance.md b/timesten/cache-introduction/05-create-instance/create-instance.md
index 5c889eff6..0c4679e84 100644
--- a/timesten/cache-introduction/05-create-instance/create-instance.md
+++ b/timesten/cache-introduction/05-create-instance/create-instance.md
@@ -43,16 +43,17 @@ ls -l
```
total 16
-drwxr-xr-x. 2 oracle oinstall 22 May 26 13:10 bin
-drwxr-xr-x. 2 oracle oinstall 4096 May 26 13:10 queries
-drwxr-xr-x. 2 oracle oinstall 4096 May 26 13:10 scripts
--rw-r--r--. 1 oracle oinstall 316 May 10 12:55 tables_appuser.sql
--rw-r--r--. 1 oracle oinstall 3879 May 10 14:31 tables_oe.sql
+drwxr-xr-x. 2 oracle oinstall 97 Oct 18 15:33 bin
+drwxr-xr-x. 2 oracle oinstall 4096 Oct 18 15:33 extras
+drwxr-xr-x. 2 oracle oinstall 102 Oct 18 15:33 queries
+drwxr-xr-x. 2 oracle oinstall 4096 Oct 18 15:33 scripts
+-rw-r--r--. 1 oracle oinstall 741 Jun 7 2022 tables_appuser.sql
+-rw-r--r--. 1 oracle oinstall 3879 May 10 2022 tables_oe.sql
```
## Task 2: Create a TimesTen instance
-A TimesTen _installation_ is comprised of the TimesTen software components. An installation is created by unzipping the TimesTen software distribution media into a suitable location. For this workshop, the TimesTen software distribution media has already been unzipped into the directory **/shared/sw** to create a TimesTen installation named **tt22.1.1.7.0**.
+A TimesTen _installation_ consists of the TimesTen software components. An installation is created by unzipping the TimesTen software distribution media into a suitable location. For this workshop, the TimesTen software distribution media has already been unzipped into the directory **/shared/sw** to create a TimesTen installation named **tt22.1.1.18.0**.
1. List the top level software directory.
@@ -64,35 +65,35 @@ ls -l /shared/sw
```
total 0
-dr-xr-x---. 17 oracle oinstall 277 May 5 22:20 tt22.1.1.7.0
+dr-xr-x---. 17 oracle oinstall 277 May 5 22:20 tt22.1.1.18.0
```
2. List the contents of the TimesTen installation top level directory.
```
-ls -l /shared/sw/tt22.1.1.7.0
+ls -l /shared/sw/tt22.1.1.18.0
```
```
-total 108
-dr-xr-x---. 3 oracle oinstall 89 May 5 22:20 3rdparty
-dr-xr-x---. 2 oracle oinstall 4096 May 5 22:19 bin
-dr-xr-x---. 4 oracle oinstall 31 May 5 22:19 grid
-dr-xr-x---. 3 oracle oinstall 240 May 5 22:19 include
-dr-xr-x---. 2 oracle oinstall 167 May 5 22:19 info
-dr-xr-x---. 2 oracle oinstall 26 May 5 22:19 kubernetes
-dr-xr-x---. 3 oracle oinstall 4096 May 5 22:19 lib
-dr-xr-x---. 3 oracle oinstall 19 May 5 22:19 network
-dr-xr-x---. 3 oracle oinstall 18 May 5 22:19 nls
-dr-xr-x---. 2 oracle oinstall 242 May 5 22:19 oraclescripts
-dr-xr-x---. 4 oracle oinstall 40 May 5 22:20 PERL
-dr-xr-x---. 7 oracle oinstall 68 May 5 22:19 plsql
--r--r-----. 1 oracle oinstall 99660 May 5 22:19 README.html
-dr-xr-x---. 2 oracle oinstall 54 May 5 22:19 startup
-dr-xr-x---. 2 oracle oinstall 90 May 5 22:19 support
-dr-xr-x---. 3 oracle oinstall 54 May 5 22:20 ttoracle_home
+total 244
+dr-xr-x---. 3 oracle oinstall 89 Sep 7 17:47 3rdparty
+dr-xr-x---. 2 oracle oinstall 4096 Sep 7 17:47 bin
+dr-xr-x---. 4 oracle oinstall 31 Sep 7 17:47 grid
+dr-xr-x---. 3 oracle oinstall 240 Sep 7 17:47 include
+dr-xr-x---. 2 oracle oinstall 167 Sep 7 17:47 info
+dr-xr-x---. 2 oracle oinstall 26 Sep 7 17:47 kubernetes
+dr-xr-x---. 3 oracle oinstall 4096 Sep 7 17:47 lib
+dr-xr-x---. 3 oracle oinstall 19 Sep 7 17:47 network
+dr-xr-x---. 3 oracle oinstall 18 Sep 7 17:47 nls
+dr-xr-x---. 2 oracle oinstall 274 Sep 7 17:47 oraclescripts
+dr-xr-x---. 4 oracle oinstall 40 Sep 7 17:47 PERL
+dr-xr-x---. 7 oracle oinstall 68 Sep 7 17:47 plsql
+-r--r-----. 1 oracle oinstall 241352 Sep 7 17:47 README.html
+dr-xr-x---. 2 oracle oinstall 54 Sep 7 17:47 startup
+dr-xr-x---. 2 oracle oinstall 103 Sep 7 17:47 support
+dr-xr-x---. 3 oracle oinstall 54 Sep 7 17:47 ttoracle_home
```
@@ -104,7 +105,7 @@ When it is operational, a TimesTen instance also includes a set of associated pr
```
-/shared/sw/tt22.1.1.7.0/bin/ttInstanceCreate -location /tt/inst -name ttinst -tnsadmin /shared/tnsadmin
+/shared/sw/tt22.1.1.18.0/bin/ttInstanceCreate -location /tt/inst -name ttinst -tnsadmin /shared/tnsadmin
```
@@ -121,7 +122,7 @@ Run the 'setuproot' script :
This will move the TimesTen startup script into its appropriate location.
The 22.1 Release Notes are located here :
- '/shared/sw/tt22.1.1.7.0/README.html'
+ '/shared/sw/tt22.1.1.18.0/README.html'
Instance created successfully.
@@ -223,7 +224,7 @@ ttVersion
```
```
-TimesTen Release 22.1.1.7.0 (64 bit Linux/x86_64) (ttinst:6624) 2022-05-05T19:45:28Z
+TimesTen Release 22.1.1.18.0 (64 bit Linux/x86_64) (ttinst:6624) 2023-09-07T15:13:39Z
Instance admin: oracle
Instance home directory: /tt/inst/ttinst
Group owner: oinstall
@@ -260,5 +261,5 @@ Keep your terminal session to tthost1 open ready for the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Jenny Bloom, June 2023
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/06-prepare-oracle/prepare-oracle.md b/timesten/cache-introduction/06-prepare-oracle/prepare-oracle.md
index 410566e70..2c47fe428 100644
--- a/timesten/cache-introduction/06-prepare-oracle/prepare-oracle.md
+++ b/timesten/cache-introduction/06-prepare-oracle/prepare-oracle.md
@@ -92,7 +92,10 @@ CREATE TABLE order_items
CREATE UNIQUE INDEX order_items_uk
ON order_items (order_id, product_id) ;
+
+ ...
```
+Press the space bar to scroll through the entire **tables\_oe.sql** file.
Just for information, here are the entity-relationship diagrams showing the relationships between the tables.
@@ -118,10 +121,10 @@ sqlplus sys/RedMan99@orclpdb1 as sysdba
```
-SQL*Plus: Release 19.0.0.0.0 - Production on Tue Jun 21 10:04:41 2022
-Version 19.14.0.0.0
+SQL*Plus: Release 19.0.0.0.0 - Production on Wed Oct 11 18:24:46 2023
+Version 19.19.0.0.0
-Copyright (c) 1982, 2021, Oracle. All rights reserved.
+Copyright (c) 1982, 2022, Oracle. All rights reserved.
Connected to:
@@ -157,27 +160,16 @@ CREATE USER ttcacheadm IDENTIFIED BY ttcacheadm DEFAULT TABLESPACE cachetblsp QU
User created.
```
-4. Grant CREATE SESSION privilege to the user:
-
-```
-
-GRANT CREATE SESSION TO ttcacheadm;
-
-```
-
-```
-Grant succeeded.
-```
## Task 3: Grant required roles and privileges to the cache admin user
-The cache admin user needs various privileges in the Oracle database. In order to simplify granting these, TimesTen includes a SQL script (**\$TIMESTEN_HOME/install/oraclescripts/grantCacheAdminPrivileges.sql**) that can be run to grant them.
+The cache admin user needs various privileges in the Oracle database. In order to simplify granting these, TimesTen includes a SQL script (**$TIMESTEN_HOME/install/oraclescripts/grantCacheAdminPrivileges.sql**) that can be run to grant them.
Run that script in your SQL\*Plus session, passing it the cache admin username (ttcacheadm):
```
-@/tt/inst/ttinst/install/oraclescripts/grantCacheAdminPrivileges.sql ttcacheadm
+@$TIMESTEN_HOME/install/oraclescripts/grantCacheAdminPrivileges.sql ttcacheadm
```
@@ -269,5 +261,5 @@ Keep your terminal session to tthost1 open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/07-prepare-cache/prepare-cache.md b/timesten/cache-introduction/07-prepare-cache/prepare-cache.md
index 3e40be238..14bca3d21 100644
--- a/timesten/cache-introduction/07-prepare-cache/prepare-cache.md
+++ b/timesten/cache-introduction/07-prepare-cache/prepare-cache.md
@@ -29,7 +29,7 @@ This lab assumes that you:
- Cache operations act on cache groups not on individual tables, or on cache instances as opposed to individual rows.
-- Normal SQL operations, such as SELECT, INSERT, UPDATE and DELETE, operate directly on the cache tables and the rows therein.
+- Normal SQL operations, such as SELECT, INSERT, UPDATE and DELETE, operate directly on the cache tables and the rows therein. In this lab, where READONLY cache groups are deployed, only SELECT operations can be used to read from the cache tables.
## Task 1: Create the TimesTen database and prepare it for caching
@@ -57,24 +57,25 @@ connect "DSN=sampledb";
Connection successful: DSN=sampledb;UID=oracle;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
(Default setting AutoCommit=1)
```
-2. Set the Oracle cache administrator username and password:
+
+2. Create a cache administrator user in TimesTen and grant it the privileges necessary to manage cache groups:
```
-call ttCacheUidPwdSet('ttcacheadm','ttcacheadm');
+CREATE USER ttcacheadm IDENTIFIED BY ttcacheadm;
```
-The credentials are stored, encrypted, in the TimesTen database.
-
-3. Start the TimesTen cache agent for the cache database:
-
+```
+User created.
+```
```
-call ttCacheStart;
+GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE, CREATE ANY INDEX, ALTER ANY TABLE, SELECT ANY TABLE TO ttcacheadm;
```
+The password for the cache admin user in TimesTen can be different from the one in Oracle. For simplicity, the same password is used for this user in both TimesTen and Oracle.
-4. Create the application users for the **OE** and **APPUSER** schemas and grant them some necessary privileges:
+3. Create the application users for the **OE** and **APPUSER** schemas and grant them some necessary privileges:
```
@@ -88,7 +89,7 @@ User created.
```
-GRANT CREATE SESSION, CREATE CACHE GROUP, CREATE TABLE TO oe;
+GRANT CREATE SESSION TO oe;
```
@@ -104,12 +105,9 @@ User created.
```
-GRANT CREATE SESSION, CREATE CACHE GROUP, CREATE TABLE TO appuser;
+GRANT CREATE SESSION TO appuser;
```
-
-5. Exit from ttIsql:
-
```
quit
@@ -120,30 +118,46 @@ quit
Disconnecting...
Done.
```
+4. Connect as the user **ttcacheadm** to start the cache agent.
-## Task 2: Create the cache groups
-
-Create the (multiple) cache groups for the **OE** schema tables. To reduce typing and copy/pasting, this lab uses a pre-prepared script to create the cache groups.
-
-1. Use **ttIsql** to connect to the TimesTen cache as the **OE** user:
```
-ttIsql "DSN=sampledb;UID=oe;PWD=oe;OraclePWD=oe"
+ttIsql "dsn=sampledb;uid=ttcacheadm;pwd=ttcacheadm;OraclePWD=ttcacheadm"
```
-
```
-Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved.
+Copyright (c) 1996, 2023, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
-connect "DSN=sampledb;UID=oe;PWD=********;OraclePWD=********";
-Connection successful: DSN=sampledb;UID=oe;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
+connect "dsn=sampledb;uid=ttcacheadm;pwd=********;OraclePWD=********";
+Connection successful: DSN=sampledb;UID=ttcacheadm;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
(Default setting AutoCommit=1)
-Command>
```
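The `name=value;` attribute string that ttIsql echoes on a successful connect can also be inspected programmatically, which is handy when comparing settings such as `PermSize` across labs. A small parsing sketch (plain string handling, not a TimesTen API):

```python
# Parse the attribute string that ttIsql echoes on a successful connect
# into a dict. This only splits the "name=value;..." text shown above.
def parse_conn_attrs(s):
    attrs = {}
    for part in s.split(";"):
        if "=" in part:
            name, _, value = part.partition("=")
            attrs[name.strip()] = value.strip()
    return attrs

line = ("DSN=sampledb;UID=ttcacheadm;DataStore=/tt/db/sampledb;"
        "DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;"
        "LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;"
        "OracleNetServiceName=ORCLPDB1")
attrs = parse_conn_attrs(line)
print(attrs["UID"], attrs["PermSize"])  # ttcacheadm 1024
```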
-2. Run the script to create the OE cache groups:
+5. Set the Oracle cache administrator username and password:
+
+```
+
+call ttCacheUidPwdSet('ttcacheadm','ttcacheadm');
+
+```
+The credentials are stored, encrypted, in the TimesTen database.
+
+6. Start the TimesTen cache agent for the cache database:
+
+```
+
+call ttCacheStart;
+
+```
+
+
+## Task 2: Create the cache groups
+
+Create the (multiple) cache groups for the **OE** schema tables. To reduce typing and copy/pasting, this lab uses a pre-prepared script to create the cache groups.
+
+1. Run the script to create the cache groups with tables owned by OE:
```
@@ -157,16 +171,17 @@ Command>
-- no child tables.
--
-CREATE READONLY CACHE GROUP oe.cg_promotions
+CREATE READONLY CACHE GROUP ttcacheadm.cg_promotions
AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
STATE PAUSED
FROM
-oe.promotions
+oe.promotions
( promo_id NUMBER(6)
, promo_name VARCHAR2(20)
, PRIMARY KEY (promo_id)
);
+
…
--
@@ -179,16 +194,16 @@ CREATE INDEX oe.order_items_fk
```
-3. Display the cachegroups owned by the OE user:
+2. Display the created cache groups. Note that the cache groups are owned by the TTCACHEADM user, but the cache tables are owned by the OE user:
```
-cachegroups oe.%;
+cachegroups;
```
```
-Cache Group OE.CG_CUST_ORDERS:
+Cache Group TTCACHEADM.CG_CUST_ORDERS:
Cache Group Type: Read Only
Autorefresh: Yes
@@ -209,7 +224,7 @@ Cache Group OE.CG_CUST_ORDERS:
Child Table: OE.ORDER_ITEMS
Table Type: Read Only
-Cache Group OE.CG_PROD_INVENTORY:
+Cache Group TTCACHEADM.CG_PROD_INVENTORY:
Cache Group Type: Read Only
Autorefresh: Yes
@@ -230,7 +245,7 @@ Cache Group OE.CG_PROD_INVENTORY:
Child Table: OE.INVENTORIES
Table Type: Read Only
-Cache Group OE.CG_PROMOTIONS:
+Cache Group TTCACHEADM.CG_PROMOTIONS:
Cache Group Type: Read Only
Autorefresh: Yes
@@ -245,12 +260,12 @@ Cache Group OE.CG_PROMOTIONS:
3 cache groups found.
```
+There are 3 cache groups for tables owned by the OE user.
-4. Display the tables owned by the OE user. These are the tables that make up the cache groups:
-
+3. Display the tables owned by the OE user. These are the tables that make up the cache groups:
```
-tables;
+alltables oe.%;
```
@@ -264,10 +279,9 @@ tables;
OE.PROMOTIONS
7 tables found.
```
-
```
-select count(*) from customers;
+select count(*) from oe.customers;
```
@@ -278,7 +292,7 @@ select count(*) from customers;
```
-select count(*) from product_information;
+select count(*) from oe.product_information;
```
@@ -289,7 +303,7 @@ select count(*) from product_information;
```
-select count(*) from promotions;
+select count(*) from oe.promotions;
```
@@ -297,51 +311,19 @@ select count(*) from promotions;
< 0 >
1 row found.
```
-
-5. Exit from ttIsql:
-
-```
-
-quit
-
-```
-
-```
-Disconnecting...
-Done.
-```
-
-The user OE has 3 cache groups, some containing single tables and others containing multiple tables. Currently, all the tables are empty.
+Currently, all the tables are empty.
Create the cache group for the **APPUSER.VPN\_USERS** table. This time you will type, or copy/paste, the individual commands.
-6. Connect to the cache as the user **appuser**:
+4. Create the cache group with the table owned by APPUSER:
```
-ttIsql "DSN=sampledb;UID=appuser;PWD=appuser;OraclePWD=appuser"
-
-```
-
-```
-Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved.
-Type ? or "help" for help, type "exit" to quit ttIsql.
-
-connect "DSN=sampledb;UID=appuser;PWD=********;OraclePWD=********";
-Connection successful: DSN=sampledb;UID=appuser;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
-(Default setting AutoCommit=1)
-Command>
-```
-
-7. Create the cache group.
-
-```
-
-CREATE READONLY CACHE GROUP appuser.cg_vpn_users
+CREATE READONLY CACHE GROUP ttcacheadm.cg_vpn_users
AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
STATE PAUSED
FROM
-vpn_users
+appuser.vpn_users
( vpn_id NUMBER(5) NOT NULL
, vpn_nb NUMBER(5) NOT NULL
, directory_nb CHAR(10 BYTE) NOT NULL
@@ -352,16 +334,16 @@ vpn_users
```
-8. Display the cachegroup and table:
+5. Display the cachegroup and table:
```
-cachegroups appuser.%;
+cachegroups cg_vpn_users;
```
```
-Cache Group APPUSER.CG_VPN_USERS:
+Cache Group TTCACHEADM.CG_VPN_USERS:
Cache Group Type: Read Only
Autorefresh: Yes
@@ -376,10 +358,9 @@ Cache Group APPUSER.CG_VPN_USERS:
1 cache group found.
```
-
```
-tables;
+alltables appuser.%;
```
@@ -390,7 +371,7 @@ tables;
```
-select count(*) from vpn_users;
+select count(*) from appuser.vpn_users;
```
@@ -398,15 +379,13 @@ select count(*) from vpn_users;
< 0 >
1 row found.
```
-
-9. Exit from ttIsql:
+6. Exit from ttIsql:
```
quit
```
-
```
Disconnecting...
Done.
@@ -417,15 +396,15 @@ The TimesTen mechanism that captures data changes that occur in the Oracle datab
```
Autorefresh State: Paused
```
-In order to pre-populate the cache tables and activate the AUTOREFRESH mechanism you must load the cache groups.
+In order to pre-populate the cache tables and activate the AUTOREFRESH mechanism you must load the cache groups.
You can now **proceed to the next lab**.
Keep your terminal session to tthost1 open for use in the next lab.
-## Acknowledgements
+## Acknowledgements
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/08-load-cache/load-cache.md b/timesten/cache-introduction/08-load-cache/load-cache.md
index 251b5f019..0868f695d 100644
--- a/timesten/cache-introduction/08-load-cache/load-cache.md
+++ b/timesten/cache-introduction/08-load-cache/load-cache.md
@@ -8,7 +8,7 @@ In this lab, you will load data from the Oracle tables into the TimesTen cache t
### Objectives
-- Load the APPUSER and OE cache groups.
+- Load the cache groups for the APPUSER and OE cache tables.
This task is accomplished using SQL statements, so can be easily performed from application code if required.
@@ -25,22 +25,22 @@ As you saw in the previous lab, when a READONLY cache group is first created its
Loading the cache group populates the cache tables with the data from the Oracle database and also activates the AUTOREFRESH mechanism. The load occurs in such a manner that if any changes occur to the data in the Oracle database while the load is in progress, those changes will be captured. The captured changes are then autorefreshed to TimesTen once the load is completed.
-Load the APPUSER.CG\_VPN\_USERS cache group (1 million rows) and then examine the cache group and table.
+Load the CG\_VPN\_USERS cache group (1 million rows) and then examine the cache group and table.
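The load-while-capturing guarantee described above can be pictured with a toy model (an illustration of the ordering only, not TimesTen internals): changes that arrive during the bulk load are queued, then applied once the load completes, so no update is lost:

```python
# Toy model of loading a cache while changes keep arriving.
# Changes made during the load are queued and applied afterwards,
# so the cache ends up consistent with the source. Not TimesTen code.
def load_with_capture(source, changes_during_load):
    cache = {}
    for key, value in source.items():      # bulk load of the snapshot
        cache[key] = value
    for key, value in changes_during_load: # captured changes, applied
        cache[key] = value                 # after the load completes
    return cache

source = {1: "a", 2: "b"}
cache = load_with_capture(source, [(2, "b2"), (3, "c")])
print(cache)  # {1: 'a', 2: 'b2', 3: 'c'}
```

Because the queued change to key `2` is applied after the snapshot value, the cache reflects the latest state of the source, mirroring how captured Oracle changes are autorefreshed into TimesTen after the load.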
-1. Connect to the cache as the user **appuser**:
+1. Connect to the cache as the user **ttcacheadm**:
```
-ttIsql "DSN=sampledb;UID=appuser;PWD=appuser;OraclePWD=appuser"
+ttIsql "dsn=sampledb;uid=ttcacheadm;pwd=ttcacheadm;OraclePWD=ttcacheadm"
```
```
-Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved.
+Copyright (c) 1996, 2023, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
-connect "DSN=sampledb;UID=appuser;PWD=********;OraclePWD=********";
-Connection successful: DSN=sampledb;UID=appuser;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
+connect "dsn=sampledb;uid=ttcacheadm;pwd=********;OraclePWD=********";
+Connection successful: DSN=sampledb;UID=ttcacheadm;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
(Default setting AutoCommit=1)
Command>
```
@@ -49,7 +49,7 @@ Command>
```
-LOAD CACHE GROUP appuser.cg_vpn_users COMMIT EVERY 1024 ROWS;
+LOAD CACHE GROUP cg_vpn_users COMMIT EVERY 1024 ROWS;
```
@@ -66,7 +66,7 @@ cachegroups cg_vpn_users;
```
```
-Cache Group APPUSER.CG_VPN_USERS:
+Cache Group TTCACHEADM.CG_VPN_USERS:
Cache Group Type: Read Only
Autorefresh: Yes
@@ -84,25 +84,11 @@ Cache Group APPUSER.CG_VPN_USERS:
Note that the state of autorefresh has now changed to **On**.
-
-4. Display the cache group tables:
+4. Check the row count of the cache table:
```
-tables;
-
-```
-
-```
- APPUSER.VPN_USERS
-1 table found.
-```
-
-5. Check the row count:
-
-```
-
-select count(*) from vpn_users;
+select count(*) from appuser.vpn_users;
```
@@ -111,55 +97,24 @@ select count(*) from vpn_users;
1 row found.
```
-6. Update optimizer statistics for all the tables in the APPUSER schema:
-
-```
-
-statsupdate;
-
-```
-
-7. Exit from ttIsql:
+5. Update optimizer statistics on the appuser.vpn_users table:
```
-quit
+statsupdate appuser.vpn_users;
```
-```
-Disconnecting...
-Done.
-```
-
## Task 2: Load the OE cache groups
-Now do the same for the OE schema cache groups.
+Now do the same for the cache groups on the OE cache tables.
-1. Connect to the cache as the OE user:
+1. Load the CG\_PROMOTIONS cache group:
```
-ttIsql "DSN=sampledb;UID=oe;PWD=oe;OraclePWD=oe"
-
-```
-
-```
-Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved.
-Type ? or "help" for help, type "exit" to quit ttIsql.
-
-connect "DSN=sampledb;UID=oe;PWD=********;OraclePWD=********";
-Connection successful: DSN=sampledb;UID=oe;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
-(Default setting AutoCommit=1)
-Command>
-```
-
-2. Load the CG\_PROMOTIONS cache group:
-
-```
-
-LOAD CACHE GROUP oe.cg_promotions COMMIT EVERY 1024 ROWS;
+LOAD CACHE GROUP cg_promotions COMMIT EVERY 1024 ROWS;
```
@@ -167,11 +122,11 @@ LOAD CACHE GROUP oe.cg_promotions COMMIT EVERY 1024 ROWS;
2 cache instances affected.
```
-3. Load the CG\_PROD\_INVENTORY cache group:
+2. Load the CG\_PROD\_INVENTORY cache group:
```
-LOAD CACHE GROUP oe.cg_prod_inventory COMMIT EVERY 1024 ROWS;
+LOAD CACHE GROUP cg_prod_inventory COMMIT EVERY 1024 ROWS;
```
@@ -179,11 +134,11 @@ LOAD CACHE GROUP oe.cg_prod_inventory COMMIT EVERY 1024 ROWS;
288 cache instances affected.
```
-4. Load the CG\_CUST\_ORDERS cache group:
+3. Load the CG\_CUST\_ORDERS cache group:
```
-LOAD CACHE GROUP oe.cg_cust_orders COMMIT EVERY 1024 ROWS;
+LOAD CACHE GROUP cg_cust_orders COMMIT EVERY 1024 ROWS;
```
@@ -195,34 +150,21 @@ LOAD CACHE GROUP oe.cg_cust_orders COMMIT EVERY 1024 ROWS;
```
-statsupdate;
-
-```
-
-6. Display the cache group tables:
-
-```
-
-tables;
+statsupdate oe.customers;
+statsupdate oe.inventories;
+statsupdate oe.orders;
+statsupdate oe.order_items;
+statsupdate oe.product_descriptions;
+statsupdate oe.product_information;
+statsupdate oe.promotions;
```
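Since one statsupdate command is now issued per table, the command list above can be generated from a table list rather than typed by hand; a simple templating sketch (plain string formatting, not a TimesTen API):

```python
# Generate the per-table ttIsql statsupdate commands shown above
# from a list of qualified table names.
tables = ["oe.customers", "oe.inventories", "oe.orders", "oe.order_items",
          "oe.product_descriptions", "oe.product_information", "oe.promotions"]
commands = [f"statsupdate {t};" for t in tables]
print("\n".join(commands))
```

The resulting text can be pasted into a ttIsql session or saved as a script file.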
-```
- OE.CUSTOMERS
- OE.INVENTORIES
- OE.ORDERS
- OE.ORDER_ITEMS
- OE.PRODUCT_DESCRIPTIONS
- OE.PRODUCT_INFORMATION
- OE.PROMOTIONS
-7 tables found.
-```
-
-7. Check the row count for CUSTOMERS:
+6. Check the row count for the OE.CUSTOMERS table:
```
-select count(*) from customers;
+select count(*) from oe.customers;
```
@@ -231,11 +173,11 @@ select count(*) from customers;
1 row found.
```
-8. Check the row count for INVENTORIES:
+7. Check the row count for the OE.INVENTORIES table:
```
-select count(*) from inventories;
+select count(*) from oe.inventories;
```
@@ -244,11 +186,11 @@ select count(*) from inventories;
1 row found.
```
-9. Check the row count for ORDERS:
+8. Check the row count for the OE.ORDERS table:
```
-select count(*) from orders;
+select count(*) from oe.orders;
```
@@ -257,11 +199,11 @@ select count(*) from orders;
1 row found.
```
-10. Check the row count for ORDER\_ITEMS:
+9. Check the row count for the OE.ORDER\_ITEMS table:
```
-select count(*) from order_items;
+select count(*) from oe.order_items;
```
@@ -270,11 +212,11 @@ select count(*) from order_items;
1 row found.
```
-11. Check the row count for PRODUCT\_DESCRIPTIONS:
+10. Check the row count for the OE.PRODUCT\_DESCRIPTIONS table:
```
-select count(*) from product_descriptions;
+select count(*) from oe.product_descriptions;
```
@@ -283,11 +225,11 @@ select count(*) from product_descriptions;
1 row found.
```
-12. Check the row count for PRODUCT\_INFORMATION:
+11. Check the row count for the OE.PRODUCT\_INFORMATION table:
```
-select count(*) from product_information;
+select count(*) from oe.product_information;
```
@@ -296,11 +238,11 @@ select count(*) from product_information;
1 row found.
```
-13. Check the row count for PROMOTIONS:
+12. Check the row count for the OE.PROMOTIONS table:
```
-select count(*) from promotions;
+select count(*) from oe.promotions;
```
@@ -309,7 +251,7 @@ select count(*) from promotions;
1 row found.
```
-14. Exit from ttIsql:
+13. Exit from ttIsql:
```
@@ -330,5 +272,5 @@ Keep your terminal session to tthost1 open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/09-cache-refresh/cache-refresh.md b/timesten/cache-introduction/09-cache-refresh/cache-refresh.md
index f8a0e86c1..92972b433 100644
--- a/timesten/cache-introduction/09-cache-refresh/cache-refresh.md
+++ b/timesten/cache-introduction/09-cache-refresh/cache-refresh.md
@@ -76,12 +76,12 @@ sqlplus oe/oe@orclpdb1
```
```
-SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 15 14:39:50 2022
-Version 19.14.0.0.0
+SQL*Plus: Release 19.0.0.0.0 - Production on Wed Oct 11 21:56:56 2023
+Version 19.19.0.0.0
-Copyright (c) 1982, 2021, Oracle. All rights reserved.
+Copyright (c) 1982, 2022, Oracle. All rights reserved.
-Last Successful login time: Tue May 10 2022 14:18:58 +00:00
+Last Successful login time: Mon Jan 09 2023 11:30:47 +00:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
@@ -310,5 +310,5 @@ Keep your primary session open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/10-oltp-performance/oltp-performance.md b/timesten/cache-introduction/10-oltp-performance/oltp-performance.md
index 3a68e8da2..78709b4a2 100644
--- a/timesten/cache-introduction/10-oltp-performance/oltp-performance.md
+++ b/timesten/cache-introduction/10-oltp-performance/oltp-performance.md
@@ -8,6 +8,8 @@ You will use a standard TimesTen benchmark program, tptbmOCI, that connects to t
**Estimated Lab Time:** 6 minutes
+**IMPORTANT:** The following information is provided to help you understand the different components of the benchmark used in this lab. You do not need to run any of the commands shown below; the commands to run begin in Task 1.
+
There is a single table used for this benchmark, APPUSER.VPN_USERS. Here is the definition of the table as it exists in the Oracle database:
```
@@ -199,5 +201,5 @@ Keep your primary session open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/11-query-performance/query-performance.md b/timesten/cache-introduction/11-query-performance/query-performance.md
index 3aebb5568..24219a6b6 100644
--- a/timesten/cache-introduction/11-query-performance/query-performance.md
+++ b/timesten/cache-introduction/11-query-performance/query-performance.md
@@ -6,7 +6,7 @@ In this lab, you run queries against the TimesTen cache and against the Oracle d
**Estimated Lab Time:** 10 minutes
-**IMPORTANT:** As noted in the previous lab, there are many factors that can affect performance. As a result, the performance numbers shown in this lab are indicative only. The numbers that _you_ measure will differ and may be slightly better or slightly worse.
+**IMPORTANT:** As noted in the previous lab, there are many factors that can affect performance. As a result, the performance numbers shown in this lab are indicative only. The numbers that _you_ measure will differ and may be slightly better or slightly worse. The following information is provided to help you understand the different components of the benchmark used in this lab. You do not need to run any of the commands shown below; the commands to run begin in Task 1.
When timing database query execution, it is important to understand what you are timing. Otherwise, you may get misleading results.
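The point above can be illustrated generically: the first execution of a query typically pays one-off costs (statement parsing, plan compilation, cold caches), so timing only that execution is misleading. A common approach is to discard one or more warm-up runs and average the rest (an illustrative Python sketch, not part of this lab; `fake_query` is a hypothetical stand-in for a real query):

```python
import time

def time_query(execute, warmup=1, runs=5):
    """Time execute(), discarding `warmup` initial runs whose one-off
    costs (parse, plan compile, cold caches) would skew the result."""
    for _ in range(warmup):
        execute()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        execute()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Usage with a stand-in "query" that is slow only on its first call,
# mimicking one-off parse/compile overhead.
state = {"calls": 0}
def fake_query():
    state["calls"] += 1
    if state["calls"] == 1:
        time.sleep(0.05)  # simulate the one-off first-execution cost

avg = time_query(fake_query)  # warm-up run absorbs the slow first call
```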
@@ -269,7 +269,7 @@ Now run the queries against the TimesTen cache:
```
```
-info: connected to 'sampledb' (Oracle TimesTen IMDB version 22.1.1.7.0)
+info: connected to 'sampledb' (Oracle TimesTen IMDB version 22.1.1.18.0)
info: running queries from file 'queries/query_all.sql'
info: ========================================
info: executing query #1
@@ -355,5 +355,5 @@ Keep your primary session open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/12-dynamic-caching/dynamic-caching.md b/timesten/cache-introduction/12-dynamic-caching/dynamic-caching.md
index 555975f35..7520e6d38 100644
--- a/timesten/cache-introduction/12-dynamic-caching/dynamic-caching.md
+++ b/timesten/cache-introduction/12-dynamic-caching/dynamic-caching.md
@@ -31,20 +31,20 @@ This lab assumes that you:
## Task 1: Create a DYNAMIC READONLY cache group
-1. Use ttIsql to connect to the cache as the APPUSER user:
+1. Use ttIsql to connect to the cache as the TTCACHEADM user:
```
-ttIsql "DSN=sampledb;UID=appuser;PWD=appuser;OraclePWD=appuser"
+ttIsql "dsn=sampledb;uid=ttcacheadm;pwd=ttcacheadm;OraclePWD=ttcacheadm"
```
```
-Copyright (c) 1996, 2022, Oracle and/or its affiliates. All rights reserved.
+Copyright (c) 1996, 2023, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
-connect "DSN=sampledb;UID=appuser;PWD=********;OraclePWD=********";
-Connection successful: DSN=sampledb;UID=appuser;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
+connect "dsn=sampledb;uid=ttcacheadm;pwd=********;OraclePWD=********";
+Connection successful: DSN=sampledb;UID=ttcacheadm;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
(Default setting AutoCommit=1)
Command>
```
@@ -52,16 +52,15 @@ Command>
```
-CREATE DYNAMIC READONLY CACHE GROUP appuser.cg_parent_child
-AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
-STATE ON
+CREATE DYNAMIC READONLY CACHE GROUP ttcacheadm.cg_parent_child
+AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
FROM
-parent
+appuser.parent
( parent_id NUMBER(8) NOT NULL
, parent_c1 VARCHAR2(20 BYTE)
, PRIMARY KEY (parent_id)
),
-child
+appuser.child
( child_id NUMBER(8) NOT NULL
, parent_id NUMBER(8) NOT NULL
, child_c1 VARCHAR2(20 BYTE)
@@ -82,12 +81,12 @@ cachegroups cg_parent_child;
```
```
-Cache Group APPUSER.CG_PARENT_CHILD:
+Cache Group TTCACHEADM.CG_PARENT_CHILD:
Cache Group Type: Read Only (Dynamic)
Autorefresh: Yes
Autorefresh Mode: Incremental
- Autorefresh State: On
+ Autorefresh State: Paused
Autorefresh Interval: 2 Seconds
Autorefresh Status: ok
Aging: LRU on
@@ -100,19 +99,51 @@ Cache Group APPUSER.CG_PARENT_CHILD:
Table Type: Read Only
```
-Note that in this case the Autorefresh state is immediately set to _on_.
+Note that the Autorefresh state is _Paused_.
```
-Autorefresh State: On
+Autorefresh State: Paused
```
-This is because you do not need to perform an initial load for a dynamic cache group (though it is possible to do so).
+Unlike a static cache group, a dynamic cache group does not require an initial load (though it is possible to perform one). The Autorefresh state changes from _Paused_ to _On_ immediately after the first on-demand dynamic load operation.
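The dynamic-load behavior described above resembles a read-through cache: a miss fetches the row from the backing store, inserts it locally, and activates the cache. A minimal analogy in Python (purely illustrative; in TimesTen the mechanism is implemented by the cache agent, not by application code):

```python
# Read-through cache analogy for on-demand dynamic load (illustrative only):
# a miss pulls the row from the backing store, caches it locally, and marks
# the cache active -- loosely mirroring the Paused -> On state change.
class DynamicCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the Oracle database
        self.rows = {}                 # stands in for the cached TimesTen table
        self.state = "Paused"

    def get(self, key):
        if key not in self.rows:                # cache miss
            self.rows[key] = self.backing[key]  # on-demand dynamic load
            self.state = "On"
        return self.rows[key]

# Usage: the first query for a key triggers the load and activates the cache.
oracle_side = {4: ("Parent 4",)}
cache = DynamicCache(oracle_side)
row = cache.get(4)
```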
-4. Check the state of the tables:
+4. Disconnect from the cache as the TTCACHEADM user:
```
-select count(*) from appuser.parent;
+quit
+
+```
+
+```
+Disconnecting...
+Done.
+```
+
+
+## Task 2: Run some queries and observe the behavior
+
+1. Use ttIsql to connect to the cache as the APPUSER user:
+```
+
+ttIsql "dsn=sampledb;uid=appuser;pwd=appuser;OraclePwd=appuser"
+
+```
+```
+Copyright (c) 1996, 2023, Oracle and/or its affiliates. All rights reserved.
+Type ? or "help" for help, type "exit" to quit ttIsql.
+
+connect "dsn=sampledb;uid=appuser;pwd=********;OraclePwd=********";
+Connection successful: DSN=sampledb;UID=appuser;DataStore=/tt/db/sampledb;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=AL32UTF8;LogFileSize=256;LogBufMB=256;PermSize=1024;TempSize=256;OracleNetServiceName=ORCLPDB1;
+(Default setting AutoCommit=1)
+Command>
+```
+
+2. Check the state of the tables:
+
+```
+
+select count(*) from parent;
```
@@ -123,7 +154,7 @@ select count(*) from appuser.parent;
```
-select count(*) from appuser.child;
+select count(*) from child;
```
@@ -134,9 +165,7 @@ select count(*) from appuser.child;
There is no data in the cached tables.
-## Task 2: Run some queries and observe the behavior
-
-1. Run a query that references the root table's (PARENT table) primary key:
+3. Run a query that references the root table's (PARENT table) primary key:
```
@@ -151,7 +180,35 @@ select * from parent where parent_id = 4;
Even though the table was empty, the query returns a result!
-2. Next, examine the contents of both tables again:
+```
+
+cachegroups ttcacheadm.cg_parent_child;
+
+```
+```
+Cache Group TTCACHEADM.CG_PARENT_CHILD:
+
+ Cache Group Type: Read Only (Dynamic)
+ Autorefresh: Yes
+ Autorefresh Mode: Incremental
+ Autorefresh State: On
+ Autorefresh Interval: 2 Seconds
+ Autorefresh Status: ok
+ Aging: LRU on
+
+ Root Table: APPUSER.PARENT
+ Table Type: Read Only
+
+
+ Child Table: APPUSER.CHILD
+ Table Type: Read Only
+
+1 cache group found.
+
+```
+Note that the Autorefresh State is now set to _On_ because a background dynamic load operation occurred when the SELECT statement above was executed.
+
+4. Next, examine the contents of both tables again:
```
@@ -183,7 +240,7 @@ Now that these rows exist in the cache, they will satisfy future read requests.
Let’s try something a little more sophisticated.
-3. Query a row in the CHILD table with a join back to the PARENT table:
+5. Query a row in the CHILD table with a join back to the PARENT table:
```
@@ -248,5 +305,5 @@ Keep your primary session open for use in the next lab.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Chris Jenkins, July 2022
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/timesten/cache-introduction/13-shutdown/shutdown.md b/timesten/cache-introduction/13-shutdown/shutdown.md
index 862bd3517..00541219c 100644
--- a/timesten/cache-introduction/13-shutdown/shutdown.md
+++ b/timesten/cache-introduction/13-shutdown/shutdown.md
@@ -29,39 +29,39 @@ ttStatus
```
```
-TimesTen status report as of Mon Jun 13 13:33:24 2022
+TimesTen status report as of Wed Oct 11 23:36:38 2023
-Daemon pid 256 port 6624 instance ttinst
-TimesTen server pid 263 started on port 6625
+Daemon pid 407 port 6624 instance ttinst
+TimesTen server pid 414 started on port 6625
------------------------------------------------------------------------
------------------------------------------------------------------------
Data store /tt/db/sampledb
-Daemon pid 256 port 6624 instance ttinst
-TimesTen server pid 263 started on port 6625
+Daemon pid 407 port 6624 instance ttinst
+TimesTen server pid 414 started on port 6625
There are 20 connections to the data store
-Shared Memory Key 0x03009db0 ID 2
-PL/SQL Memory Key 0x02009db0 ID 1 Address 0x5000000000
+Shared Memory Key 0x0f00307d ID 8
+PL/SQL Memory Key 0x0e00307d ID 7 Address 0x5000000000
Type PID Context Connection Name ConnID
-Cache Agent 388 0x0000000001a1f040 Marker(139835503965952) 5
-Cache Agent 388 0x00007f2de8020bf0 LogSpaceMon(139835506071296) 4
-Cache Agent 388 0x00007f2df41497a0 Garbage Collector(139835502913 6
-Cache Agent 388 0x00007f2df801f400 Timer 3
-Cache Agent 388 0x00007f2df8237990 Refresher(D,2000) 10
-Cache Agent 388 0x00007f2df8378090 Refresher(S,2000) 9
-Cache Agent 388 0x00007f2e7401fbe0 BMReporter(139835497711360) 8
-Cache Agent 388 0x00007f2e7807cec0 Handler 2
-Subdaemon 261 0x0000000002229fb0 Manager 2047
-Subdaemon 261 0x00000000022aaf30 Rollback 2046
-Subdaemon 261 0x0000000002329da0 Aging 2041
-Subdaemon 261 0x00007f8f58000b60 Checkpoint 2042
-Subdaemon 261 0x00007f8f5807ffb0 HistGC 2039
-Subdaemon 261 0x00007f8f60000b60 Monitor 2044
-Subdaemon 261 0x00007f8f6007ffb0 IndexGC 2038
-Subdaemon 261 0x00007f8f64000b60 Deadlock Detector 2043
-Subdaemon 261 0x00007f8f6407ffb0 Log Marker 2040
-Subdaemon 261 0x00007f8f68000b60 Flusher 2045
-Subdaemon 261 0x00007f8f680a1bb0 XactId Rollback 2037
-Subdaemon 261 0x00007f8fd40b6080 Garbage Collector 2036
+Cache Agent 1146 0x00007f9314021020 Marker(140272157079296) 5
+Cache Agent 1146 0x00007f9314254300 LogSpaceMon(140272159184640) 6
+Cache Agent 1146 0x00007f9320149ae0 Garbage Collector(140270040340 4
+Cache Agent 1146 0x00007f932401f740 Timer 3
+Cache Agent 1146 0x00007f93242ac230 Refresher(D,2000) 10
+Cache Agent 1146 0x00007f932799e6a0 Refresher(S,2000) 9
+Cache Agent 1146 0x00007f93a001ff20 BMReporter(140270035138304) 8
+Cache Agent 1146 0x00007f93a407d5c0 Handler 2
+Subdaemon 411 0x0000000002666fc0 Manager 2047
+Subdaemon 411 0x0000000002708200 Rollback 2046
+Subdaemon 411 0x00000000027a7330 XactId Rollback 2037
+Subdaemon 411 0x00007f5bd4000b60 Monitor 2043
+Subdaemon 411 0x00007f5bd40a0400 Garbage Collector 2036
+Subdaemon 411 0x00007f5bdc000b60 Checkpoint 2042
+Subdaemon 411 0x00007f5be0000b60 Deadlock Detector 2044
+Subdaemon 411 0x00007f5be4000b60 Flusher 2045
+Subdaemon 411 0x00007f5be40c2410 Aging 2041
+Subdaemon 411 0x00007f5c50000df0 HistGC 2040
+Subdaemon 411 0x00007f5c541e54a0 Log Marker 2039
+Subdaemon 411 0x00007f5c58048860 IndexGC 2038
Open for user connections
Replication policy : Manual
Cache Agent policy : Manual
@@ -69,7 +69,8 @@ Cache agent is running.
PL/SQL enabled.
------------------------------------------------------------------------
Accessible by group oinstall
-End of report
+End of report
+
```
The database is active and is loaded in memory because the cache agent is connected to it.
@@ -100,15 +101,15 @@ ttStatus
```
```
-TimesTen status report as of Mon Jun 13 13:35:38 2022
+TimesTen status report as of Wed Oct 11 23:37:45 2023
-Daemon pid 256 port 6624 instance ttinst
-TimesTen server pid 263 started on port 6625
+Daemon pid 407 port 6624 instance ttinst
+TimesTen server pid 414 started on port 6625
------------------------------------------------------------------------
------------------------------------------------------------------------
Data store /tt/db/sampledb
-Daemon pid 256 port 6624 instance ttinst
-TimesTen server pid 263 started on port 6625
+Daemon pid 407 port 6624 instance ttinst
+TimesTen server pid 414 started on port 6625
There are no connections to the data store
Open for user connections
Replication policy : Manual
@@ -132,7 +133,7 @@ ttDaemonAdmin -stop
```
```
-TimesTen Daemon (PID: 190, port: 6624) stopped.
+TimesTen Daemon (PID: 407, port: 6624) stopped.
```
## Task 3: Log out of the TimesTen host
@@ -156,5 +157,5 @@ You can now **proceed to the Wrap Up**.
* **Author** - Chris Jenkins, Senior Director, TimesTen Product Management
* **Contributors** - Doug Hood & Jenny Bloom, TimesTen Product Management
-* **Last Updated By/Date** - Jenny Bloom, March 2023
+* **Last Updated By/Date** - Jenny Bloom, October 2023
diff --git a/tmm-run-sample-apps/env-setup/images/19c-remote-desktop.png b/tmm-run-sample-apps/env-setup/images/19c-remote-desktop.png
new file mode 100644
index 000000000..5d52ac851
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/19c-remote-desktop.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/app-info.png b/tmm-run-sample-apps/env-setup/images/app-info.png
new file mode 100644
index 000000000..9fc41b38b
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/app-info.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/apply-in-progress.png b/tmm-run-sample-apps/env-setup/images/apply-in-progress.png
new file mode 100644
index 000000000..7679374fa
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/apply-in-progress.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/apply-job-success.png b/tmm-run-sample-apps/env-setup/images/apply-job-success.png
new file mode 100644
index 000000000..313fca735
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/apply-job-success.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/applyresults.png b/tmm-run-sample-apps/env-setup/images/applyresults.png
new file mode 100644
index 000000000..033a0b96d
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/applyresults.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/applyresults2.png b/tmm-run-sample-apps/env-setup/images/applyresults2.png
new file mode 100644
index 000000000..b8e8dc2bb
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/applyresults2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/auto-ssh.png b/tmm-run-sample-apps/env-setup/images/auto-ssh.png
new file mode 100644
index 000000000..3c003addf
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/auto-ssh.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/chmod.png b/tmm-run-sample-apps/env-setup/images/chmod.png
new file mode 100644
index 000000000..cd8134262
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/chmod.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/choose-ssh.png b/tmm-run-sample-apps/env-setup/images/choose-ssh.png
new file mode 100644
index 000000000..a2d946c13
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/choose-ssh.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/click-create.png b/tmm-run-sample-apps/env-setup/images/click-create.png
new file mode 100644
index 000000000..84e6ac91e
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/click-create.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/compute-fixed.png b/tmm-run-sample-apps/env-setup/images/compute-fixed.png
new file mode 100644
index 000000000..e3edb438d
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/compute-fixed.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/compute-flex.png b/tmm-run-sample-apps/env-setup/images/compute-flex.png
new file mode 100644
index 000000000..8de486df9
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/compute-flex.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/copy-private-key.png b/tmm-run-sample-apps/env-setup/images/copy-private-key.png
new file mode 100644
index 000000000..f85fe8366
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/copy-private-key.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-10.png b/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-10.png
new file mode 100644
index 000000000..cd8134262
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-10.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-11.png b/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-11.png
new file mode 100644
index 000000000..660e104d3
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/create-stack-novnc-ssh-11.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/create-stack.png b/tmm-run-sample-apps/env-setup/images/create-stack.png
new file mode 100644
index 000000000..563d2610a
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/create-stack.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-cloudshell-ssh.png b/tmm-run-sample-apps/env-setup/images/em-cloudshell-ssh.png
new file mode 100644
index 000000000..31329af71
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-cloudshell-ssh.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-create-stack-1.png b/tmm-run-sample-apps/env-setup/images/em-create-stack-1.png
new file mode 100644
index 000000000..3c9c3a246
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-create-stack-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-create-stack.png b/tmm-run-sample-apps/env-setup/images/em-create-stack.png
new file mode 100644
index 000000000..3b98089fb
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-create-stack.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-mac-linux-ssh-login.png b/tmm-run-sample-apps/env-setup/images/em-mac-linux-ssh-login.png
new file mode 100644
index 000000000..7cd3b99f1
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-mac-linux-ssh-login.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-public-subnet.png b/tmm-run-sample-apps/env-setup/images/em-public-subnet.png
new file mode 100644
index 000000000..26ad6c0d0
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-public-subnet.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-review-create-stack-compute-only.png b/tmm-run-sample-apps/env-setup/images/em-review-create-stack-compute-only.png
new file mode 100644
index 000000000..9bc0591d2
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-review-create-stack-compute-only.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-review-create-stack.png b/tmm-run-sample-apps/env-setup/images/em-review-create-stack.png
new file mode 100644
index 000000000..0b021c558
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-review-create-stack.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-1.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-1.png
new file mode 100644
index 000000000..ff881a261
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-2.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-2.png
new file mode 100644
index 000000000..27f18eb17
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-0.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-0.png
new file mode 100644
index 000000000..195a62dbe
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-0.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-1.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-1.png
new file mode 100644
index 000000000..352e53050
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-2.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-2.png
new file mode 100644
index 000000000..6c8be5c5e
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-3.png b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-3.png
new file mode 100644
index 000000000..f6860f0ab
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-apply-results-3.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-config-existing-vcn.png b/tmm-run-sample-apps/env-setup/images/em-stack-config-existing-vcn.png
new file mode 100644
index 000000000..c9bda59a2
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-config-existing-vcn.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-config-flex.png b/tmm-run-sample-apps/env-setup/images/em-stack-config-flex.png
new file mode 100644
index 000000000..7aa08c265
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-config-flex.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-details-b.png b/tmm-run-sample-apps/env-setup/images/em-stack-details-b.png
new file mode 100644
index 000000000..cfe5e70e0
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-details-b.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-details-post-plan.png b/tmm-run-sample-apps/env-setup/images/em-stack-details-post-plan.png
new file mode 100644
index 000000000..8a91e4699
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-details-post-plan.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-details.png b/tmm-run-sample-apps/env-setup/images/em-stack-details.png
new file mode 100644
index 000000000..7de31a51b
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-details.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-1.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-1.png
new file mode 100644
index 000000000..ee2b8bd8a
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-2.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-2.png
new file mode 100644
index 000000000..8d9311b48
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-1.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-1.png
new file mode 100644
index 000000000..d744188a8
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-2.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-2.png
new file mode 100644
index 000000000..1c9c03418
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-3.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-3.png
new file mode 100644
index 000000000..11a07de94
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-3.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-4.png b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-4.png
new file mode 100644
index 000000000..391041920
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-stack-plan-results-4.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/em-use-existing-vcn.png b/tmm-run-sample-apps/env-setup/images/em-use-existing-vcn.png
new file mode 100644
index 000000000..90ddd9026
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/em-use-existing-vcn.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/error-ad-mismatch.png b/tmm-run-sample-apps/env-setup/images/error-ad-mismatch.png
new file mode 100644
index 000000000..35f8e474a
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/error-ad-mismatch.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/essbase-click-url.png b/tmm-run-sample-apps/env-setup/images/essbase-click-url.png
new file mode 100644
index 000000000..f6130ef2b
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/essbase-click-url.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/essbase-fixed-shape.png b/tmm-run-sample-apps/env-setup/images/essbase-fixed-shape.png
new file mode 100644
index 000000000..f42d29102
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/essbase-fixed-shape.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/essbase-main-config.png b/tmm-run-sample-apps/env-setup/images/essbase-main-config.png
new file mode 100644
index 000000000..8cc16f944
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/essbase-main-config.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/essbase-remote-desktop.png b/tmm-run-sample-apps/env-setup/images/essbase-remote-desktop.png
new file mode 100644
index 000000000..c0511afd2
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/essbase-remote-desktop.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/firefox-launch-1.png b/tmm-run-sample-apps/env-setup/images/firefox-launch-1.png
new file mode 100644
index 000000000..b3e25b316
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/firefox-launch-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/firefox-launch-2.png b/tmm-run-sample-apps/env-setup/images/firefox-launch-2.png
new file mode 100644
index 000000000..443cc1667
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/firefox-launch-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/fixed-shape.png b/tmm-run-sample-apps/env-setup/images/fixed-shape.png
new file mode 100644
index 000000000..858b8fd16
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/fixed-shape.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/flex-shape-error.png b/tmm-run-sample-apps/env-setup/images/flex-shape-error.png
new file mode 100644
index 000000000..a5d72e1b8
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/flex-shape-error.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/invalid-ssh-key.png b/tmm-run-sample-apps/env-setup/images/invalid-ssh-key.png
new file mode 100644
index 000000000..d4080a080
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/invalid-ssh-key.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/main-config-compute-vnc.png b/tmm-run-sample-apps/env-setup/images/main-config-compute-vnc.png
new file mode 100644
index 000000000..d83ed9d76
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/main-config-compute-vnc.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/main-config-compute.png b/tmm-run-sample-apps/env-setup/images/main-config-compute.png
new file mode 100644
index 000000000..b25bbb22b
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/main-config-compute.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/no-e3flex-in-tenant.png b/tmm-run-sample-apps/env-setup/images/no-e3flex-in-tenant.png
new file mode 100644
index 000000000..35e7f726f
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/no-e3flex-in-tenant.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/no-quota.png b/tmm-run-sample-apps/env-setup/images/no-quota.png
new file mode 100644
index 000000000..f95ea2140
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/no-quota.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-deceptive-site-error.png b/tmm-run-sample-apps/env-setup/images/novnc-deceptive-site-error.png
new file mode 100644
index 000000000..41f31f3f4
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-deceptive-site-error.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-1.png b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-1.png
new file mode 100644
index 000000000..25d0b99de
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-1.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-2.png b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-2.png
new file mode 100644
index 000000000..4d95ef9a9
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-3.png b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-3.png
new file mode 100644
index 000000000..e298e6c6d
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-fullscreen-3.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-landing.png b/tmm-run-sample-apps/env-setup/images/novnc-landing.png
new file mode 100644
index 000000000..05711efc8
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-landing.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-launch-get-started.png b/tmm-run-sample-apps/env-setup/images/novnc-launch-get-started.png
new file mode 100644
index 000000000..0b6716c49
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-launch-get-started.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-login-ssh.png b/tmm-run-sample-apps/env-setup/images/novnc-login-ssh.png
new file mode 100644
index 000000000..a56e64c60
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-login-ssh.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/novnc-login.png b/tmm-run-sample-apps/env-setup/images/novnc-login.png
new file mode 100644
index 000000000..4f2923f6a
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/novnc-login.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/paste-ssh.png b/tmm-run-sample-apps/env-setup/images/paste-ssh.png
new file mode 100644
index 000000000..58e1b158d
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/paste-ssh.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/putty-auth.png b/tmm-run-sample-apps/env-setup/images/putty-auth.png
new file mode 100644
index 000000000..1a7eb6cc3
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/putty-auth.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/putty-data.png b/tmm-run-sample-apps/env-setup/images/putty-data.png
new file mode 100644
index 000000000..05cf599ba
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/putty-data.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/putty-ppk.png b/tmm-run-sample-apps/env-setup/images/putty-ppk.png
new file mode 100644
index 000000000..9405eac04
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/putty-ppk.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/putty-session.png b/tmm-run-sample-apps/env-setup/images/putty-session.png
new file mode 100644
index 000000000..1fe39b85f
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/putty-session.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/run-apply.png b/tmm-run-sample-apps/env-setup/images/run-apply.png
new file mode 100644
index 000000000..c3f5b39ac
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/run-apply.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/select-zip.png b/tmm-run-sample-apps/env-setup/images/select-zip.png
new file mode 100644
index 000000000..91aa34abe
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/select-zip.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/ssh-first-time.png b/tmm-run-sample-apps/env-setup/images/ssh-first-time.png
new file mode 100644
index 000000000..d190d6c0b
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/ssh-first-time.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/ssh-login.png b/tmm-run-sample-apps/env-setup/images/ssh-login.png
new file mode 100644
index 000000000..660e104d3
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/ssh-login.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/standardshape-2.png b/tmm-run-sample-apps/env-setup/images/standardshape-2.png
new file mode 100644
index 000000000..a63ef3072
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/standardshape-2.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/standardshape.png b/tmm-run-sample-apps/env-setup/images/standardshape.png
new file mode 100644
index 000000000..a6096e4b5
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/standardshape.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/terraformactions.png b/tmm-run-sample-apps/env-setup/images/terraformactions.png
new file mode 100644
index 000000000..1f6a6b6f6
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/terraformactions.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/use-exisiting-vcn.png b/tmm-run-sample-apps/env-setup/images/use-exisiting-vcn.png
new file mode 100644
index 000000000..36389cca6
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/use-exisiting-vcn.png differ
diff --git a/tmm-run-sample-apps/env-setup/images/zip-file.png b/tmm-run-sample-apps/env-setup/images/zip-file.png
new file mode 100644
index 000000000..640f75214
Binary files /dev/null and b/tmm-run-sample-apps/env-setup/images/zip-file.png differ
diff --git a/tmm-run-sample-apps/env-setup/setup-compute-novnc-ssh.md b/tmm-run-sample-apps/env-setup/setup-compute-novnc-ssh.md
new file mode 100644
index 000000000..9cbdf057f
--- /dev/null
+++ b/tmm-run-sample-apps/env-setup/setup-compute-novnc-ssh.md
@@ -0,0 +1,322 @@
+# Set up compute instance
+
+## Introduction
+This lab will show you how to set up a Resource Manager stack that will generate the Oracle Cloud objects needed to run your workshop.
+
+Estimated Time: 15 minutes
+
+Watch the video below for a walk-through of the Environment Setup lab.
+[Lab walk-through](videohub:1_icfgp61i)
+
+### About Terraform and Oracle Cloud Resource Manager
+For more information about Terraform and Resource Manager, please see the appendix below.
+
+### Objectives
+- Create Compute + Networking using Resource Manager Stack
+- Connect to compute instance
+
+### Prerequisites
+This lab assumes you have:
+- An Oracle Cloud account
+- SSH Keys (optional)
+- At least 4 OCPUs, 24 GB memory, and 128 GB of boot volume storage available in your Oracle Cloud Infrastructure tenancy
+- You have completed:
+ - Lab: Prepare Setup
+
+## Task 1: Create Stack: Choose a Path
+Proceed to deploy your workshop environment using an Oracle Resource Manager (ORM) stack.
+
+Your options are:
+1. Task 1A: Create Stack: **Compute + Networking** *(recommended)*
+2. Task 1B: Create Stack: **Compute Only**
+
+## Task 1A: Create Stack: Compute + Networking
+1. Identify the ORM stack zip file downloaded in *Lab: Prepare Setup*
+2. Log in to Oracle Cloud
+3. Open up the hamburger menu in the top left corner. Click **Developer Services**, and choose **Resource Manager > Stacks**. Choose the compartment in which you would like to install the stack. Click **Create Stack**.
+
+ ![Select Stacks](https://oracle-livelabs.github.io/common/images/console/developer-resmgr-stacks.png " ")
+
+ ![Create Stack](./images/create-stack.png " ")
+
+4. Select **My Configuration**, choose the **.Zip file** button, click the **Browse** link, and select the zip file that you downloaded, or drag and drop it from your file explorer.
+
+ ![Select zip file](./images/select-zip.png " ")
+
+5. Click **Next**.
+
+6. Enter or select the following:
+
+ ![Enter main configurations](./images/main-config-compute-vnc.png " ")
+
+ - **Instance Count:** Accept the default, **1**, unless you intend to create more than one (e.g. for a team)
+ - **Select Availability Domain:** Select an availability domain from the dropdown list.
+ - **Need Remote Access via SSH?** In this step you have 3 options to select from:
+ - **Option (A)** - Keep Unchecked for Remote Desktop only Access - The Default
+ - **Option (B)** - Check *Need Remote Access via SSH?* and keep *Auto Generate SSH Key Pair* unchecked to enable remote access via SSH protocol, then provide the SSH public key(s).
+
+ - **SSH Public Key**: Select from the following two options
+ - *Paste SSH Keys*: Paste the plaintext key strings or
+ - *Choose SSH Key Files*: Drag-n-drop or browse and select valid public keys of *openssh* format from your computer
+
+ ![Paste SSH keys](./images/paste-ssh.png " ")
+
+ ![Choose SSH keys](./images/choose-ssh.png " ")
+
+ >**Notes:**
+    1. This assumes that you already have an RSA-type SSH key pair available on the local system where you will be connecting from. If you don't, or for more information on creating and using SSH keys for your specific platform and client, please refer to the guide [Generate SSH Keys](https://oracle-livelabs.github.io/common/labs/generate-ssh-key)
+    2. If you used the Oracle Cloud Shell to create your key, paste the contents of the *.pub* file into a text editor and remove any hard returns. The key must be a single line or you will not be able to log in to your compute instance
+
+ - **Option (C)** - Check *Need Remote Access via SSH?* and *Auto Generate SSH Key Pair* to have the keys auto-generated for you during provisioning. If you select this option you will be provided with the private key post provisioning.
+
+ ![Auto-generate SSH keys](./images/auto-ssh.png " ")
+
+    Depending on the quota you have in your tenancy, you can choose from standard Compute shapes or Flex shapes. Please visit the *Appendix: Troubleshooting Tips* for instructions on checking your quota.
+
+ - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked.
+ - **Instance Shape:** Select VM.Standard.E4.Flex.
+    - **Instance OCPUS:** Enter **4**. This provisions a VM with 4 OCPUs and 24 GB of memory.
+
+7. For this section, we will provision a new VCN with all the appropriate ingress and egress rules needed to run this workshop. If you already have a VCN, make sure it has all of the correct ingress and egress rules, and skip to the next section.
+ - **Use Existing VCN?:** Accept the default by leaving this unchecked. This will create a **new VCN**.
+
+8. Click **Next**.
+9. Select **Run Apply** and click **Create**.
+
+ ![Run Apply](./images/run-apply.png " ")
+
+10. Your stack has now been created, and the *Apply* action has been triggered to deploy your environment!
+
+ ![Apply is successful](./images/apply-job-success.png " ")
+
+You may now proceed to Task 2 (skip Task 1B).
+
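+If you used Oracle Cloud Shell to create your key, the "remove any hard returns" note above can be sketched as a quick command-line cleanup. This is a hedged illustration: the file name *cloudshellkey.pub* and the key text are stand-ins, not values produced by the stack.
+
+```shell
+# Simulate a public key that picked up hard returns when copied
+# (the key material below is a stand-in, not a real key).
+printf 'ssh-rsa AAAAB3Nza\nC1yc2EAAAADAQABAAABgQC\n user@cloudshell\n' > cloudshellkey.pub
+
+# Strip every newline so the key becomes a single line, which is
+# what the stack's SSH Public Key field expects.
+tr -d '\n' < cloudshellkey.pub > cloudshellkey-oneline.pub
+
+# Zero newlines remain, so the key is safe to paste into the form.
+wc -l < cloudshellkey-oneline.pub
+```
+
+Paste the contents of the one-line file into the *SSH Public Key* field.
+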
+## Task 1B: Create Stack: Compute Only
+If you just completed Task 1A, please proceed to Task 2. If you have an existing VCN and are comfortable updating VCN configurations, please ensure your VCN meets the minimum requirements. Refer to *Lab: Prepare Setup*.
+
+ >**Note:** We recommend letting our stack create the VCN to reduce the potential for errors.
+
+1. Identify the ORM stack zip file downloaded in *Lab: Prepare Setup*
+2. Log in to Oracle Cloud
+3. Open up the hamburger menu in the top left corner. Click **Developer Services**, and choose **Resource Manager > Stacks**. Choose the compartment in which you would like to install the stack. Click **Create Stack**.
+
+ ![Select Stacks](https://oracle-livelabs.github.io/common/images/console/developer-resmgr-stacks.png " ")
+
+ ![Create Stack](./images/create-stack.png " ")
+
+4. Select **My Configuration**, choose the **.Zip file** button, click the **Browse** link, and select the zip file that you downloaded, or drag and drop it from your file explorer.
+
+ ![Choose zip](./images/select-zip.png " ")
+
+ Enter the following information:
+    - **Name**: Enter a name or keep the prefilled default (*DO NOT ENTER ANY SPECIAL CHARACTERS HERE*, including periods, underscores, and exclamation marks; special characters will break the configuration and cause an error during the apply process)
+ - **Description**: Same as above
+ - **Create in compartment**: Select the correct compartment if not already selected
+
+    >**Note:** If this is a newly provisioned tenancy, such as a free tier account with no user-created compartment, stop here and create a compartment before proceeding.
+
+5. Click **Next**.
+
+6. Enter or select the following:
+
+ ![Enter main configurations](./images/main-config-compute.png " ")
+
+ - **Instance Count:** Accept the default, **1**, unless you intend to create more than one (e.g. for a team)
+ - **Select Availability Domain:** Select an availability domain from the dropdown list.
+ - **Need Remote Access via SSH?** In this step you have 3 options to select from:
+ - **Option (A)** - Keep Unchecked for Remote Desktop only Access - The Default
+ - **Option (B)** - Check *Need Remote Access via SSH?* and keep *Auto Generate SSH Key Pair* unchecked to enable remote access via SSH protocol, then provide the SSH public key(s).
+
+ - **SSH Public Key**: Select from the following two options
+ - *Paste SSH Keys*: Paste the plaintext key strings or
+ - *Choose SSH Key Files*: Drag-n-drop or browse and select valid public keys of *openssh* format from your computer
+
+ ![Paste SSH keys](./images/paste-ssh.png " ")
+
+ ![select SSH keys](./images/choose-ssh.png " ")
+
+ >**Notes:**
+    1. This assumes that you already have an RSA-type SSH key pair available on the local system where you will be connecting from. If you don't, or for more information on creating and using SSH keys for your specific platform and client, please refer to the guide [Generate SSH Keys](https://oracle-livelabs.github.io/common/labs/generate-ssh-key)
+    2. If you used the Oracle Cloud Shell to create your key, paste the contents of the *.pub* file into a text editor and remove any hard returns. The key must be a single line or you will not be able to log in to your compute instance
+
+ - **Option (C)** - Check *Need Remote Access via SSH?* and *Auto Generate SSH Key Pair* to have the keys auto-generated for you during provisioning. If you select this option you will be provided with the private key post provisioning.
+
+ ![Auto-generate SSH keys](./images/auto-ssh.png " ")
+
+    Depending on the quota you have in your tenancy, you can choose from standard Compute shapes or Flex shapes. Please visit the *Appendix: Troubleshooting Tips* for instructions on checking your quota.
+
+ - **Use Flexible Instance Shape with Adjustable OCPU Count?:** Keep the default as checked.
+ - **Instance Shape:** Select VM.Standard.E4.Flex.
+    - **Instance OCPUS:** Enter **4**. This provisions a VM with 4 OCPUs and 24 GB of memory.
+
+7. For this section, we will use an existing VCN. Please make sure it has all of the correct ingress and egress rules; otherwise, go back to *Task 1A* and deploy with a self-contained VCN.
+ - **Use Existing VCN?:** Check to select.
+ - **Select Existing VCN:** Select existing VCN with the regional public subnet and required security list.
+
+ >**Note:** For an existing VCN Option to be used successfully, read *Appendix 3* at the bottom of this lab.
+
+ ![Use existing VCN](./images/use-exisiting-vcn.png " ")
+
+8. Select **Run Apply** and click **Create**.
+ ![Click Create](./images/click-create.png " ")
+
+9. Your stack has now been created, and the *Apply* action has been triggered to deploy your environment!
+
+ ![Apply job in progress](./images/apply-in-progress.png " ")
+
+## Task 2: Terraform Apply
+In the prior steps, we elected to trigger the *Terraform apply* action on stack creation.
+
+1. Review the job output.
+
+ ![Job output](./images/apply-job-success.png " ")
+
+2. Congratulations, your environment has been created! Click the **Application Information** tab to get additional information about what you have just done.
+
+3. Your public IP address(es), instance name(s), and remote desktop URL are displayed.
+
+## Task 3: Access the Graphical Remote Desktop
+For ease of execution of this workshop, your VM instance has been pre-configured with a remote graphical desktop accessible using any modern browser on your laptop or workstation. Proceed as detailed below to log in.
+
+1. Navigate to **Stack Details** -> **Application Information** tab, and click the **Remote Desktop** URL.
+
+ ![Click Remote Desktop URL](./images/19c-remote-desktop.png " ")
+
+ ![URL opens](./images/novnc-login-ssh.png " ")
+
+ This should take you directly to your remote desktop in a single click.
+
+ ![Remote desktop displayed](./images/novnc-launch-get-started.png " ")
+
+    >**Note:** While rare, you may see a “*Deceptive Site Ahead*” error or similar, depending on your browser type, as shown below.
+
+    Public IP addresses used for LiveLabs provisioning come from a pool of reusable addresses. This error appears when an address was previously used by a long-terminated compute instance that wasn't properly secured and was flagged. You can safely ignore it and proceed by clicking *Details*, and then *Visit this unsafe site*.
+
+ ![Deceptive error](./images/novnc-deceptive-site-error.png " ")
+
+You may now **proceed to the next lab**.
+
+## Appendix 1: Use Auto-generated SSH Keys to Connect to Your Instance via an SSH Terminal
+
+If you elected to auto-generate the SSH key pair at provisioning, proceed as indicated below.
+
+In this example, we illustrate a connection from a Unix-style terminal such as *MobaXterm* or the macOS terminal. For *PuTTY* on Windows, please refer to the guide [Generate SSH Keys](https://oracle-livelabs.github.io/common/labs/generate-ssh-key) for how to convert the key to the required *.ppk* format.
+
+1. Click *Copy* to get the private key and paste it into a file, e.g. *mykey_rsa*, on the system with an SSH client from which you intend to initiate the connection.
+
+ ![Copy private key](./images/copy-private-key.png " ")
+
+2. Restrict the permissions on the file to *0600*.
+
+ ```text
+
+ chmod 600 mykey_rsa
+
+ ```
+
+ ![Restrict permission to file](./images/chmod.png " ")
+
+3. Connect to your instance using the key and the public IP address of your instance (shown in the stack output).
+
+    ```text
+
+    ssh -i mykey_rsa opc@<your_instance_public_ip>
+
+ ```
+ ![SSH connect to instance](./images/ssh-login.png " ")
+
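+Before connecting, you can sanity-check the key file permissions: SSH refuses to use a private key that is group- or world-readable. A minimal sketch on Linux (the key file here is an empty stand-in created only for illustration):
+
+```shell
+# Create a stand-in private key file for illustration.
+touch mykey_rsa
+chmod 600 mykey_rsa
+
+# Print the octal mode. SSH expects the private key to be readable
+# by the owner only (600); otherwise it aborts with an
+# "UNPROTECTED PRIVATE KEY FILE" warning.
+stat -c '%a' mykey_rsa   # on macOS use: stat -f '%Lp' mykey_rsa
+```
+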
+## Appendix 2: Terraform and Resource Manager
+Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Configuration files describe to Terraform the components needed to run a single application or your entire data center. In this lab, a configuration file has been created for you to build the network and compute components. The compute component you will build creates an instance from an image in Oracle's Cloud Marketplace; this image runs Oracle Linux 7.
+
+Resource Manager is an Oracle Cloud Infrastructure service that allows you to automate the process of provisioning your Oracle Cloud Infrastructure resources. Using Terraform, Resource Manager helps you install, configure, and manage resources through the "infrastructure-as-code" model. To learn more about OCI Resource Manager, watch the video below.
+
+[](youtube:udJdVCz5HYs)
+
+## Appendix 3: Troubleshooting Tips
+If you encounter any issues during the lab, follow the steps below to resolve them. If you are unable to resolve an issue, please go to the **Need Help** lab on the left menu to submit it to our support mailbox.
+- Availability Domain Mismatch
+- Flex Shape Not Found
+- Limits Exceeded
+- Instance shape selection grayed out
+
+### **Issue #1:** Availability Domain Mismatch
+![Availability domain mismatch error](./images/error-ad-mismatch.png " ")
+
+#### Issue #1 Description
+When creating a stack and using an existing VCN, the availability domain and the subnet must match; otherwise, stack creation fails.
+
+#### Fix for Issue #1
+1. Click **Stack**-> **Edit Stack** -> **Configure Variables**.
+2. Scroll down to the network definition.
+3. Make sure the Availability Domain number matches the subnet number. E.g. If you choose AD-1, you must also choose subnet #1.
+4. Click **Next**
+5. Click **Save Changes**
+6. Click **Terraform Actions** -> **Apply**
+
+### **Issue #2:** Flex Shape Not Found
+![flex shape not found error](./images/flex-shape-error.png " ")
+
+#### Issue #2 Description
+When creating a stack, your ability to create an instance is based on the capacity you have available for your tenancy.
+
+#### Fix for Issue #2
+If you have other compute instances you are not using, you can go to those instances and delete them. If you are using them, follow the instructions to check your available usage and adjust your variables.
+1. Click the Hamburger menu on the top left corner, go to **Governance** -> **Limits, Quotas and Usage**
+2. Select **Compute**
+3. These labs use the following compute types. Check your limit, your usage and the amount you have available in each availability domain (click **Scope** to change Availability Domain)
+4. Look for *Cores for Standard.E2 based VM and BM instances*, *Cores for Standard.xx.Flex based VM and BM instances*, and *Cores for Optimized3 based VM and BM instances*
+5. Click the hamburger menu -> **Resource Manager** -> **Stacks**
+6. Click the stack you created previously
+7. Click **Edit Stack** -> **Configure Variables**.
+8. Scroll down to Options
+9. Change the **shape** based on the availability you have in your system
+10. Click **Next**
+11. Click **Save Changes**
+12. Click **Terraform Actions** -> **Apply**
+
+### **Issue #3:** Limits Exceeded
+
+![limits exceeded error](./images/no-quota.png " ")
+
+#### Issue #3 Description
+When creating a stack, your ability to create an instance is based on the capacity you have available for your tenancy.
+
+*Please ensure that you have available cloud credits. Go to **Governance** -> **Limits, Quotas and Usage,** select **compute**, and ensure that you have **more than** the micro tier available. If you have only 2 micro computes, this workshop will NOT run.*
+
+#### Fix for Issue #3
+If you have other compute instances you are not using, you can go to those instances and delete them. If you are using them, follow the instructions to check your available usage and adjust your variables.
+
+1. Click the Hamburger menu, go to **Governance** -> **Limits, Quotas and Usage**
+2. Select **Compute**
+3. These labs use the following compute types. Check your limit, your usage and the amount you have available in each availability domain (click **Scope** to change Availability Domain)
+4. Look for *Cores for Standard.E2 based VM and BM instances*, *Cores for Standard.xx.Flex based VM and BM instances*, and *Cores for Optimized3 based VM and BM instances*
+5. Click the Hamburger menu -> **Resource Manager** -> **Stacks**
+6. Click the stack you created previously
+7. Click **Edit Stack** -> **Configure Variables**.
+8. Scroll down to **Options**
+9. Change the **shape** based on the availability you have in your system
+10. Click **Next**
+11. Click **Save Changes**
+12. Click **Terraform Actions** -> **Apply**
+
+### **Issue #4:** Instance Shape LOV Selection Grayed Out
+
+![Instance Shape LOV Selection Grayed Out Error](./images/no-e3flex-in-tenant.png " ")
+
+#### Issue #4 Description
+When you create a stack and select the option *"Use Flexible Instance Shape with Adjustable OCPU Count"*, the *"Instance Shape"* LOV selection is grayed out, and the following error message is displayed: ***"Specify a value that satisfies the following regular expression: ^VM\.(Standard\.E3\.Flex)$"***.
+
+This issue is an indication that your tenant is not currently configured to use flexible shapes (e3flex).
+
+#### Fix for Issue #4
+Modify your stack to use fixed shapes instead.
+
+1. Uncheck the option *"Use Flexible Instance Shape with Adjustable OCPU Count"* to use a fixed shape instead.
+![Use fixed shapes](./images/standardshape.png " ")
+
+You may now **proceed to the next lab**.
+
+## Acknowledgements
+* **Author** - Rene Fontcha, LiveLabs Platform Lead, NA Technology
+* **Contributors** - Marion Smith, Technical Program Manager, Arabella Yao, Database Product Manager
+* **Last Updated By/Date** - Arabella Yao, Jan 2023
\ No newline at end of file
diff --git a/tmm-run-sample-apps/introduction/introduction.md b/tmm-run-sample-apps/introduction/introduction.md
index 67622d487..0d690f913 100644
--- a/tmm-run-sample-apps/introduction/introduction.md
+++ b/tmm-run-sample-apps/introduction/introduction.md
@@ -2,31 +2,32 @@
## About this Workshop
-The labs in this workshop walk you through all the steps to run sample applications using Oracle® Transaction Manager for Microservices. Using samples is the fastest way for you to get familiar with MicroTx. Each sample application contains multiple microservices to demonstrate how MicroTx manages transactions that span several microservices.
+The labs in this workshop walk you through all the steps to run sample applications using Oracle® Transaction Manager for Microservices (MicroTx). Using samples is the fastest way for you to get familiar with MicroTx. Each sample application contains multiple microservices that demonstrate how MicroTx manages transactions that span several microservices.
-Estimated Workshop Time: *1 hours 20 minutes*
+Estimated Workshop Time: *50 minutes*
### Objectives
In this workshop, you will learn how to:
-* Run a sample application that uses the LRA transaction protocol. Learn how MicroTx manages LRA transactions.
-* Run a sample application that uses the XA transaction protocol. Learn how MicroTx manages XA transactions.
+* Run a Travel Agent application, which uses the LRA transaction protocol, to book a hotel and flight ticket. Learn how MicroTx manages LRA transactions.
+* Run a Transfer application, which uses the XA transaction protocol, to transfer an amount from one department to another. Learn how MicroTx manages XA transactions.
### Prerequisites
This lab assumes you have:
- An Oracle Cloud account
+- At least 4 OCPUs, 24 GB memory, and 128 GB of boot volume storage available in your Oracle Cloud Infrastructure tenancy.
Let's begin! If you need to create an Oracle Cloud account, click **Get Started** in the **Contents** menu on the left. Otherwise, if you have an existing account, click **Lab 1**.
## Task: Learn More
-* [Oracle® Transaction Manager for Microservices Developer Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/index.html)
-* [Oracle® Transaction Manager for Microservices Quick Start Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmqs/index.html)
+* [Oracle® Transaction Manager for Microservices Developer Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/index.html)
+* [Oracle® Transaction Manager for Microservices Quick Start Guide](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmqs/index.html)
## Acknowledgements
-* **Author** - Sylaja Kannan, Principal User Assistance Developer
-* **Contributors** - Brijesh Kumar Deo
-* **Last Updated By/Date** - Sylaja Kannan, October 2022
+* **Author** - Sylaja Kannan, Consulting User Assistance Developer
+* **Contributors** - Brijesh Kumar Deo, Bharath MC
+* **Last Updated By/Date** - Sylaja Kannan, November 2023
diff --git a/tmm-run-sample-apps/prepare-setup/prepare-setup.md b/tmm-run-sample-apps/prepare-setup/prepare-setup.md
index b541f4af9..29290d01a 100644
--- a/tmm-run-sample-apps/prepare-setup/prepare-setup.md
+++ b/tmm-run-sample-apps/prepare-setup/prepare-setup.md
@@ -21,7 +21,7 @@ This lab assumes you have:
## Task 1: Download Oracle Resource Manager (ORM) stack ZIP file
1. Click the following link to download the Resource Manager ZIP file that you need to build your environment.
- - [tmm-mkplc-freetier.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/PszwMj5X-ILvE5_5yNipJvl2qTiqDxcFEjC219McuqtGmycd2vAQhlfaXTX7gfuY/n/natdsecurity/b/stack/o/tmm-mkplc-freetier.zip)
+ - [tmm-mkplc-freetier.zip](https://objectstorage.us-ashburn-1.oraclecloud.com/p/VEKec7t0mGwBkJX92Jn0nMptuXIlEpJ5XJA-A6C9PymRgY2LhKbjWqHeB5rVBbaV/n/c4u04/b/livelabsfiles/o/data-management-library-files/tmm-mkplc-freetier.zip)
2. Save the ZIP file in your downloads folder.
diff --git a/tmm-run-sample-apps/run-lra-app/images/ingress-gateway-ip-address.png b/tmm-run-sample-apps/run-lra-app/images/ingress-gateway-ip-address.png
index 5f92f8a56..bf74d045d 100644
Binary files a/tmm-run-sample-apps/run-lra-app/images/ingress-gateway-ip-address.png and b/tmm-run-sample-apps/run-lra-app/images/ingress-gateway-ip-address.png differ
diff --git a/tmm-run-sample-apps/run-lra-app/images/lra-confirmation.png b/tmm-run-sample-apps/run-lra-app/images/lra-confirmation.png
index e4245058f..d22de9f10 100644
Binary files a/tmm-run-sample-apps/run-lra-app/images/lra-confirmation.png and b/tmm-run-sample-apps/run-lra-app/images/lra-confirmation.png differ
diff --git a/tmm-run-sample-apps/run-lra-app/images/lra-sample-app.png b/tmm-run-sample-apps/run-lra-app/images/lra-sample-app.png
index 915466a4b..b92ad6545 100644
Binary files a/tmm-run-sample-apps/run-lra-app/images/lra-sample-app.png and b/tmm-run-sample-apps/run-lra-app/images/lra-sample-app.png differ
diff --git a/tmm-run-sample-apps/run-lra-app/images/trip-confirmation-json.png b/tmm-run-sample-apps/run-lra-app/images/trip-confirmation-json.png
index 4c9632731..16ce9f5ad 100644
Binary files a/tmm-run-sample-apps/run-lra-app/images/trip-confirmation-json.png and b/tmm-run-sample-apps/run-lra-app/images/trip-confirmation-json.png differ
diff --git a/tmm-run-sample-apps/run-lra-app/run-lra-app.md b/tmm-run-sample-apps/run-lra-app/run-lra-app.md
index 48225a5f1..cd8b8b056 100644
--- a/tmm-run-sample-apps/run-lra-app/run-lra-app.md
+++ b/tmm-run-sample-apps/run-lra-app/run-lra-app.md
@@ -1,23 +1,23 @@
-# Run an LRA Sample Application
+# Run Travel Agent App which Uses LRA
## Introduction
-Run a sample application that uses the Long Running Action (LRA) transaction protocol to book a trip and understand how you can use Transaction Manager for Microservices (MicroTx) to coordinate the transactions. Using samples is the fastest way for you to get familiar with MicroTx.
-The sample application code is available in the MicroTx distribution. The MicroTx library files are already integrated with the sample application code.
+Run a Travel Agent application that uses the Long Running Action (LRA) transaction protocol to book a trip and understand how you can use Oracle Transaction Manager for Microservices (MicroTx) to coordinate the transactions. Using samples is the fastest way for you to get familiar with MicroTx.
+Code for the Travel Agent application is available in the MicroTx distribution. The MicroTx library files are already integrated with the application code.
Estimated Time: *10 minutes*
Watch the video below for a quick walk-through of the lab.
-[Run an LRA Sample Application](videohub:1_0g2khxyc)
+[Run the Travel Agent App Using LRA](videohub:1_0g2khxyc)
-### About LRA Sample Application
+### About the Travel Agent Application
-The sample application demonstrates how you can develop microservices that participate in LRA transactions while using MicroTx to coordinate the transactions. When you run the application, it makes a provisional booking by reserving a hotel room and a flight ticket. Only when you provide approval to confirm the provisional booking, the booking of the hotel room and flight ticket is confirmed. If you cancel the provisional booking, the hotel room and flight ticket that was blocked is released and the booking is canceled. The flight service in this example allows only two confirmed bookings by default. To test the failure scenario, the flight service sample app rejects any additional booking requests after two confirmed bookings. This leads to the cancellation (compensation) of a provisionally booked hotel within the trip and the trip is not booked.
+The Travel Agent application demonstrates how you can develop microservices that participate in LRA transactions while using MicroTx to coordinate the transactions. When you run the application, it makes a provisional booking by reserving a hotel room and a flight ticket. The booking of the hotel room and flight ticket is confirmed only after you approve the provisional booking. If you cancel the provisional booking, the hotel room and flight ticket that were blocked are released and the booking is canceled. The flight service in this example allows only two confirmed bookings by default. To test the failure scenario, the flight service rejects any additional booking requests after two confirmed bookings. This leads to the cancellation (compensation) of the provisionally booked hotel within the trip, and the trip is not booked.
The following figure shows a sample LRA application, which contains several microservices, to demonstrate how you can develop microservices that participate in LRA transactions.
![Microservices in sample LRA application](./images/lra-sample-app.png)
-For more details, see [About the Sample LRA Application](https://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/set-sample-applications.html#GUID-C5332159-BD13-4210-A02E-475107919FD9) in the *Transaction Manager for Microservices Developer Guide*.
+For more details, see [About the Sample LRA Application](https://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/set-sample-applications.html#GUID-C5332159-BD13-4210-A02E-475107919FD9) in the *Transaction Manager for Microservices Developer Guide*.
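The confirm/compensate flow described above can be sketched as a small simulation. This is a conceptual illustration only, not the MicroTx LRA API or the sample application's actual code; all class and method names are hypothetical.

```python
# Conceptual sketch of the LRA confirm/compensate flow; hypothetical names.

class FlightService:
    def __init__(self, max_confirmed=2):  # sample app allows 2 confirmed bookings
        self.max_confirmed = max_confirmed
        self.confirmed = 0

    def book_provisional(self):
        return {"type": "flight", "status": "provisional"}

    def confirm(self, booking):
        if self.confirmed >= self.max_confirmed:
            raise RuntimeError("flight booking rejected")  # triggers compensation
        self.confirmed += 1
        booking["status"] = "confirmed"

    def compensate(self, booking):
        booking["status"] = "cancelled"  # release the blocked seat


class HotelService:
    def book_provisional(self):
        return {"type": "hotel", "status": "provisional"}

    def confirm(self, booking):
        booking["status"] = "confirmed"

    def compensate(self, booking):
        booking["status"] = "cancelled"  # release the blocked room


def book_trip(hotel, flight, approve=True):
    """Coordinator role: confirm every participant, or compensate all of them."""
    services = [hotel, flight]
    bookings = [svc.book_provisional() for svc in services]
    if not approve:
        for svc, b in zip(services, bookings):
            svc.compensate(b)
        return "cancelled", bookings
    try:
        for svc, b in zip(services, bookings):
            svc.confirm(b)
        return "confirmed", bookings
    except RuntimeError:
        # One participant failed: compensate (undo) all provisional work.
        for svc, b in zip(services, bookings):
            svc.compensate(b)
        return "cancelled", bookings


flight, hotel = FlightService(), HotelService()
print(book_trip(hotel, flight)[0])  # trip 1: confirmed
print(book_trip(hotel, flight)[0])  # trip 2: confirmed
print(book_trip(hotel, flight)[0])  # trip 3: flight rejects, hotel is compensated
```

The third trip reproduces the failure scenario in the text: the flight service rejects the booking, so the provisionally booked hotel room is compensated and the trip is not booked.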
### Objectives
@@ -25,10 +25,10 @@ In this lab, you will:
* Configure Minikube
* Start a tunnel between Minikube and MicroTx
-* Deploy Kiali and Jaeger in your minikube cluster (Optional)
-* Run the LRA sample application
-* View service graph of the mesh and distributed traces to track requests (Optional)
-* View source code of the sample application (Optional)
+* Deploy Kiali and Jaeger in your Minikube cluster (optional)
+* Run the Travel Agent application
+* View service graph of the mesh and distributed traces to track requests (optional)
+* View source code of the Travel Agent application (optional)
### Prerequisites
@@ -49,7 +49,7 @@ This lab assumes you have:
## Task 1: Configure Minikube
-Follow the instructions in this section to configure Minikube, and then run a sample application.
+Follow the instructions in this section to configure Minikube, and then run the Travel Agent application.
1. Click **Activities** in the remote desktop window to open a new terminal.
@@ -133,11 +133,11 @@ Before you start a transaction, you must start a tunnel between Minikube and Mic
```
-## Task 3: Deploy Kiali and Jaeger in the cluster (Optional)
-This optional task lets you deploy Kiali and Jaeger in the minikube cluster to view the service mesh graph and enable distributed tracing.
-Distributed tracing enables tracking a request through service mesh that is distributed across multiple services. This allows a deeper understanding about request latency, serialization and parallelism via visualization.
-You will be able to visualize the service mesh and the distributed traces after you have run the sample application in the following task.
-The following commands can be executed to deploy Kiali and Jaeger. Kiali requires prometheus which should also be deployed in the cluster.
+## Task 3: Deploy Kiali and Jaeger in the Cluster (Optional)
+
+Use distributed tracing to understand how requests flow between MicroTx and the microservices. Use tools, such as Kiali and Jaeger, to track and trace distributed transactions in MicroTx. Kiali requires Prometheus, so deploy Prometheus in the same cluster.
+
+Run the following commands to deploy Kiali and Jaeger.
1. Deploy Kiali.
@@ -160,43 +160,43 @@ The following commands can be executed to deploy Kiali and Jaeger. Kiali require
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/jaeger.yaml
```
-4. Start Kiali Dashboard. Open a new tab in the terminal window and execute the following command. Leave the terminal running. A browser window may pop up as well. Close the browser window.
+4. Start Kiali Dashboard. Open a new tab in the terminal window and then run the following command. Leave the terminal running. If a new browser window appears, close the browser window.
```text
istioctl dashboard kiali
```
- An output will show a URL on which you can access the kiali dashboard in a browser tab:
- http://localhost:20001/kiali
-5. Start Jaeger Dashboard. Open a new tab in the terminal window and execute the following command. Leave the terminal running. A browser window may pop up as well. Close the browser window.
+    A URL is displayed. Open the URL in a new tab in your browser to access the Kiali dashboard. For example, `http://localhost:20001/kiali`.
+
+5. Start Jaeger Dashboard. Open a new tab in the terminal window and then run the following command. Leave the terminal running. If a new browser window appears, close the browser window.
```text
istioctl dashboard jaeger
```
- An output will show a URL on which you can access the jaeger dashboard in a browser tab:
- http://localhost:16686
-## Task 4: Run the LRA sample application
+ A URL is displayed. Open the URL in a new tab in your browser to access the Jaeger dashboard. For example, `http://localhost:16686`.
+
+## Task 4: Run the Travel Agent Application
-Run the sample LRA application to book a hotel room and flight ticket.
+Run the Travel Agent application to book a hotel room and flight ticket.
1. Run the Trip Client application.
```text
- cd /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo/trip-client
+ cd /home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo/trip-client
java -jar target/trip-client.jar
```
The Trip Booking Service console is displayed.
-2. Type **y** to confirm that you want to run the LRA sample application, and then press Enter.
-The sample application provisionally books a hotel room and a flight ticket and displays the details of the provisional booking.
+2. Type **y** to confirm that you want to run the Travel Agent application, and then press Enter.
+The Travel Agent application provisionally books a hotel room and a flight ticket and displays the details of the provisional booking.
3. Type **y** to confirm the provisional booking, and then press Enter.
@@ -233,38 +233,40 @@ The sample application provisionally books a hotel room and a flight ticket and
```
-## Task 5: View Service Mesh graph and Distributed Traces (Optional)
-You can perform this task only if you have performed Task 3.
-To visualize what happens behind the scenes and how a trip booking request is processed by the distributed services, you can use the Kiali and Jaeger Dashboards that you started in Task 3.
-1. Open a new browser tab and navigate to the Kiali dashboard URL - http://localhost:20001/kiali
+## Task 5: View Service Mesh Graph and Distributed Traces (Optional)
-2. Select Graph for the otmm namespace.
+You can perform this task only if you have performed Task 3. To visualize what happens behind the scenes and how a trip booking request is processed by the distributed services, you can use the Kiali and Jaeger dashboards that you started in Task 3.
+
+1. Open a new browser tab and navigate to the Kiali dashboard URL. For example, `http://localhost:20001/kiali`.
+
+2. Select **Graph** for the `otmm` namespace.
![Kiali Dashboard](images/kiali-dashboard-lra.png)
-3. Open a new browser tab and navigate to the Jaeger dashboard URL - http://localhost:16686
-4. Select istio-ingressgateway.istio-system from the Service list. You can see the list of traces with each trace representing a request.
+3. Open a new browser tab and navigate to the Jaeger dashboard URL. For example, `http://localhost:16686`.
+4. From the **Service** drop-down list, select **istio-ingressgateway.istio-system**.
+5. Click **Find Traces**. You can see the list of traces with each trace representing a request.
![Jaeger Traces List](images/jaeger-traces-list.png)
-5. Select one of the traces to view.
+6. Select one of the traces to view.
![Jaeger Trace for Confirmation Step](images/jaeger-trace-confirm-cancel.png)
-## Task 6: View source code of the sample application (Optional)
-The source code of the sample application is present in folder: /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo
-- Trip Service Source code: /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo/trip-manager
-- Hotel Service Source code: /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo/hotel
-- Flight Service Source code: /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo/flight
-- Trip Client Source code: /home/oracle/OTMM/otmm-22.3/samples/lra/lrademo/trip-client
+## Task 6: View Source Code of the Travel Agent Application (Optional)
-You can use the VIM editor to view the source code files. You can also use the Text Editor application to view the source code files. To bring up the Text Editor, click on Activities (top left) -> Show Applications -> Text Editor. Inside Text Editor, select Open a File and browse to the source code files in the folders shown above.
+The source code of the Travel Agent application is available in the `/home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo` folder.
+- Trip Service source code: `/home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo/trip-manager`
+- Hotel Service source code: `/home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo/hotel`
+- Flight Service source code: `/home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo/flight`
+- Trip Client source code: `/home/oracle/OTMM/otmm-23.4.1/samples/lra/lrademo/trip-client`
+You can use the VIM editor or the Text Editor application to view the source code files. To open the Text Editor, click **Activities** (top left) > **Show Applications** > **Text Editor**. In the Text Editor, select **Open a File** and browse to the source code files in the folders shown above.
You may now **proceed to the next lab** to run a sample XA application. If you do not want to proceed further and would like to finish the LiveLabs and clean up the resources, then complete **Lab 6: Environment Clean Up**.
## Learn More
-* [Develop Applications with LRA](https://doc.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/develop-lra-applications.html#GUID-63827BB6-7993-40B5-A753-AC42DE97F6F4)
+* [Develop Applications with LRA](https://doc.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/develop-lra-applications.html#GUID-63827BB6-7993-40B5-A753-AC42DE97F6F4)
## Acknowledgements
-* **Author** - Sylaja Kannan, Principal User Assistance Developer
-* **Contributors** - Brijesh Kumar Deo
-* **Last Updated By/Date** - Sylaja, January 2023
+* **Author** - Sylaja Kannan, Consulting User Assistance Developer
+* **Contributors** - Brijesh Kumar Deo and Bharath MC
+* **Last Updated By/Date** - Sylaja, November 2023
diff --git a/tmm-run-sample-apps/run-xa-app/images/app-deployed.png b/tmm-run-sample-apps/run-xa-app/images/app-deployed.png
new file mode 100644
index 000000000..8a4aa1be5
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/app-deployed.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/database-service.png b/tmm-run-sample-apps/run-xa-app/images/database-service.png
new file mode 100644
index 000000000..9ebe89376
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/database-service.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/get-pods-status.png b/tmm-run-sample-apps/run-xa-app/images/get-pods-status.png
new file mode 100644
index 000000000..632601867
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/get-pods-status.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/ingress-gateway-ip-address.png b/tmm-run-sample-apps/run-xa-app/images/ingress-gateway-ip-address.png
index 5f92f8a56..1c0d69eb1 100644
Binary files a/tmm-run-sample-apps/run-xa-app/images/ingress-gateway-ip-address.png and b/tmm-run-sample-apps/run-xa-app/images/ingress-gateway-ip-address.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/list-pods.png b/tmm-run-sample-apps/run-xa-app/images/list-pods.png
new file mode 100644
index 000000000..a449c9c46
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/list-pods.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/minikube-start-error.png b/tmm-run-sample-apps/run-xa-app/images/minikube-start-error.png
new file mode 100644
index 000000000..996c9dbc3
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/minikube-start-error.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/images/pod-status.png b/tmm-run-sample-apps/run-xa-app/images/pod-status.png
new file mode 100644
index 000000000..881cad164
Binary files /dev/null and b/tmm-run-sample-apps/run-xa-app/images/pod-status.png differ
diff --git a/tmm-run-sample-apps/run-xa-app/run-xa-app.md b/tmm-run-sample-apps/run-xa-app/run-xa-app.md
index d2cc39fbe..bb4da9b34 100644
--- a/tmm-run-sample-apps/run-xa-app/run-xa-app.md
+++ b/tmm-run-sample-apps/run-xa-app/run-xa-app.md
@@ -1,36 +1,32 @@
-# Run an XA sample application
+# Run Transfer App which Uses XA
## Introduction
-Run the XA sample application to transfer an amount from one department to another and to understand how you can use Transaction Manager for Microservices (MicroTx) to coordinate XA transactions.
+Run the Transfer application, which uses the XA transaction protocol, to transfer an amount from one department to another. Run this application to understand how you can use Transaction Manager for Microservices (MicroTx) to coordinate XA transactions.
-The sample application code is available in the MicroTx distribution. The MicroTx library files are already integrated with the sample application code.
-
-Estimated Lab Time: *20 minutes*
+Estimated Lab Time: *10 minutes*
Watch the video below for a quick walk-through of the lab.
-[Run an LRA Sample Application](videohub:1_ta8uv36s)
+[Run the Transfer Application](videohub:1_ta8uv36s)
-### About XA Sample Application
+### About the Transfer Application
-The following figure shows a sample XA application, which contains several microservices.
-![Microservices in the XA sample applications](./images/xa-sample-app-simple.png)
+The following figure shows the various microservices that are available in the Transfer application.
+![Microservices in the Transfer Application](./images/xa-sample-app-simple.png)
-The sample application demonstrates how you can develop microservices that participate in XA transactions while using MicroTx to coordinate the transactions. When you run the Teller application, it withdraws money from one department and deposits it to another department by creating an XA transaction. Within the XA transaction, all actions such as withdraw and deposit either succeed, or they all are rolled back in case of a failure of any one or more actions.
+The Transfer application demonstrates how you can develop microservices that participate in XA transactions while using MicroTx to coordinate the transactions. When you run the Teller application, it withdraws money from one department and deposits it in another department by creating an XA transaction. Within the XA transaction, all actions, such as withdraw and deposit, either succeed or are all rolled back if any one of them fails.
-For more details, see [About the Sample XA Application](https://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/set-sample-applications.html#GUID-A181E2F7-00B4-421F-9EF9-DB8BF76DD53F) in the *Transaction Manager for Microservices Developer Guide*.
+For more details, see [About the Transfer Application](https://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/set-sample-applications.html#GUID-A181E2F7-00B4-421F-9EF9-DB8BF76DD53F) in the *Transaction Manager for Microservices Developer Guide*.
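The all-or-nothing behavior described above can be sketched as a tiny simulation. This is a conceptual illustration of XA semantics, not the MicroTx API or the sample application's actual code; the names are hypothetical, and the balance snapshot stands in for the resource managers' prepare/rollback work.

```python
# Conceptual sketch of the all-or-nothing XA transfer; hypothetical names.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount


def transfer(dept_a, dept_b, amount):
    """Teller role: withdraw from Department A and deposit to Department B atomically."""
    snapshot = (dept_a.balance, dept_b.balance)  # stand-in for XA prepare/rollback
    try:
        dept_a.withdraw(amount)
        dept_b.deposit(amount)
        return True   # both actions succeeded: commit
    except ValueError:
        dept_a.balance, dept_b.balance = snapshot  # roll back all actions
        return False


a = Account(1000)       # e.g. an account in Department 1
b = Account(2000)       # e.g. an account in Department 2
transfer(a, b, 500)
print(a.balance, b.balance)   # 500 2500
transfer(a, b, 9999)          # withdraw fails, so everything is rolled back
print(a.balance, b.balance)   # 500 2500
```

The failed second transfer leaves both balances unchanged, which is the guarantee the Teller application relies on MicroTx to provide across the two departments' databases.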
### Objectives
In this lab, you will:
-* Build container images for each microservice from the XA sample application code. After building the container images, the images are available in your Minikube container registry.
-* Update the `values.yaml` file, which contains the deployment configuration details for the XA sample application.
-* Install the Sample XA Application. While installing the sample application, Helm uses the configuration details you provide in the `values.yaml` file.
+* Start Minikube. When you start Minikube, the Transfer application is deployed and the database instances are created and populated with sample data.
* Deploy Kiali and Jaeger in your minikube cluster (Optional and if not already deployed)
-* Run an XA transaction to withdraw an amount from Department A and deposit it in Department B.
+* Run the Transfer application to start an XA transaction to withdraw an amount from Department A and deposit it in Department B.
* View service graph of the mesh and distributed traces to track requests (Optional)
-* View source code of the sample application (Optional)
+* View source code of the Transfer application (Optional)
### Prerequisites
@@ -41,7 +37,6 @@ This lab assumes you have:
* Get Started
* Lab 1: Prepare setup
* Lab 2: Environment setup
- * Lab 4: Provision an Oracle Autonomous Database for use as resource manager
* Logged in using remote desktop URL as an `oracle` user. If you have connected to your instance as an `opc` user through an SSH terminal using auto-generated SSH Keys, then you must switch to the `oracle` user before proceeding with the next step.
```text
@@ -50,167 +45,80 @@ This lab assumes you have:
```
-## Task 1: Build Container Images for Sample XA Applications
-
-The code for the XA sample application is available in the installation bundle in the `/home/oracle/OTMM/otmm-22.3/samples/xa/java` folder. Build container images for each microservice in the XA sample application.
+## Task 1: Start Minikube
-To build container images for each microservice in the sample:
+Code for the Transfer application is available in the MicroTx distribution. The MicroTx library files are already integrated with the application code. Container images for each microservice in the application are already built and available in your Minikube container registry. The `values.yaml` file is available in the `/home/oracle/OTMM/otmm-23.4.1/samples/xa/java/helmcharts/transfer` folder. This is the manifest file, which contains the deployment configuration details for the application.
-1. Run the following commands to build the container image for the Teller application.
+When you start Minikube, an instance of Oracle Database 23c Free, with two PDBs, is deployed on Minikube. See [Oracle Database Free](https://www.oracle.com/database/free/get-started). The Department 1 microservice, which is developed using the Helidon framework, uses one PDB (`FREEPDB1`) as its resource manager. The Department 2 microservice, which is developed using the Spring Boot framework, uses another PDB (`FREEPDB2`) as its resource manager. Each PDB contains an `accounts` table with `account_id` as the primary key. The `accounts` table is populated with the following sample data. The `values.yaml` file also contains the details to access the resource managers.
- ```text
-
- cd /home/oracle/OTMM/otmm-22.3/samples/xa/java/teller
-
- ```
-
- ```text
-
- minikube image build -t xa-java-teller:1.0 .
- ```
+| Account_ID | Amount |
+| ----------- | --------- |
+| account5 | 5000 |
+| account4 | 4000 |
+| account3 | 3000 |
+| account2 | 2000 |
+| account1 | 1000 |
+{: title="Amount in the various accounts"}
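The sample data above can be pictured as a simple mapping keyed by `account_id`, the primary key. This is purely illustrative, assuming only the balances shown in the table:

```python
# Sample data from the accounts table; account_id is the primary key.
accounts = {
    "account1": 1000,
    "account2": 2000,
    "account3": 3000,
    "account4": 4000,
    "account5": 5000,
}

# Primary-key lookup, analogous to:
#   SELECT amount FROM accounts WHERE account_id = 'account3'
print(accounts["account3"])         # 3000

# An XA transfer moves money between accounts but never changes the total.
print(sum(accounts.values()))       # 15000
```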
- When the image is successfully built, the following message is displayed.
+When you start Minikube, the Transfer application is deployed and the database instances are created and populated with sample data.
- **Successfully tagged xa-java-teller:1.0**
+Follow the instructions in this section to start Minikube, and then verify that all the resources are ready.
-2. Run the following commands to build the Docker image for the Department 1 application.
+1. Click **Activities** in the remote desktop window to open a new terminal.
- ```text
-
- cd /home/oracle/OTMM/otmm-22.3/samples/xa/java/department-helidon
-
- ```
+2. Run the following command to start Minikube.
```text
- minikube image build -t department-helidon:1.0 .
+ minikube start
```
- When the image is successfully built, the following message is displayed.
+    In rare situations, you may see the error message shown below. This message indicates that the stack resources have not been successfully provisioned. In such cases, complete **Lab 6: Environment Clean Up** to delete the stack and clean up the resources. Then perform the steps in Lab 2 to re-create the stack.
- **Successfully tagged department-helidon:1.0**
+ ![minikube start error](./images/minikube-start-error.png)
-3. Run the following commands to build the Docker image for the Department 2 application.
-
- ```text
-
- cd /home/oracle/OTMM/otmm-22.3/samples/xa/java/department-spring
-
- ```
+3. Verify that the application has been deployed successfully.
```text
- minikube image build -t department-spring:1.0 .
+ helm list -n otmm
```
- When the image is successfully built, the following message is displayed.
-
- **Successfully tagged department-spring:1.0**
-
-The container images that you have created are available in your Minikube container registry.
-
-## Task 2: Update the values.yaml File
-
-The sample application files also contain the `values.yaml` file. This is the manifest file, which contains the deployment configuration details for the XA sample application.
-
-In the `values.yaml` file, specify the image to pull, the credentials to use when pulling the images, and details to access the resource managers. While installing the sample application, Helm uses the values you provide to pull the sample application images from the Minikube container registry.
-
-To provide the configuration and environment details in the `values.yaml` file:
-
-1. Open the values.yaml file, which is in the `/home/oracle/OTMM/otmm-22.3/samples/xa/java/helmcharts/transfer` folder, in any code editor. This file contains sample values. Replace these sample values with values that are specific to your environment.
-
-2. Provide the details of the ATP database instances, that you have created, in the `values.yaml` file, so that the Department A and Department B sample microservices can access the resource manager.
-
- * `connectString`: Enter the connect string to access the database in the following format. The host, port and service_name for the connection string can be found on the DB Connection Tab under Connection Strings as shown in screenshot below.
-
- **Syntax**
-
- ```text
-
- jdbc:oracle:thin:@tcps://:/?retry_count=20&retry_delay=3&wallet_location=Database_Wallet
-
- ```
-
- * `databaseUser`: Enter the user name to access the database, such as ADMIN. Use ADMIN if you created the tables and inserted sample data in the previous Lab.
- * `databasePassword`: Enter the password to access the database for the specific user. Use ADMIN user password if you created the tables and inserted sample data in the previous Lab.
- * `resourceManagerId`: A unique identifier (uuid) to identify a resource manager. Enter a random value for this lab as shown below.
-
- The `values.yaml` file contains many properties. For readability, only the resource manager properties for which you must provide values are listed in the following sample code snippet.
-
- ```text
-
- dept1:
- ...
- connectString: "jdbc:oracle:thin:@tcps://adb.us-ashburn-1.oraclecloud.com:1522/bbcldfxbtjvtddi_tmmwsdb3_tp.adb.oraclecloud.com?retry_count=20&retry_delay=3&wallet_location=Database_Wallet"
- databaseUser: db_user
- databasePassword: db_user_password
- resourceManagerId: 77e75891-27f4-49cf-a488-7e6fece865b7
- dept2:
- ...
- connectString: "jdbc:oracle:thin:@tcps://adb.us-ashburn-1.oraclecloud.com:1522/bdcldfxbtjvtddi_tmmwsdb4_tp.adb.oraclecloud.com?retry_count=20&retry_delay=3&wallet_location=Database_Wallet"
- databaseUser: db_user
- databasePassword: db_user_password
- resourceManagerId: 17ff43bb-6a4d-4833-a189-56ef023158d3
-
- ```
+    In the output, verify that the `STATUS` of the `sample-xa-app` is `deployed`.
- ![DB connection string](./images/db-connection-string.png)
-
-3. Save your changes.
-
-## Task 3: Install the Sample XA Application
+ **Example output**
-Install the XA sample application in the `otmm` namespace, where you have installed MicroTx. While installing the sample application, Helm uses the configuration details you provide in the values.yaml file.
+ ![Helm install success](./images/list-pods.png)
-1. Run the following commands to install the XA sample application.
+4. Verify that all resources, such as pods and services, are ready. Run the following command to retrieve the list of resources in the namespace `otmm` and their status.
```text
- cd /home/oracle/OTMM/otmm-22.3/samples/xa/java/helmcharts
+ kubectl get pods -n otmm
```
- ```text
-
- helm install sample-xa-app --namespace otmm transfer/ --values transfer/values.yaml
-
- ```
+ **Example output**
- Where, `sample-xa-app` is the name of the application that you want to install. You can provide another name to the installed application.
+ ![Status of pods in the otmm namespace](./images/pod-status.png)
-2. Verify that the application has been deployed successfully.
+5. Verify that the database instance is running. The database instance is available in the `oracledb` namespace. Run the following command to retrieve the list of resources in the namespace `oracledb` and their status.
```text
- helm list -n otmm
+ kubectl get pods -n oracledb
```
- In the output, verify that the `STATUS` of the `sample-xa-app` is `deployed.
-
**Example output**
- ![Helm install success](./images/helm-install-deployed.png)
+ ![Database instance details](./images/database-service.png)
-3. If you need to make any changes in the `values.yaml` file, then uninstall `sample-xa-app`. Update the `values.yaml` file, and then reinstall the `sample-xa-app`. Perform step 1 as described in this task again to reinstall `sample-xa-app`. and install it again by perform step 1. Otherwise, skip this step and go to the next step.
+It usually takes some time for the database services to start running in the Minikube environment. Proceed with the remaining tasks only after ensuring that all the resources, including the database service, are in the `Running` status and that the value of the **READY** field is `1/1`.
- ```text
-
- helm uninstall sample-xa-app --namespace otmm
-
- ```
-
-4. Verify that all resources, such as pods and services, are ready. Proceed to the next step only when all resources are ready. Run the following command to retrieve the list of resources in the namespace `otmm` and their status.
-
- ```text
-
- kubectl get all -n otmm
-
- ```
-
-## Task 4: Start a Tunnel
+## Task 2: Start a Minikube Tunnel
Before you start a transaction, you must start a Minikube tunnel.
@@ -250,12 +158,13 @@ Before you start a transaction, you must start a Minikube tunnel.
Note that, if you don't do this, then you must explicitly specify the IP address in the commands when required.
-## Task 5: Deploy Kiali and Jaeger in the cluster (Optional)
-**You can skip this task if you have already deployed Kiali and Jaeger in your cluster while performing Lab 3. However, ensure you have started Kiali and Jaeger dashboards as shown in steps 4 and 5.**
-This optional task lets you deploy Kiali and Jaeger in the minikube cluster to view the service mesh graph and enable distributed tracing.
-Distributed tracing enables tracking a request through service mesh that is distributed across multiple services. This allows a deeper understanding about request latency, serialization and parallelism via visualization.
-You will be able to visualize the service mesh and the distributed traces after you have run the sample application in the following task.
-The following commands can be executed to deploy Kiali and Jaeger. Kiali requires prometheus which should also be deployed in the cluster.
+## Task 3: Deploy Kiali and Jaeger in the cluster (Optional)
+
+**Skip this task if you have already deployed Kiali and Jaeger in your cluster while performing Lab 3. However, ensure you have started Kiali and Jaeger dashboards as shown in steps 4 and 5.**
+
+Use distributed tracing to understand how requests flow between MicroTx and the microservices. Use tools, such as Kiali and Jaeger, to track and trace distributed transactions in MicroTx. Kiali requires Prometheus, so deploy Prometheus in the same cluster.
+
+Run the following commands to deploy Kiali and Jaeger.
1. Deploy Kiali.
@@ -264,6 +173,7 @@ The following commands can be executed to deploy Kiali and Jaeger. Kiali require
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/kiali.yaml
```
+
2. Deploy Prometheus.
```text
@@ -271,6 +181,7 @@ The following commands can be executed to deploy Kiali and Jaeger. Kiali require
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/prometheus.yaml
```
+
3. Deploy Jaeger.
```text
@@ -278,31 +189,32 @@ The following commands can be executed to deploy Kiali and Jaeger. Kiali require
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/jaeger.yaml
```
-4. Start Kiali Dashboard. Open a new tab in the terminal window and execute the following command. Leave the terminal running. A browser window may pop up as well. Close the browser window.
+
+4. Start the Kiali dashboard. Open a new tab in the terminal window and then run the following command. Leave the terminal running. If a new browser window appears, close the browser window.
```text
istioctl dashboard kiali
```
- An output will show a URL on which you can access the kiali dashboard in a browser tab:
- http://localhost:20001/kiali
-5. Start Jaeger Dashboard. Open a new tab in the terminal window and execute the following command. Leave the terminal running. A browser window may pop up as well. Close the browser window.
+ A URL is displayed. Open the URL in a new tab in your browser to access the Kiali dashboard. For example, `http://localhost:20001/kiali`.
+
+5. Start the Jaeger dashboard. Open a new tab in the terminal window and then run the following command. Leave the terminal running. If a new browser window appears, close the browser window.
```text
istioctl dashboard jaeger
```
- An output will show a URL on which you can access the jaeger dashboard in a browser tab:
- http://localhost:16686
-## Task 6: Run an XA Transaction
+ A URL is displayed. Open the URL in a new tab in your browser to access the Jaeger dashboard. For example, `http://localhost:16686`.
+
+## Task 4: Run the Transfer Application
-Run an XA transaction When you run the Teller application, it withdraws money from one department and deposits it to another department by creating an XA transaction. Within the XA transaction, all actions such as withdraw and deposit either succeed, or they all are rolled back in case of a failure of any one or more actions.
+When you run the Transfer application, it starts an XA transaction. The Teller application acts as the transaction initiator service: it starts the XA transaction, withdraws money from Department 1, and deposits it in Department 2. Within the XA transaction, all actions, such as withdraw and deposit, either succeed together, or they are all rolled back if any one of them fails.
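The all-or-nothing behavior described above can be sketched in plain Python. This is illustrative only: in the lab, the coordination is performed by Transaction Manager for Microservices across separate services and resource managers, and the account names below are made up.

```python
# Illustrative sketch (plain Python, not MicroTx) of the all-or-nothing
# guarantee an XA transaction provides: either every action commits,
# or every completed action is rolled back.

def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)          # remember state for rollback
    try:
        if accounts[src] < amount:
            raise ValueError("insufficient funds")
        accounts[src] -= amount        # withdraw from one department
        accounts[dst] += amount        # deposit into the other
    except Exception:
        accounts.clear()
        accounts.update(snapshot)      # roll back all completed actions
        raise

accounts = {"department1": 4000, "department2": 2000}
transfer(accounts, "department1", "department2", 50)
print(accounts)                        # both updates applied together

try:
    transfer(accounts, "department1", "no-such-account", 50)
except KeyError:
    pass
print(accounts)                        # the withdraw was rolled back
```

Note that the failed second transfer leaves both balances exactly as they were, even though the withdraw step had already executed before the deposit failed.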
-1. Before you start the transaction, run the following commands to check the balance in Department 1 and Department 2 accounts.
+1. Before you start the transaction, run the following commands to check the balance in the Department 1 and Department 2 accounts.
**Example command to check balance in Department 1**
@@ -380,31 +292,35 @@ Run an XA transaction When you run the Teller application, it withdraws money fr
--request GET http://$CLUSTER_IPADDR/dept1/account1 | jq
```
-## Task 7: View Service Mesh graph and Distributed Traces (Optional)
-You can perform this task only if you have performed Task 5 or have Kiali and Jaeger deployed in your cluster.
-To visualize what happens behind the scenes and how a trip booking request is processed by the distributed services, you can use the Kiali and Jaeger Dashboards that you started in Task 3.
-1. Open a new browser tab and navigate to the Kiali dashboard URL - http://localhost:20001/kiali
-2. Select Graph for the otmm namespace.
-3. Open a new browser tab and navigate to the Jaeger dashboard URL - http://localhost:16686
-4. Select istio-ingressgateway.istio-system from the Service list. You can see the list of traces with each trace representing a request.
-5. Select one of the traces to view.
-
-## Task 8: View source code of the sample application (Optional)
-The source code of the sample application is present in folder: /home/oracle/OTMM/otmm-22.3/samples/xa/java
-- Teller Service Source code: /home/oracle/OTMM/otmm-22.3/samples/xa/java/teller
-- Department 1 Service Source code: /home/oracle/OTMM/otmm-22.3/samples/xa/java/department-helidon
-- Department 2 Service Source code: /home/oracle/OTMM/otmm-22.3/samples/xa/java/department-spring
+
+## Task 5: View the Service Mesh Graph and Distributed Traces (Optional)
+
+You can perform this task only if you have performed Task 3 or if Kiali and Jaeger are deployed in your cluster.
+To visualize what happens behind the scenes and how the amount transfer request is processed by the distributed services, use the Kiali and Jaeger dashboards that you started in Task 3.
+
+1. Open a new browser tab and navigate to the Kiali dashboard URL. For example, `http://localhost:20001/kiali`.
+2. Select Graph for the `otmm` namespace.
+3. Open a new browser tab and navigate to the Jaeger dashboard URL. For example, `http://localhost:16686`.
+4. From the **Service** drop-down list, select **istio-ingressgateway.istio-system**.
+5. Click **Find Traces**. You can see the list of traces with each trace representing a request.
+6. Select one of the traces to view.
+
+## Task 6: View Source Code of the Transfer Application (Optional)
+
+The source code of the Transfer application is in the following folder: /home/oracle/OTMM/otmm-23.4.1/samples/xa/java
+- Teller service source code: /home/oracle/OTMM/otmm-23.4.1/samples/xa/java/teller
+- Department 1 service source code: /home/oracle/OTMM/otmm-23.4.1/samples/xa/java/department-helidon
+- Department 2 service source code: /home/oracle/OTMM/otmm-23.4.1/samples/xa/java/department-spring
You can use the VIM editor or the Text Editor application to view the source code files.
To open the Text Editor, click Activities (top left) -> Show Applications -> Text Editor. In the Text Editor, select Open a File and browse to the source code files in the folders shown above.
-
## Learn More
-* [Develop Applications with XA](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/22.3/tmmdg/develop-xa-applications.html#GUID-D9681E76-3F37-4AC0-8914-F27B030A93F5)
+* [Develop Applications with XA](http://docs.oracle.com/en/database/oracle/transaction-manager-for-microservices/23.4.1/tmmdg/develop-xa-applications.html#GUID-D9681E76-3F37-4AC0-8914-F27B030A93F5)
## Acknowledgements
-* **Author** - Sylaja Kannan, Principal User Assistance Developer
-* **Contributors** - Brijesh Kumar Deo
-* **Last Updated By/Date** - Sylaja, January 2023
+* **Author** - Sylaja Kannan, Consulting User Assistance Developer
+* **Contributors** - Brijesh Kumar Deo and Bharath MC
+* **Last Updated By/Date** - Sylaja Kannan, November 2023
diff --git a/tmm-run-sample-apps/workshops/desktop/manifest.json b/tmm-run-sample-apps/workshops/desktop/manifest.json
index 949389b84..6041af776 100644
--- a/tmm-run-sample-apps/workshops/desktop/manifest.json
+++ b/tmm-run-sample-apps/workshops/desktop/manifest.json
@@ -13,15 +13,11 @@
"filename": "https://oracle-livelabs.github.io/common/labs/remote-desktop/using-novnc-remote-desktop.md"
},
{
- "title": "Lab 1: Run an LRA sample application",
+      "title": "Lab 1: Run the Travel Agent app, which uses LRA",
"filename": "../../run-lra-app/run-lra-app.md"
},
{
- "title": "Lab 2: Provision Autonomous Databases for use as resource manager",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 3: Run an XA sample application",
+      "title": "Lab 2: Run the Transfer app, which uses XA",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
diff --git a/tmm-run-sample-apps/workshops/levelup23/manifest.json b/tmm-run-sample-apps/workshops/levelup23/manifest.json
index 25564544b..b9f32425c 100644
--- a/tmm-run-sample-apps/workshops/levelup23/manifest.json
+++ b/tmm-run-sample-apps/workshops/levelup23/manifest.json
@@ -27,17 +27,13 @@
"filename": "../../run-lra-app/run-lra-app.md"
},
{
- "title": "Lab 4: Provision Autonomous Databases for use as resource manager",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 5: Run an XA sample application",
+ "title": "Lab 4: Run an XA sample application",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
"description": "Cleanly dispose of all OCI resources created by ORM for the workshop, and delete the stack",
"filename": "https://oracle-livelabs.github.io/common/labs/cleanup-stack/cleanup-stack.md",
- "title": "Lab 6: Environment Cleanup"
+ "title": "Lab 5: Environment Cleanup"
},
{
"title": "Need help?",
diff --git a/tmm-run-sample-apps/workshops/sandbox/manifest.json b/tmm-run-sample-apps/workshops/sandbox/manifest.json
index 58a1edd59..960e41ed3 100644
--- a/tmm-run-sample-apps/workshops/sandbox/manifest.json
+++ b/tmm-run-sample-apps/workshops/sandbox/manifest.json
@@ -13,16 +13,11 @@
"filename": "https://oracle-livelabs.github.io/common/labs/verify-compute/verify-compute-ssh-and-novnc.md"
},
{
- "title": "Lab 2: Run an LRA sample application",
+      "title": "Lab 2: Run the Travel Agent app, which uses LRA",
"filename": "../../run-lra-app/run-lra-app.md"
},
{
- "title": "Lab 3: Provision Autonomous Databases for Use as resource manager",
- "type": "sandbox",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 4: Run an XA sample application",
+      "title": "Lab 3: Run the Transfer app, which uses XA",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
diff --git a/tmm-run-sample-apps/workshops/tenancy/manifest.json b/tmm-run-sample-apps/workshops/tenancy/manifest.json
index 39600be97..511f3d0c0 100644
--- a/tmm-run-sample-apps/workshops/tenancy/manifest.json
+++ b/tmm-run-sample-apps/workshops/tenancy/manifest.json
@@ -20,25 +20,20 @@
{
"title": "Lab 2: Environment setup",
"description": "How to provision the workshop environment and connect to it",
- "filename": "https://oracle-livelabs.github.io/common/labs/setup-compute-generic/setup-compute-novnc-ssh.md"
+ "filename": "../../env-setup/setup-compute-novnc-ssh.md"
},
{
- "title": "Lab 3: Run an LRA sample application",
+      "title": "Lab 3: Run the Travel Agent app, which uses LRA",
"filename": "../../run-lra-app/run-lra-app.md"
},
{
- "title": "Lab 4: Provision Autonomous Databases for use as resource manager",
- "type": "tenancy",
- "filename": "../../adb-provision/adb-provision.md"
- },
- {
- "title": "Lab 5: Run an XA sample application",
+      "title": "Lab 4: Run the Transfer app, which uses XA",
"filename": "../../run-xa-app/run-xa-app.md"
},
{
"description": "Cleanly dispose of all OCI resources created by ORM for the workshop, and delete the stack",
"filename": "https://oracle-livelabs.github.io/common/labs/cleanup-stack/cleanup-stack.md",
- "title": "Lab 6: Environment Cleanup"
+ "title": "Lab 5: Environment Cleanup"
},
{
"title": "Need help?",
diff --git a/xmldb/provision/provision.md b/xmldb/provision/provision.md
index 1fa6309d8..d45f99b78 100644
--- a/xmldb/provision/provision.md
+++ b/xmldb/provision/provision.md
@@ -76,7 +76,7 @@ In this lab, you will:
- __Choose a compartment__ - Use the default compartment that includes your user id.
- - __Display Name__ - Enter a memorable name for the database for display purposes. For this lab, use __TEXTDB__.
+ - __Display Name__ - Enter a memorable name for the database for display purposes. For this lab, use __XMLDB__.
- __Database Name__ - Use letters and numbers only, starting with a letter. Maximum length is 14 characters. (Underscores not initially supported.) For this lab, use __TEXTDB__.
diff --git a/xmldb/queries/queries.md b/xmldb/queries/queries.md
index 2174d091e..654c1f089 100644
--- a/xmldb/queries/queries.md
+++ b/xmldb/queries/queries.md
@@ -49,13 +49,13 @@ The W3C XQuery link: [W3C Xquery] (https://www.w3.org/TR/xquery-31/)
SQL/XML functions, XMLQuery, XMLTable, XMLExists, and XMLCast, are defined by the SQL/XML standard as a general interface between SQL and XQuery languages. Using these functions, you can construct XML data using relational data, query relational data as if it were XML data, and construct relational data from XML data.
Here is a short overview of these SQL/XML functions:
-- XMLQuery - Use this function to construct or query XML data. It takes an XQuery expression as an argument and returns the result of evaluating the XQuery expression, as an XMLType instance.
+- XMLQuery - Use this function to construct or query XML data. It takes an XQuery expression as an argument and returns the result of evaluating the XQuery expression as an XMLType instance. (Example in Task 4.3)
-- XMLTable - Use this function XMLTable to decompose the result of an XQuery-expression evaluation into the relational rows and columns of a new, virtual table. You can insert this data into a pre-existing database table, or you can query it using SQL — in a join expression, for example.
+- XMLTable - Use this function to decompose the result of an XQuery-expression evaluation into the relational rows and columns of a new, virtual table. You can insert this data into a pre-existing database table, or you can query it using SQL, for example in a join expression. (Example in Task 4.5)
-- XMLExists - Use this function to check whether a given XQuery expression returns a non-empty XQuery sequence. If so, the function returns TRUE. Otherwise, it returns FALSE.
+- XMLExists - Use this function to check whether a given XQuery expression returns a non-empty XQuery sequence. If so, the function returns TRUE. Otherwise, it returns FALSE. (Example in Task 4.2)
-- XMLCast - Use this function to cast an XQuery value to a SQL data type.
+- XMLCast - Use this function to cast an XQuery value to a SQL data type. (Example in Task 4.4)
Here is the link for more information: [SQL/XML functions](https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xquery-and-XML-DB.html#GUID-4805CF1C-A00D-4B88-AF2E-00A9DB6F3392)
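As a rough, outside-the-database analogy for these four operations (query, tabulate into rows, test existence, cast to a scalar), here is a sketch using Python's standard-library XPath support. The XML snippet and element names are invented for illustration; in the lab you use the real SQL/XML functions against XMLType columns.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<PurchaseOrder><Reference>ABC-001</Reference>"
    "<LineItem Part='1' Quantity='3'/>"
    "<LineItem Part='2' Quantity='5'/></PurchaseOrder>"
)

# Like XMLQuery: evaluate a path expression and get back XML content.
reference = doc.find("Reference")

# Like XMLTable: decompose repeating elements into rows and columns.
rows = [(li.get("Part"), int(li.get("Quantity")))
        for li in doc.findall("LineItem")]

# Like XMLExists: does the expression select a non-empty result?
has_items = doc.find("LineItem") is not None

# Like XMLCast: cast an XML value to a scalar type (here, int).
total_qty = sum(qty for _, qty in rows)

print(reference.text, rows, has_items, total_qty)
```

The analogy is loose (ElementTree supports only a small XPath subset, not XQuery), but it conveys what each SQL/XML function contributes.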
diff --git a/xmldb/update/update.md b/xmldb/update/update.md
index 9945c8b61..cebd6343c 100644
--- a/xmldb/update/update.md
+++ b/xmldb/update/update.md
@@ -1,7 +1,7 @@
# Update XML content
## Introduction
-This lab will use the SQL Workshop in Database Actions from the Autonomous Transaction Processing page. In this lab, we will explore XQuery to update XML content in Oracle XML DB. XQuery is one of the main ways that you interact with XML data in Oracle XML DB. It is the W3C language designed for querying and updating XML data.
+This lab uses the SQL Workshop in Database Actions from the Autonomous Transaction Processing page. We will explore XQuery to update and manipulate XML content in Oracle XML DB. XQuery is one of the main ways that you interact with XML data in Oracle XML DB; it is the W3C language designed for querying and updating XML data.
The support for the XQuery Language is provided through a native implementation of SQL/XML functions: XMLQuery, XMLTable, XMLExists, and XMLCast. These SQL/XML functions are defined by the SQL/XML standard as a general interface between the SQL and XQuery languages.
@@ -45,13 +45,13 @@ The W3C XQuery link: [W3C Xquery] (https://www.w3.org/TR/xquery-31/)
SQL/XML functions, XMLQuery, XMLTable, XMLExists, and XMLCast, are defined by the SQL/XML standard as a general interface between SQL and XQuery languages. Using these functions, you can construct XML data using relational data, query relational data as if it were XML data, and construct relational data from XML data.
Here is a short overview of these SQL/XML functions:
-- XMLQuery - Use this function to construct or query XML data. It takes an XQuery expression as an argument and returns the result of evaluating the XQuery expression, as an XMLType instance.
+- XMLQuery - Use this function to construct or query XML data. It takes an XQuery expression as an argument and returns the result of evaluating the XQuery expression, as an XMLType instance. (Example in Task 4.2)
- XMLTable - Use this function to decompose the result of an XQuery-expression evaluation into the relational rows and columns of a new, virtual table. You can insert this data into a pre-existing database table, or you can query it using SQL, for example in a join expression.
-- XMLExists - Use this function to check whether a given XQuery expression returns a non-empty XQuery sequence. If so, the function returns TRUE. Otherwise, it returns FALSE.
+- XMLExists - Use this function to check whether a given XQuery expression returns a non-empty XQuery sequence. If so, the function returns TRUE. Otherwise, it returns FALSE. (Example in Task 4.2)
-- XMLCast - Use this function to cast an XQuery value to a SQL data type.
+- XMLCast - Use this function to cast an XQuery value to a SQL data type. (Example in Task 4.5)
Here is the link for more information: [SQL/XML functions](https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xquery-and-XML-DB.html#GUID-4805CF1C-A00D-4B88-AF2E-00A9DB6F3392)
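In the same spirit, updating XML content can be sketched in memory with Python's standard library. The element names below are invented; in the lab you perform the equivalent updates with XQuery expressions (inside XMLQuery) against XMLType data in the database.

```python
import xml.etree.ElementTree as ET

po = ET.fromstring("<PurchaseOrder><User>SBELL</User></PurchaseOrder>")

# Update the text of an existing node (loosely analogous to an
# XQuery Update "replace value of node" expression).
po.find("User").text = "SKING"

# Insert a new child node (loosely analogous to "insert node ... into ...").
ET.SubElement(po, "Status").text = "Approved"

xml_text = ET.tostring(po, encoding="unicode")
print(xml_text)
```

This only manipulates a local tree; the point is the shape of the two update operations (replace a value, insert a node) that the lab performs declaratively with XQuery.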