The scripts in this section launch a GKE cluster, set up Cloud Filestore for persistence, and configure a bastion host that you can log in to in order to start a Hyperledger Fabric network. The configuration file env.sh specifies the Google Cloud project, the region, and the number of GKE nodes of the GKE cluster, e.g., 3 nodes are used by the default configuration.
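For reference, a minimal env.sh might look like the sketch below. GCP_PROJECT, GCP_ZONE, GKE_CLUSTER, and ENV_NAME are variables referenced later in this guide; the other names are assumptions, so verify the exact variable names against your copy of env.sh.
# illustrative excerpt of env.sh (names other than GCP_PROJECT, GCP_ZONE,
# GKE_CLUSTER, and ENV_NAME are assumptions - check your copy of the file)
export ENV_NAME="fab"                 # name prefix for GCP artifacts
export GCP_PROJECT="fab-project-001"  # your Google Cloud project ID
export GCP_ZONE="us-west1-c"          # GCP zone for the cluster
export GKE_CLUSTER="${ENV_NAME}-cluster"  # GKE cluster name (assumed pattern)
export GKE_NODE_COUNT=3               # hypothetical name for the node count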
Install the Google Cloud SDK: https://cloud.google.com/sdk/docs/quickstarts, which includes the gcloud utility that is installed in the $HOME directory and used to script the configuration.
To get started, you need to create a Google project with billing enabled. On the GCP Console, click "My First Project", then in the popup window, click "NEW PROJECT" at the top-right corner, specify a new default project, e.g., fab-project-001, and click "CREATE" to create it. We shall use this project to set up everything, so you can delete the project and its associated artifacts when they are no longer needed.
You can then login from your local workstation to setup the work environment:
gcloud auth login
gcloud projects list
Select your Google account in the pop-up browser window to log in to Google Cloud. The second command should list the project that we created using the GCP Console. Note that gcloud stores its config data in $HOME/.config/gcloud.
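You can also make the new project the default for subsequent gcloud commands, and confirm the active configuration:
gcloud config set project fab-project-001
gcloud config list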
Edit env.sh to update GCP_PROJECT to the name of the project created above (or any existing project), and then create and start the GKE cluster with all defaults:
cd ./gcp
./gcp-util.sh create
This script accepts 2 parameters that let you specify a different Google Cloud environment, e.g.,
./gcp-util.sh create -n fab -r us-west1-c
would create a GKE cluster with the name prefix fab in the GCP zone us-west1-c.
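While the script runs, you can watch the cluster and its VM instances come up from another terminal:
gcloud container clusters list
gcloud compute instances list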
Wait 7-8 minutes for the cluster nodes to start up. When the cluster is up, the script prints a line such as:
gcloud compute ssh --ssh-key-file ./config/fab-key fab@fab-bastion
You can use this command to log in to the bastion host and create a Hyperledger Fabric network on the GKE cluster. Note that the ssh keypair for accessing the bastion host is generated in the config folder.
Log on to the bastion host, and log in to Google Cloud, e.g.,
gcloud compute ssh --ssh-key-file ./config/fab-key fab@fab-bastion
gcloud auth login
It will display an authentication URL in the bastion window. Copy and paste the URL from the bastion into a browser to log in, then copy and paste the verification code from the browser back into the bastion window.
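If the browser pop-up flow is inconvenient, gcloud can print the authentication URL directly in the terminal:
gcloud auth login --no-launch-browser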
In the bastion window, use the following command to download GKE cluster config:
source ./env.sh
gcloud container clusters get-credentials ${GKE_CLUSTER} --zone ${GCP_ZONE}
You can then verify the following configurations:
- The df command should show that a Google Cloud Filestore is already mounted at /mnt/share;
- kubectl get pod,svc --all-namespaces should show the Kubernetes system services and PODs;
- ls ~ should show that the latest code of this project is already downloaded at $HOME/fabric-operation.
The following steps start and smoke-test the default Hyperledger Fabric network with 2 peers and 3 orderers using etcd raft consensus. You can learn more details about these commands here.
cd ./fabric-operation/namespace
./k8s-namespace.sh create
This command creates a namespace for the default Fabric operator company, netop1, and sets it as the default namespace. The option -t gcp specifies GCP as the working environment; it is optional because the script automatically detects the environment when it is not specified (see the example after the checks below). You can verify this step using the following commands:
- kubectl get namespaces should show a list of namespaces, including the new namespace netop1;
- kubectl config current-context should show that the default namespace is set to netop1.
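To specify the environment type explicitly, as mentioned above, pass the -t flag:
./k8s-namespace.sh create -t gcp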
cd ../ca
./ca-server.sh start
# wait until 3 ca server and client PODs are in running state
./ca-crypto.sh bootstrap
This command starts 2 CA servers and a CA client, and generates crypto data according to the network specification, netop1.env. You can verify the result using the following commands:
- kubectl get pods should list 3 running PODs: ca-server, tlsca-server, and ca-client;
- ls /mnt/share/netop1.com/ should list folders containing crypto data, i.e., canet, cli, crypto, gateway, namespace, orderers, peers, and tool.
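Instead of polling kubectl get pods manually before the bootstrap step, you can block until the CA PODs are ready. This is a sketch that assumes the POD names listed above; adjust it if your PODs carry generated suffixes:
kubectl wait --for=condition=Ready pod/ca-server pod/tlsca-server pod/ca-client --timeout=300s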
cd ../msp
./msp-util.sh start
# wait until the tool POD is in running state
./msp-util.sh bootstrap
This command starts a Kubernetes POD to generate the genesis block and the transactions for creating a test channel, mychannel, based on the network specification. You can verify the result using the following commands:
- kubectl get pods should list a running POD, tool;
- ls /mnt/share/netop1.com/tool should show the generated artifacts: etcdraft-genesis.block, mychannel.tx, mychannel-anchors.tx, and configtx.yaml.
cd ../network
./network.sh start
This command starts the orderers and peers using the crypto and genesis block created in the previous steps. You can verify the network status using the following commands:
- kubectl get pods should list 3 running orderers and 2 running peers;
- kubectl logs orderer-2 should show that a raft leader is elected by all 3 orderer nodes;
- kubectl logs peer-1 -c peer should show the logs of peer-1, including successfully completed gossip communications with peer-0;
- ls /mnt/share/netop1.com/orderers/orderer-0/data shows the persistent storage of orderer-0, similar to the other orderer nodes;
- ls /mnt/share/netop1.com/peers/peer-0/data shows the persistent storage of peer-0, similar to the other peer nodes.
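For example, to confirm the raft leader election from the orderer log (a sketch; the exact log message varies with the Fabric version):
kubectl logs orderer-2 | grep -i leader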
cd ../network
./network.sh test
This command creates the test channel mychannel, installs and instantiates a test chaincode, and then executes a transaction and a query to verify the working network. You can verify the result as follows:
- The last result printed by the test should be 90;
- The orderer data folder, e.g., /mnt/share/netop1.com/orderers/orderer-0/data, should show a block file added under the new channel's chain, chains/mychannel;
- The peer data folder, e.g., /mnt/share/netop1.com/peers/peer-0/data, should show a new chaincode, mycc.1.0, added to the chaincodes folder, and a transaction block file created under ledgersData/chains/chains/mychannel.
Refer to gateway for more details on how to build and start a REST API service that lets applications interact with one or more Fabric networks. The following commands can be used on the bastion host to start a gateway service that exposes a Swagger UI.
cd ../service
# build the gateway service from source code, which creates executable 'gateway-linux'
make dist
# config and start gateway service for GCP
./gateway.sh start
The last command starts 2 PODs to run the gateway service and creates a load-balancer service with a publicly accessible port. The load-balancer port is open to the public by default, which is convenient for dev and test. To restrict access by source IP, you can add a field loadBalancerSourceRanges to the gateway service definition. Refer to the link for more details.
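For example, you could patch the running service to accept traffic only from a given CIDR block; the service name gateway is an assumption here, so check kubectl get svc for the actual name:
# allow only the given CIDR to reach the load-balancer (service name assumed)
kubectl patch svc gateway -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'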
The script prints the URL of the load-balancer, e.g.,
http://34.82.144.78:7081/swagger
Copy and paste the URL (your actual URL will be different) into a Chrome web browser, and use it to test the sample chaincode as described in gateway.
Refer to dovetail for more details about Project Dovetail, a visual programming tool for modeling Hyperledger Fabric chaincode and client apps.
A Dovetail chaincode model, e.g., marble.json, is a JSON file that implements a sample chaincode using the TIBCO Flogo visual modeler. Use the following script to build and instantiate the chaincode.
# create chaincode package
cd ${HOME}/fabric-operation/msp
./msp-util.sh build-cds -m ../dovetail/samples/marble/marble.json
# install and instantiate chaincode
cd ../network
./network.sh install-chaincode -n peer-0 -f marble_cc_1.0.cds
./network.sh install-chaincode -n peer-1 -f marble_cc_1.0.cds
./network.sh instantiate-chaincode -n peer-0 -c mychannel -s marble_cc -v 1.0 -m '{"Args":["init"]}'
You can send test transactions by using the Swagger UI of the gateway service, e.g., insert a new marble by posting the following transaction:
{
"connection_id": "16453564131388984820",
"type": "INVOKE",
"chaincode_id": "marble_cc",
"transaction": "initMarble",
"parameter": [
"marble1","blue","35","tom"
]
}
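The same transaction can also be sent from the command line. This is a sketch only: the endpoint path /v1/transaction is an assumption, so check the Swagger UI for the actual path, and substitute your own load-balancer URL:
# post the sample transaction (endpoint path is an assumption)
curl -X POST http://34.82.144.78:7081/v1/transaction \
  -H "Content-Type: application/json" \
  -d '{"connection_id":"16453564131388984820","type":"INVOKE","chaincode_id":"marble_cc","transaction":"initMarble","parameter":["marble1","blue","35","tom"]}'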
Using the same Flogo modeling UI, we can implement a client app, e.g., marble_client.json, that updates or queries the Fabric distributed ledger via the marble chaincode. Use the following script to build and run the client app as a Kubernetes service.
# generate network config if you skipped the previous gateway test
cd ${HOME}/fabric-operation/service
./gateway.sh config
# build and start client using the generated network config
cd ../dovetail
./dovetail.sh config-app -m samples/marble_client/marble_client.json
./dovetail.sh start-app -m marble_client.json
The above command starts 2 instances of the marble-client and exposes a load-balancer end-point for other applications to invoke the service. Once the script completes successfully, it prints out the service end-point, e.g.,
access marble-client service at http://34.83.160.221:7091
You can use this end-point to update or query the blockchain ledger. marble.postman_collection.json contains a set of REST messages that you can import into Postman to invoke the marble-client REST APIs.
Stop the client app after tests complete:
./dovetail.sh stop-app -m marble_client.json
cd ../network
./network.sh shutdown -d
This command shuts down the orderers and peers; the last argument -d means to delete all persistent data as well. If you omit -d, the test ledger files are kept in the Google Cloud Filestore, so they can be loaded when the network restarts. You can verify the result using the following commands:
- kubectl get svc,pod should not list any running orderers or peers;
- The orderers' and peers' persistent data folders, e.g., /mnt/share/netop1.com/peers/peer-0/data, are deleted if the option -d is used.
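To stop the network but keep the ledger data for a later restart, simply omit the -d flag:
./network.sh shutdown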
You can exit from the bastion host, and clean up everything created in GCP when it is no longer needed, i.e.,
cd ./gcp
./gcp-util.sh cleanup -n fab -r us-west1-c
This will clean up the GKE cluster and the Cloud Filestore created in the previous steps. Make sure that you supply the same parameters as those of the previous gcp-util.sh create command if they differ from the default values.
If your local workstation has kubectl installed, and you want to execute kubectl commands directly from the localhost instead of going through the bastion host, you can set the env,
export KUBECONFIG=/path/to/fabric-operation/gcp/config/config-fab.yaml
where /path/to is the location of this project on your localhost, and config-fab.yaml is named after the ENV_NAME specified in env.sh. The file is created for you when you execute gcp-util.sh create, and it is valid only while the GKE cluster is running.
You can then use kubectl commands against the GKE cluster from your localhost directly, e.g.,
kubectl get pod,svc --all-namespaces
The containers created by the scripts use the name of the specified operating company, e.g., netop1, as the Kubernetes namespace. To save you from repeatedly typing the namespace in kubectl commands, you can set netop1 as the default namespace using the following commands:
kubectl config view
kubectl config set-context netop1 --namespace=netop1 --cluster=fab-cluster --user=clusterUser_fabRG_fabAKSCluster
kubectl config use-context netop1
Note: replace the values of cluster and user in the second command with the corresponding output from the first command. This configuration is done automatically on the bastion host when the k8s-namespace.sh create script is called.
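On recent kubectl versions, you can also switch only the namespace of the current context, without re-specifying the cluster and user:
kubectl config set-context --current --namespace=netop1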