Table of Contents
- Upgrading your components
- Overview
- Environment variables for the binaries
- Ledger backup and restore
- Upgrade ordering nodes
- Set command environment variables
- Upgrade containers
- Upgrade the peers
- Set command environment variables
- Upgrade containers
- Verify peer upgrade completion
- Upgrade your CAs
- Upgrade Node SDK clients
- Upgrading CouchDB
- Upgrade Node chaincode shim
- Upgrade Chaincodes with vendored shim
- Updating the capability level of a channel
- Orderer system channel capabilities
- Set environment variables
- Orderer group
- Channel group
- Enable capabilities on existing channels
- Orderer group
- Channel group
- Application group
- Verify a transaction after capabilities have been enabled
- Enabling the new chaincode lifecycle
- Edit the channel configurations
- System channel updates
- Application channel updates
- Edit the peer organizations
- Edit the application channels
- Enable new lifecycle in core.yaml
If you are familiar with previous releases of Hyperledger Fabric, you’re aware that upgrading the nodes and channels to the latest version of Fabric is, at a high level, a four-step process:
- Backup the ledger and MSPs.
- Upgrade the orderer binaries in a rolling fashion to the latest Fabric version.
- Upgrade the peer binaries in a rolling fashion to the latest Fabric version.
- Update the orderer system channel and any application channels to the latest capability levels, where available. Note that some releases will have capabilities in all groups while other releases may have few or even no new capabilities at all.
For a look at how these upgrade processes are accomplished, please consult these tutorials:
- Upgrading your components. Components should be upgraded to the latest version before updating any capabilities.
- Updating the capability level of a channel. Completed after updating the versions of all nodes.
- Enabling the new chaincode lifecycle. Necessary to add organization-specific endorsement policies central to the new chaincode lifecycle for Fabric v2.0.
As the upgrading of nodes and increasing the capability levels of channels is by now considered a standard Fabric process, we will not show the specific commands for upgrading to the newest release. Similarly, there is no script in the fabric-samples repo that will upgrade a sample network from the previous release to this one, as there has been for previous releases.
Upgrading your components
We will now go through the process of upgrading these components.
Overview
At a high level, upgrading the binary level of your nodes is a two-step process:
- Backup the ledger and MSPs.
- Upgrade binaries to the latest version.
If you own both ordering nodes and peers, it is a best practice to upgrade the ordering nodes first. If a peer falls behind or is temporarily unable to process certain transactions, it can always catch up. If enough ordering nodes go down, by comparison, a network can effectively cease to function.
This topic presumes that these steps will be performed using Docker CLI commands. If you are utilizing a different deployment method (Rancher, Kubernetes, OpenShift, etc.), consult its documentation on how to use its CLI.
For native deployments, note that you will also need to update the YAML configuration file for the nodes (for example, the orderer.yaml file) with the one from the release artifacts.
To do this, backup the orderer.yaml or core.yaml file (for the peer) and replace it with the orderer.yaml or core.yaml file from the release artifacts. Then port any modified variables from the backed up orderer.yaml or core.yaml to the new one. Using a utility like diff may be helpful. Note that updating the YAML file from the release rather than updating your old YAML file is the recommended way to update your node YAML files, as it reduces the likelihood of making errors.
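For example, porting changes for an ordering node might look like the following sketch (the release artifacts path is illustrative):
# back up your existing configuration file
cp orderer.yaml orderer.yaml.bak
# replace it with the file shipped in the release artifacts
cp /path/to/release-artifacts/orderer.yaml ./orderer.yaml
# review the differences and port any variables you had modified into the new file
diff orderer.yaml.bak orderer.yaml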
This tutorial assumes a Docker deployment where the YAML files will be baked into the images and environment variables will be used to overwrite the defaults in the configuration files.
Environment variables for the binaries
When you deployed a peer or an ordering node, you had to set a number of environment variables relevant to its configuration. A best practice is to create a file for these environment variables, give it a name relevant to the node being deployed, and save it somewhere on your local file system. That way you can be sure that when upgrading the peer or ordering node you are using the same variables you set when creating it.
Here’s a list of some of the peer environment variables (with sample values — as you can see from the addresses, these environment variables are for a network deployed locally) that might be listed in the file. Note that you may or may not need to set all of these environment variables:
CORE_PEER_TLS_ENABLED=true
CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false
CORE_PEER_PROFILE_ENABLED=true
CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
CORE_PEER_ID=peer0.org1.example.com
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LISTENADDRESS=0.0.0.0:7051
CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID=Org1MSP
Here are some ordering node variables (again, these are sample values) that might be listed in the environment variable file for a node. Again, you may or may not need to set all of these environment variables:
ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
ORDERER_GENERAL_GENESISMETHOD=file
ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
ORDERER_GENERAL_LOCALMSPID=OrdererMSP
ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
ORDERER_GENERAL_TLS_ENABLED=true
ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
However you choose to set your environment variables, note that they will have to be set for each node you want to upgrade.
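For example, if you keep the variables in a file, one way to load them into your current shell before running the upgrade commands is the following sketch (the file name is illustrative):
# automatically export every variable assignment in the env file
set -a
. ./env-peer0.org1.example.com.list
set +a
The same file can also be passed directly to docker run with the --env-file flag, as the upgrade commands below demonstrate.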
Ledger backup and restore
While we will demonstrate the process for backing up ledger data in this tutorial, it is not strictly required to back up the ledger data of a peer or an ordering node (assuming the node is part of a larger group of nodes in an ordering service). This is because, even in the worst case of catastrophic failure of a peer (such as a disk failure), the peer can be brought up with no ledger at all. You can then have the peer re-join the desired channels; as a result, the peer will automatically create a ledger for each of the channels and will start receiving blocks via the regular block transfer mechanism from either the ordering service or the other peers in the channel. As the peer processes blocks, it will also build up its state database.
However, backing up ledger data enables the restoration of a peer without the time and computational costs associated with bootstrapping from the genesis block and reprocessing all transactions, a process that can take hours (depending on the size of the ledger). In addition, ledger data backups may help to expedite the addition of a new peer, which can be achieved by backing up the ledger data from one peer and starting the new peer with the backed up ledger data.
This tutorial presumes that the file path to the ledger data has not been changed from the default value of /var/hyperledger/production/ (for peers) or /var/hyperledger/production/orderer (for ordering nodes). If this location has been changed for your nodes, substitute the path to your ledger data in the commands below.
Note that there will be data for both the ledger and chaincodes at this file location. While it is a best practice to back up both, it is possible to skip the stateLeveldb, historyLeveldb, and chains/index folders at /var/hyperledger/production/ledgersData. While skipping these folders reduces the storage needed for the backup, recovering the peer from the backed up data may take more time, as these ledger artifacts will be re-constructed when the peer starts.
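As an illustration, a backup that skips these re-constructible artifacts might look like the following sketch (it uses the PEER_CONTAINER and LEDGERS_BACKUP variables defined later in this tutorial):
# copy the peer's production directory out of the stopped container
docker cp $PEER_CONTAINER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER_CONTAINER
# optionally remove the artifacts the peer can rebuild on startup
rm -rf ./$LEDGERS_BACKUP/$PEER_CONTAINER/ledgersData/stateLeveldb
rm -rf ./$LEDGERS_BACKUP/$PEER_CONTAINER/ledgersData/historyLeveldb
rm -rf ./$LEDGERS_BACKUP/$PEER_CONTAINER/ledgersData/chains/index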
If you are using CouchDB as the state database, there will be no stateLeveldb directory, as that data is stored in CouchDB instead. Similarly, if the peer starts up and finds that the CouchDB databases are missing or at a lower block height (because an older CouchDB backup was restored), the state database will automatically be re-constructed to catch up to the current block height. Therefore, if you back up the peer ledger data and CouchDB data separately, ensure that the CouchDB backup is always older than the peer backup.
Upgrade ordering nodes
Orderer containers should be upgraded in a rolling fashion (one at a time). At a high level, the ordering node upgrade process goes as follows:
- Stop the ordering node.
- Back up the ordering node’s ledger and MSP.
- Remove the ordering node container.
- Launch a new ordering node container using the relevant image tag.
Repeat this process for each node in your ordering service until the entire ordering service has been upgraded.
Set command environment variables
Export the following environment variables before attempting to upgrade your ordering nodes.
- ORDERER_CONTAINER: the name of your ordering node container. Note that you will need to export this variable for each node when upgrading it.
- LEDGERS_BACKUP: the place in your local filesystem where you want to store the ledger being backed up. As you will see below, each node being backed up will have its own subfolder containing its ledger. You will need to create this folder.
- IMAGE_TAG: the Fabric version you are upgrading to. For example, 2.0.
Note that you will have to set an image tag to ensure that the node you are starting is using the correct images. The process you use to set the tag will depend on your deployment method.
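For example (all values below are illustrative):
export ORDERER_CONTAINER=orderer.example.com
export LEDGERS_BACKUP=ledgers_backup
export IMAGE_TAG=2.0
# the backup folder must exist before the ledger is copied into it
mkdir -p ./$LEDGERS_BACKUP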
Upgrade containers
Let’s begin the upgrade process by bringing down the orderer:
docker stop $ORDERER_CONTAINER
Once the orderer is down, you’ll want to backup its ledger and MSP:
docker cp $ORDERER_CONTAINER:/var/hyperledger/production/orderer/ ./$LEDGERS_BACKUP/$ORDERER_CONTAINER
Then you can launch the new ordering node container by issuing the following command (the /opt/backup and /opt/msp mount paths are examples; mount the locations where you actually backed up the ledger and where your MSP is stored):
docker run -d -v /opt/backup/$ORDERER_CONTAINER/:/var/hyperledger/production/orderer/ \
            -v /opt/msp/:/etc/hyperledger/fabric/msp/ \
            --env-file ./env<name of node>.list \
            --name $ORDERER_CONTAINER \
            hyperledger/fabric-orderer:$IMAGE_TAG orderer
Once all of the ordering nodes have come up, you can move on to upgrading your peers.
Upgrade the peers
Peers should, like the ordering nodes, be upgraded in a rolling fashion (one at a time). As mentioned during the ordering node upgrade, ordering nodes and peers may be upgraded in parallel, but for the purposes of this tutorial we’ve separated the processes out. At a high level, we will perform the following steps:
- Stop the peer.
- Back up the peer’s ledger and MSP.
- Remove chaincode containers and images.
- Remove the peer container.
- Launch a new peer container using the relevant image tag.
Set command environment variables
Export the following environment variables before attempting to upgrade your peers.
- PEER_CONTAINER: the name of your peer container. Note that you will need to set this variable for each node.
- LEDGERS_BACKUP: the place in your local file system where you want to store the ledger being backed up. As you will see below, each node being backed up will have its own subfolder containing its ledger. You will need to create this folder.
- IMAGE_TAG: the Fabric version you are upgrading to. For example, 2.0.
Note that you will have to set an image tag to ensure that the node you are starting is using the correct images. The process you use to set the tag will depend on your deployment method.
Repeat this process for each of your peers until every node has been upgraded.
Upgrade containers
Let’s bring down the first peer with the following command:
docker stop $PEER_CONTAINER
We can then backup the peer’s ledger and MSP:
docker cp $PEER_CONTAINER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER_CONTAINER
With the peer stopped and the ledger backed up, remove the peer chaincode containers:
CC_CONTAINERS=$(docker ps | grep dev-$PEER_CONTAINER | awk '{print $1}')
if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi
And the peer chaincode images:
CC_IMAGES=$(docker images | grep dev-$PEER_CONTAINER | awk '{print $1}')
if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi
Then remove the peer container itself (since we will be giving our new container the same name as our old one):
docker rm -f $PEER_CONTAINER
Then you can launch the new peer container by issuing:
docker run -d -v /opt/backup/$PEER_CONTAINER/:/var/hyperledger/production/ \
            -v /opt/msp/:/etc/hyperledger/fabric/msp/ \
            --env-file ./env<name of node>.list \
            --name $PEER_CONTAINER \
            hyperledger/fabric-peer:$IMAGE_TAG peer node start
You do not need to relaunch the chaincode container. When the peer gets a request for a chaincode (invoke or query), it first checks whether it has a copy of that chaincode running. If so, it uses it. Otherwise, as in this case, the peer launches the chaincode (rebuilding the image if required).
Verify peer upgrade completion
It’s a best practice to ensure the upgrade has been completed properly with a chaincode invoke. Note that it should be possible to verify that a single peer has been successfully updated by querying one of the ledgers hosted on the peer. If you want to verify that multiple peers have been upgraded, and are updating your chaincode as part of the upgrade process, you should wait until peers from enough organizations to satisfy the endorsement policy have been upgraded.
Before you attempt this, you may want to upgrade peers from enough organizations to satisfy your endorsement policy. However, this is only mandatory if you are updating your chaincode as part of the upgrade process. If you are not updating your chaincode as part of the upgrade process, it is possible to get endorsements from peers running at different Fabric versions.
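For example, a simple query can confirm that an upgraded peer is serving requests again (the channel and chaincode names here are illustrative; substitute your own):
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'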
Upgrade your CAs
To learn how to upgrade your Fabric CA server, click over to the CA documentation.
Upgrade Node SDK clients
Upgrade Fabric and Fabric CA before upgrading Node SDK clients. Fabric and Fabric CA are tested for backwards compatibility with older SDK clients. While newer SDK clients often work with older Fabric and Fabric CA releases, they may expose features that are not yet available in the older Fabric and Fabric CA releases, and are not tested for full compatibility.
Use NPM to upgrade any Node.js client by executing these commands in the root directory of your application:
$ npm install fabric-client@latest
$ npm install fabric-ca-client@latest
These commands install the new version of both the Fabric client and Fabric-CA client and write the new versions to package.json.
Upgrading CouchDB
If you are using CouchDB as the state database, you should upgrade the peer’s CouchDB at the same time the peer is being upgraded.
To upgrade CouchDB:
- Stop CouchDB.
- Backup CouchDB data directory.
- Install the latest CouchDB binaries or update deployment scripts to use a new Docker image.
- Restart CouchDB.
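In a Docker deployment, those steps might look like the following sketch (the container name, host data path, image tag, and credentials are all illustrative; note that CouchDB 3.x requires an admin user and password to be set):
# stop CouchDB and back up its data directory
docker stop couchdb0
cp -r /opt/couchdb-data ./couchdb-backup
# remove the old container and start one from a newer image,
# re-mounting the same data directory
docker rm -f couchdb0
docker run -d --name couchdb0 \
    -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=adminpw \
    -v /opt/couchdb-data:/opt/couchdb/data \
    -p 5984:5984 \
    couchdb:3.1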
Upgrade Node chaincode shim
To move to the new version of the Node chaincode shim a developer would need to:
- Change the version of fabric-shim in their chaincode package.json from the old version to the new one.
- Repackage this new chaincode package and install it on all the endorsing peers in the channel.
- Perform an upgrade to this new chaincode. To see how to do this, check out Peer chaincode commands.
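For instance, the dependency bump in the chaincode’s package.json might look like this (the version shown is only an example; use the shim version that corresponds to your Fabric release):
{
  "dependencies": {
    "fabric-shim": "~2.0.0"
  }
}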
Upgrade Chaincodes with vendored shim
A number of third party tools exist that will allow you to vendor a chaincode shim. If you used one of these tools, use the same one to update your vendored chaincode shim and re-package your chaincode.
If your chaincode vendors the shim, after updating the shim version, you must install it to all peers which already have the chaincode. Install it with the same name, but a newer version. Then you should execute a chaincode upgrade on each channel where this chaincode has been deployed to move to the new version.
Updating the capability level of a channel
This section will show the process for updating the capability levels of your channels.
Whether you will need to update every part of the configuration for all of your channels will depend on the contents of the latest release as well as your own use case. For more information, check out Upgrading to the latest version of Fabric. Note that it may be necessary to update to the newest capability levels before using the features in the latest release, and it is considered a best practice to always be at the latest binary versions and capability levels.
Because updating the capability level of a channel involves the configuration update transaction process, we will be relying on our Updating a channel configuration topic for many of the commands.
As with any channel configuration update, updating capabilities is, at a high level, a three step process (for each channel):
- Get the latest channel config
- Create a modified channel config
- Create a config update transaction
We will enable these capabilities in the following order:
- Orderer system channel
- Orderer group
- Channel group
- Application channels
- Orderer group
- Channel group
- Application group
While it is possible to edit multiple parts of the configuration of a channel at the same time, in this tutorial we will show how this process is done incrementally. In other words, we will not bundle a change to the Orderer group and the Channel group of the system channel into one configuration change. This is because not every release will have both a new Orderer group capability and a Channel group capability.
Note that in production networks, it will not be possible or desirable for one user to be able to update all of these channels (and parts of configurations) unilaterally. The orderer system channel, for example, is administered exclusively by ordering organization admins (though it is possible to add peer organizations as ordering service organizations). Similarly, updating either the Orderer or Channel groups of a channel configuration requires the signature of an ordering service organization in addition to peer organizations. Distributed systems require collaborative management.
Orderer system channel capabilities
Because application channels copy the configuration of the orderer system channel by default, it is considered a best practice to update the capabilities of the system channel before any application channels. This mirrors the process of updating ordering nodes to the newest version before peers, as described in Upgrading your components.
Note that the orderer system channel is administered by ordering service organizations. By default this will be a single organization (the organization that created the initial nodes in the ordering service), but more organizations can be added here (for example, if multiple organizations have contributed nodes to the ordering service).
Make sure all of the ordering nodes in your ordering service have been upgraded to the required binary level before updating the Orderer and Channel capability. If an ordering node is not at the required level, it will be unable to process the config block with the capability and will crash. Similarly, note that if a new channel is created on this ordering service, all of the peers that will be joined to it must be at least at the node level corresponding to the Channel and Application capabilities, otherwise they will also crash when attempting to process the config block. For more information, check out Capabilities.
Set environment variables
You will need to export the following variables:
- CH_NAME: the name of the system channel being updated.
- CORE_PEER_LOCALMSPID: the MSP ID of the organization proposing the channel update. This will be the MSP of one of the orderer organizations.
- TLS_ROOT_CA: the absolute path to the TLS cert of your ordering node(s).
- CORE_PEER_MSPCONFIGPATH: the absolute path to the MSP representing your organization.
- ORDERER_CONTAINER: the name of an ordering node container. When targeting the ordering service, you can target any particular node in the ordering service. Your requests will be forwarded to the leader automatically.
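The commands in the following sections read the new capability levels from a local file named capabilities.json. A minimal sketch of such a file, assuming all groups are being raised to V2_0, might look like this:
{
  "channel": {
    "mod_policy": "Admins",
    "value": {
      "capabilities": {
        "V2_0": {}
      }
    },
    "version": "0"
  },
  "orderer": {
    "mod_policy": "Admins",
    "value": {
      "capabilities": {
        "V2_0": {}
      }
    },
    "version": "0"
  },
  "application": {
    "mod_policy": "Admins",
    "value": {
      "capabilities": {
        "V2_0": {}
      }
    },
    "version": "0"
  }
}
The jq commands below select the .orderer, .channel, and .application keys from this file, so the key names must match.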
Orderer group
For the commands on how to pull, translate, and scope the channel config, navigate to Step 1: Pull and translate the config. Once you have a modified_config.json, add the capabilities to the Orderer group of the config (as listed in capabilities.json) using this command:
jq -s '.[0] * {"channel_group":{"groups":{"Orderer": {"values": {"Capabilities": .[1].orderer}}}}}' config.json ./capabilities.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Note that because you are updating the system channel, the mod_policy for the system channel will only require the signature of ordering service organization admins.
Channel group
Once again, navigate to Step 1: Pull and translate the config. Once you have a modified_config.json, add the capabilities to the Channel group of the config (as listed in capabilities.json) using this command:
jq -s '.[0] * {"channel_group":{"values": {"Capabilities": .[1].channel}}}' config.json ./capabilities.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Note that because you are updating the system channel, the mod_policy for the system channel will only require the signature of ordering service organization admins. In an application channel, as you’ll see, you would normally need to satisfy the MAJORITY Admins policy of both the Application group (consisting of the MSPs of peer organizations) and the Orderer group (consisting of ordering service organizations), assuming you have not changed the default values.
Enable capabilities on existing channels
Now that we have updated the capabilities on the orderer system channel, we need to update the configuration of any existing application channels we want to bring to the new capability levels.
As you will see, the configuration of application channels is very similar to that of the system channel. This is what allows us to re-use capabilities.json and the same commands we used for updating the system channel (using different environment variables which we will discuss below).
Make sure all of the ordering nodes in your ordering service and peers on the channel have been upgraded to the required binary level before updating capabilities. If a peer or an ordering node is not at the required level, it will be unable to process the config block with the capability and will crash. For more information, check out Capabilities.
You will need to export the following variables:
- CH_NAME: the name of the application channel being updated. You will have to reset this variable for every channel you update.
- CORE_PEER_LOCALMSPID: the MSP ID of the organization proposing the channel update. This will be the MSP of your peer organization.
- TLS_ROOT_CA: the absolute path to the TLS cert of your peer organization.
- CORE_PEER_MSPCONFIGPATH: the absolute path to the MSP representing your organization.
- ORDERER_CONTAINER: the name of an ordering node container. When targeting the ordering service, you can target any particular node in the ordering service. Your requests will be forwarded to the leader automatically.
Orderer group
Navigate to Step 1: Pull and translate the config. Once you have a modified_config.json, add the capabilities to the Orderer group of the config (as listed in capabilities.json) using this command:
jq -s '.[0] * {"channel_group":{"groups":{"Orderer": {"values": {"Capabilities": .[1].orderer}}}}}' config.json ./capabilities.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Note the mod_policy for this capability defaults to the MAJORITY of the Admins of the Orderer group (in other words, a majority of the admins of the ordering service). Peer organizations can propose an update to this capability, but their signatures will not satisfy the relevant policy in this case.
Channel group
Navigate to Step 1: Pull and translate the config. Once you have a modified_config.json, add the capabilities to the Channel group of the config (as listed in capabilities.json) using this command:
jq -s '.[0] * {"channel_group":{"values": {"Capabilities": .[1].channel}}}' config.json ./capabilities.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Note that the mod_policy for this capability defaults to requiring signatures from both the MAJORITY of Admins in the Application and Orderer groups. In other words, both a majority of the peer organization admins and ordering service organization admins must sign this request.
Application group
Navigate to Step 1: Pull and translate the config. Once you have a modified_config.json, add the capabilities to the Application group of the config (as listed in capabilities.json) using this command:
jq -s '.[0] * {"channel_group":{"groups":{"Application": {"values": {"Capabilities": .[1].application}}}}}' config.json ./capabilities.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Note that the mod_policy for this capability defaults to requiring signatures from the MAJORITY of Admins in the Application group. In other words, a majority of peer organizations will need to approve. Ordering service admins have no say in this capability.
As a result, be very careful not to change this capability to a level that does not exist. Because ordering nodes neither understand nor validate Application capabilities, they will approve a configuration with any capability level and send the new config block to the peers to be committed to their ledgers. However, the peers will be unable to process the capability and will crash. And even if it were possible to drive a corrected configuration change to a valid capability level, the previous config block with the faulty capability would still exist on the ledger and would cause peers to crash when trying to process it.
This is one reason why a file like capabilities.json can be useful. It prevents a simple user error — for example, setting the Application capability to V20 when the intent was to set it to V2_0 — that can cause a channel to be unusable and unrecoverable.
Verify a transaction after capabilities have been enabled
It’s a best practice to ensure that capabilities have been enabled successfully with a chaincode invoke on all channels. Any nodes that have not been upgraded to a binary level that understands the new capabilities will crash. You will have to upgrade their binary level before they can be successfully restarted.
Enabling the new chaincode lifecycle
Users upgrading from v1.4.x to v2.0 will have to edit their channel configurations to enable the new lifecycle features. This process involves a series of channel configuration updates the relevant users will have to perform.
Note that the Channel and Application capabilities of your application channels will have to be updated to V2_0 for the new chaincode lifecycle to work. Check out Considerations for getting to 2.0 for more information.
Updating a channel configuration is, at a high level, a three step process (for each channel):
- Get the latest channel config
- Create a modified channel config
- Create a config update transaction
We will be performing these channel configuration updates by leveraging a file called enable_lifecycle.json, which contains all of the updates we will be making in the channel configurations. Note that in a production setting it is likely that multiple users would be making these channel update requests. However, for the sake of simplicity, we are presenting all of the updates as how they would appear in a single file.
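While the full file in a real network would contain a policy stanza for every peer organization on the channel, a trimmed sketch of enable_lifecycle.json (with Org1MSP standing in for a peer organization) might look like this:
{
  "org1Policies": {
    "Endorsement": {
      "mod_policy": "Admins",
      "policy": {
        "type": 1,
        "value": {
          "identities": [
            {
              "principal": {
                "msp_identifier": "Org1MSP",
                "role": "PEER"
              },
              "principal_classification": "ROLE"
            }
          ],
          "rule": {
            "n_out_of": {
              "n": 1,
              "rules": [
                {
                  "signed_by": 0
                }
              ]
            }
          }
        }
      },
      "version": "0"
    }
  },
  "appPolicies": {
    "Endorsement": {
      "mod_policy": "Admins",
      "policy": {
        "type": 3,
        "value": {
          "rule": "MAJORITY",
          "sub_policy": "Endorsement"
        }
      },
      "version": "0"
    },
    "LifecycleEndorsement": {
      "mod_policy": "Admins",
      "policy": {
        "type": 3,
        "value": {
          "rule": "MAJORITY",
          "sub_policy": "Endorsement"
        }
      },
      "version": "0"
    }
  }
}
Note that the jq commands below select .${ORGNAME}Policies from this file, so the org1Policies key must match your ORGNAME variable (here, org1).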
Edit the channel configurations
System channel updates
Because configuration changes to the system channel to enable the new lifecycle only involve parameters inside the configuration of the peer organizations within the channel configuration, each peer organization being edited will have to sign the relevant channel configuration update.
However, by default, the system channel can only be edited by system channel admins (typically these are admins of the ordering service organizations and not peer organizations), which means that the configuration updates to the peer organizations in the consortium will have to be proposed by a system channel admin and sent to the relevant peer organization to be signed.
You will need to export the following variables:
- CH_NAME: the name of the system channel being updated.
- CORE_PEER_LOCALMSPID: the MSP ID of the organization proposing the channel update. This will be the MSP of one of the ordering service organizations.
- CORE_PEER_MSPCONFIGPATH: the absolute path to the MSP representing your organization.
- TLS_ROOT_CA: the absolute path to the root CA certificate of the organization proposing the system channel update.
- ORDERER_CONTAINER: the name of an ordering node container. When targeting the ordering service, you can target any particular node in the ordering service. Your requests will be forwarded to the leader automatically.
- ORGNAME: the name of the organization you are currently updating.
- CONSORTIUM_NAME: the name of the consortium being updated.
Once you have set the environment variables, navigate to Step 1: Pull and translate the config.
Once you have a modified_config.json, add the lifecycle organization policy (as listed in enable_lifecycle.json) using this command:
jq -s ".[0] * {\"channel_group\":{\"groups\":{\"Consortiums\":{\"groups\": {\"$CONSORTIUM_NAME\": {\"groups\": {\"$ORGNAME\": {\"policies\": .[1].${ORGNAME}Policies}}}}}}}}" config.json ./enable_lifecycle.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
As stated above, these changes will have to be proposed by a system channel admin and sent to the relevant peer organization for signature.
Application channel updates
Edit the peer organizations
We need to perform a similar set of edits to all of the organizations on all application channels.
Note that unlike the system channel, peer organizations are able to make configuration update requests to application channels. If you are making a configuration change to your own organization, you will be able to make these changes without needing the signature of other organizations. However, if you are attempting to make a change to a different organization, that organization will have to approve the change.
You will need to export the following variables:
- CH_NAME: the name of the application channel being updated.
- ORGNAME: The name of the organization you are currently updating.
- TLS_ROOT_CA: the absolute path to the TLS cert of your ordering node.
- CORE_PEER_MSPCONFIGPATH: the absolute path to the MSP representing your organization.
- CORE_PEER_LOCALMSPID: the MSP ID of the organization proposing the channel update. This will be the MSP of one of the peer organizations.
- ORDERER_CONTAINER: the name of an ordering node container. When targeting the ordering service, you can target any particular node in the ordering service. Your requests will be forwarded to the leader automatically.
Once you have set the environment variables, navigate to Step 1: Pull and translate the config.
Once you have a modified_config.json, add the lifecycle organization policy (as listed in enable_lifecycle.json) using this command:
jq -s ".[0] * {\"channel_group\":{\"groups\":{\"Application\": {\"groups\": {\"$ORGNAME\": {\"policies\": .[1].${ORGNAME}Policies}}}}}}" config.json ./enable_lifecycle.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
Edit the application channels
After all of the application channels have been updated to include V2_0 capabilities, endorsement policies for the new chaincode lifecycle must be added to each channel.
You can use the same environment variables you set when updating the peer organizations. Note that in this case you will not be updating the configuration of an organization, so the ORGNAME variable will not be used.
Once you have set the environment variables, navigate to Step 1: Pull and translate the config.
Once you have a modified_config.json, add the channel endorsement policy (as listed in enable_lifecycle.json) using this command:
jq -s '.[0] * {"channel_group":{"groups":{"Application": {"policies": .[1].appPolicies}}}}' config.json ./enable_lifecycle.json > modified_config.json
Then, follow the steps at Step 3: Re-encode and submit the config.
For this channel update to be approved, the policy for modifying the Channel/Application section of the configuration must be satisfied. By default, this is a MAJORITY of the peer organizations on the channel.
Enable new lifecycle in core.yaml
If you follow the recommended process of using a tool like diff to compare the new version of core.yaml packaged with the binaries against your old one, you will not need to whitelist _lifecycle: enable yourself, because the new core.yaml already includes it under chaincode/system.
However, if you are updating your old node YAML file directly, you will have to add _lifecycle: enable to the system chaincodes whitelist.
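For reference, the relevant section of a v2.0 core.yaml looks roughly like this (the entries other than _lifecycle may differ in your file):
chaincode:
  system:
    # the system chaincode implementing the new chaincode lifecycle
    _lifecycle: enable
    cscc: enable
    lscc: enable
    qscc: enable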