Lido DAO Aragon omnibus voting scripts.
- This project uses the Brownie development framework. Learn more about Brownie.
- The Poetry dependency and packaging manager is used to bootstrap the environment and keep the repo sane.
The no-brainer workflow for running scripts & tests from Docker
```
git clone git@github.com:lidofinance/scripts.git
cd scripts
```

| ENV VAR | RUN TESTS | RUN VOTES |
|---|---|---|
| ETH_RPC_URL | mandatory | mandatory |
| ETH_RPC_URL2 | optional* | - |
| PINATA_CLOUD_TOKEN | - | mandatory |
| DEPLOYER | - | mandatory |
| ETHERSCAN_TOKEN | mandatory | mandatory |
| ETHERSCAN_TOKEN2 | optional* | - |
*may optionally be set when running tests asynchronously to reduce the risk of getting a 529 error
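For example, you could export the variables in your shell before starting the container. The values below are placeholders, not real endpoints or keys:

```
# Placeholder values; substitute your own RPC endpoint and API keys
export ETH_RPC_URL=https://mainnet.infura.io/v3/<your_project_id>
export ETHERSCAN_TOKEN=<your_etherscan_api_key>
export PINATA_CLOUD_TOKEN=<your_pinata_api_key>  # only needed to run votes
export DEPLOYER=<your_brownie_wallet_name>       # only needed to run votes
```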
Run the container in the scripts directory and specify the ENV VARs:
```
docker run --name scripts -v "$(pwd)":/root/scripts -e ETH_RPC_URL -e ETH_RPC_URL2 -e ETH_RPC_URL3 -e PINATA_CLOUD_TOKEN -e DEPLOYER -e ETHERSCAN_TOKEN -e ETHERSCAN_TOKEN2 -e ETHERSCAN_TOKEN3 -d ghcr.io/lidofinance/scripts:v19
```

Run:
```
make docker-init
```

Note: It may take up to 5 minutes for the container to initialize properly the first time.
Attach to the container:

```
docker exec -it scripts /bin/bash
```

To run a Hardhat node inside a deployed Docker container:
```
npx hardhat node --fork $ETH_RPC_URL
```

Do not forget to add a DEPLOYER account:
```
poetry run brownie accounts new <id>
```

You now have a fully functional environment to run scripts & tests in, which is linked to your local scripts repo, for example:
```
poetry run brownie test tests/acceptance/test_accounting_oracle.py -s
```

You can use the following shortcuts:
- `make test`: run all tests on Hardhat node
- `make test vote=XXX`: run all tests on Hardhat node with applied Vote #XXX
- `make test dg=XX`: run all tests with executed DG proposal #XX
- `make test-1/2`, `make test-2/2`: run tests divided into 2 parts (can be run asynchronously)
- `make enact-fork vote=scripts/vote_01_01_0001.py`: deploy vote and enact it on mainnet fork
- `make docker`: connect to the `scripts` docker container
- `make node`: start local mainnet node
- `make slots`: check storage slots against local node
- `make ci-prepare-environment`: prepare environment for CI tests
- `make init-scripts`: initialize scripts repository
- `make init-core`: initialize core repository
- `make test-core`: run core repository tests
or, to run core repository integration tests:

```
NODE_PORT=8547 make node
poetry run brownie test tests/vote_*.py --network mfh-3
make test-core
```

If your container has been stopped (for example, by a system reboot), start it:
```
docker start scripts
```

To release a new version of the image:

- Push code changes to the repo
- Wait for the approvals
- Add a tag `vXX`, where `XX` is the next release number, to the commit (see the tagging example after this list). You can refer to the Release page
- Wait for the `build and push image` workflow to finish successfully on the tagged commit
- In this README file, update the image version in the "Step 3. Run the container" section
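The tagging step above could look like this. The release number `v20` is hypothetical; use the actual next number and the approved commit:

```
# tag the approved commit with the next release number and push the tag
git tag v20 <commit_sha>
git push origin v20
```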
- Python >= 3.10, <3.11
- Pip >= 20.0
- Node >= 16.0
- yarn >= 1.22
Use the following command to install poetry:
```
pip install --user poetry==1.8.2
```

Alternatively, you could proceed with pipx:
```
pipx install poetry==1.8.2
```

Ensure that the poetry bin path is added to your `$PATH` env variable. Usually it's `$HOME/.local/bin` on most Unix-like systems.
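For example, on a typical Unix-like setup you might add the following to your shell profile (the path shown is the common default; adjust if yours differs):

```
# add poetry's install location to PATH
export PATH="$HOME/.local/bin:$PATH"
```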
To initialize dependencies and the lido-core repository for its integration tests, run:
```
make init
```

📝 While the previous steps are needed only once to init the environment from scratch, the current step is used regularly to activate the environment every time you need it:
```
poetry shell
```

Just use the Dockerised Hardhat Node or alternatively run it manually:
```
npx hardhat node --fork $ETH_RPC_URL
```

By default, you should start composing new scripts and tests using forked networks. You have three forked networks to work with:
- mainnet-fork
- holesky-fork
- sepolia-fork
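For instance, you could open an interactive Brownie console against one of the forks listed above. This is a minimal sketch and assumes the fork networks are configured in this repo's brownie config:

```
# open an interactive console on the mainnet fork
poetry run brownie console --network mainnet-fork
```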
To start a new vote on the live networks, you can proceed with:
- mainnet
- holesky
- sepolia
> [!CAUTION]
> You can't run tests on the live networks.
In a typical weekly omnibus workflow, you only need the mainnet-fork and mainnet networks. In case of a large test campaign on Lido upgrades, it could also be useful to go with the holesky and holesky-fork testnets first.
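As a rough sketch of that flow, assuming the vote script exposes a standard Brownie `main()` entry point (the script name below is just the example one used elsewhere in this README):

```
# rehearse the vote on the fork first, then start it on the live network
poetry run brownie run scripts/vote_01_01_0001.py --network mainnet-fork
poetry run brownie run scripts/vote_01_01_0001.py --network mainnet
```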
> [!WARNING]
> Holesky is partially supported. At the moment not all parameters are set in `configs/config_holesky.py` and acceptance/regression/snapshot tests are not operational.
>
> Sepolia is partially supported. At the moment not all parameters are set in `configs/config_sepolia.py` and acceptance/regression/snapshot tests are not operational.
Regardless of the chosen network, you always need to set the following var:
```
export WEB3_INFURA_PROJECT_ID=<infura_api_key>
```

To start a new vote please provide the DEPLOYER brownie account name (wallet):
```
export DEPLOYER=<brownie_wallet_name>
```

To run scripts that require decoding of EVM scripts and tests with contract name resolution via Etherscan, you should provide the Etherscan API token:
```
export ETHERSCAN_TOKEN=<etherscan_api_key>
```

To upload the Markdown vote description for a new vote to IPFS, you can use one of the following:
- Pinata Cloud API key.
- Infura API key for IPFS.
- web3.storage API token:
```
# Pinata Cloud
export PINATA_CLOUD_TOKEN=<pinata_api_key>

# For Infura Web3
export WEB3_INFURA_IPFS_PROJECT_ID=<infura_project_id>
export WEB3_INFURA_IPFS_PROJECT_SECRET=<infura_project_secret>

# For WEB3
export WEB3_STORAGE_TOKEN=<web3_storage_api_key>
```

See here to learn more about the Markdown description.
To skip events decoding while testing, set the following var:
```
export OMNIBUS_BYPASS_EVENTS_DECODING=1
```

To run tests with an already started vote, provide its id:
```
export OMNIBUS_VOTE_IDS=156
```

To run tests with already submitted DG proposals, provide their ids comma-separated:
```
export DG_PROPOSAL_IDS=1,2,3
```

To use local ABIs for events decoding, use:
```
export ENV_PARSE_EVENTS_FROM_LOCAL_ABI=1
```

To make the default report for acceptance and regression tests after voting execution, set:
```
export REPORT_AFTER_VOTE=1
```

To run test-1/2 at the default rpc node, use:
```
export SECONDARY_NETWORK=mfh-1
```

The directory contains state-based tests. These tests run every time the test suite is started; if there are any voting or upgrade scripts, they are applied beforehand.
The directory contains scenario tests. These tests run every time the test suite is started; if there are any voting or upgrade scripts, they are applied beforehand.
The directory contains snapshot-scenario tests. These tests run only if there are any upgrade scripts.
Tests for current voting
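For example, to run only the tests for the current votes, a sketch using the same glob as earlier in this README:

```
# run only the vote tests, with full output
poetry run brownie test tests/vote_*.py -s
```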
To run all the tests on mainnet-fork, execute:

```
brownie test
```

You can pass the network name explicitly with the `--network {network-name}` brownie command-line argument.
To reveal the full test output, pass the `-s` flag.
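Putting the two flags together, using the same example test file as earlier (mainnet-fork is the default forked network, passed here explicitly):

```
# run a single acceptance test with full output on an explicit network
poetry run brownie test tests/acceptance/test_accounting_oracle.py --network mainnet-fork -s
```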
See here to learn more about tests
- To forcibly bypass etherscan contract and event names decoding, set the `OMNIBUS_BYPASS_EVENTS_DECODING` environment variable to `1`. It could be useful in case of etherscan downtimes or usage of some unverified contracts (especially on the Görli Testnet).
- To re-use an already created `vote_id` you can pass the `OMNIBUS_VOTE_IDS` environment variable (e.g. `OMNIBUS_VOTE_IDS=104`).
- To re-use multiple created votes, list the ids comma-separated (e.g. `OMNIBUS_VOTE_IDS=104,105`); see the combined example after this list.
- To re-use an already submitted `proposal_id` you can pass the `DG_PROPOSAL_IDS` environment variable comma-separated (e.g. `DG_PROPOSAL_IDS=13,14`).
- To force the large CI runner usage, please name your branch with the `large-vote_` prefix.
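For instance, a hypothetical re-run against two already created votes with events decoding bypassed could look like this:

```
# re-use existing votes and skip etherscan decoding, then run the tests
export OMNIBUS_VOTE_IDS=104,105
export OMNIBUS_BYPASS_EVENTS_DECODING=1
poetry run brownie test -s
```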
Please use the shared pre-commit hooks to maintain code style:

```
poetry run pre-commit install
```

Please move your outdated scripts into the `archive/scripts` directory and outdated tests into `archive/tests`.
If you have encountered `Invalid hashes` errors while trying to run the previous command, please remove poetry's cache:
- GNU/Linux:

```
rm -rf ~/.cache/pypoetry/cache/
rm -rf ~/.cache/pypoetry/artifacts/
```

- macOS:

```
rm -rf ~/Library/Caches/pypoetry/cache
rm -rf ~/Library/Caches/pypoetry/artifacts
```