
Storage documentation (#850)

* new structure
* joystream service node
* Storage node guide
* Distributors
* Metadata
* Setup
* Add docker installation
* Update README.md
* add monitoring
* add DP monitoring
* add monitoring geoip
* Create Strorage_WG_Deputy_Leader.json
* Create Strorage_WG_Worker.json
* Create Upload Test procedure.md
* Create GraphQL.md
* Create Budget.md
* Create Commands.md
* tools dir
* Create mainnet-questions-for-Leads.md
yasiryagi 2 years ago
parent
commit
1b6a250bb1
40 changed files with 3296 additions and 127 deletions
1. +233 -0 working-groups/distributors/NodeSteup/README.md
2. +64 -0 working-groups/distributors/NodeSteup/Upgrade/README.md
3. +15 -0 working-groups/distributors/NodeSteup/distributor-node.service
4. +44 -0 working-groups/distributors/NodeSteup/hosting/Caddyfile
5. +134 -0 working-groups/distributors/NodeSteup/hosting/README.md
6. +18 -0 working-groups/distributors/NodeSteup/hosting/caddy.service
7. +167 -0 working-groups/distributors/NodeSteup/joystream-node/README.md
8. +20 -0 working-groups/distributors/NodeSteup/joystream-node/joystream-node.service
9. +66 -0 working-groups/distributors/NodeSteup/monitoring/README.md
10. +70 -0 working-groups/distributors/NodeSteup/monitoring/config/metricbeat/metricbeat.yml
11. +37 -0 working-groups/distributors/NodeSteup/monitoring/config/packetbeat/packetbeat.yml
12. +42 -0 working-groups/distributors/NodeSteup/monitoring/docker-compose.yml
13. +19 -127 working-groups/distributors/NodeSteup/query-node/README.md
14. +197 -0 working-groups/storage-group/NodeSteup/README.md
15. +55 -0 working-groups/storage-group/NodeSteup/Upgrade/README.md
16. +46 -0 working-groups/storage-group/NodeSteup/hosting/Caddyfile
17. +137 -0 working-groups/storage-group/NodeSteup/hosting/README.md
18. +18 -0 working-groups/storage-group/NodeSteup/hosting/caddy.service
19. +167 -0 working-groups/storage-group/NodeSteup/joystream-node/README.md
20. +20 -0 working-groups/storage-group/NodeSteup/joystream-node/joystream-node.service
21. +51 -0 working-groups/storage-group/NodeSteup/monitoring/README.md
22. +70 -0 working-groups/storage-group/NodeSteup/monitoring/config/metricbeat/metricbeat.yml
23. +37 -0 working-groups/storage-group/NodeSteup/monitoring/config/packetbeat/packetbeat.yml
24. +42 -0 working-groups/storage-group/NodeSteup/monitoring/docker-compose.yml
25. +158 -0 working-groups/storage-group/NodeSteup/query-node/README.md
26. +23 -0 working-groups/storage-group/NodeSteup/storage-node.service
27. +74 -0 working-groups/storage-group/SOP/README.md
28. +27 -0 working-groups/storage-group/leader/Budget.md
29. +58 -0 working-groups/storage-group/leader/Commands.md
30. +99 -0 working-groups/storage-group/leader/GraphQL.md
31. +40 -0 working-groups/storage-group/leader/Initial_ setup_commands.md
32. +49 -0 working-groups/storage-group/leader/README.md
33. +27 -0 working-groups/storage-group/leader/Upload Test procedure.md
34. +59 -0 working-groups/storage-group/leader/mainnet/mainnet-questions-for-Leads.md
35. +61 -0 working-groups/storage-group/leader/opening/Strorage_WG_Deputy_Leader.json
36. +67 -0 working-groups/storage-group/leader/opening/Strorage_WG_Leader.json.json
37. +60 -0 working-groups/storage-group/leader/opening/Strorage_WG_Worker.json
38. +11 -0 working-groups/storage-group/leader/tools/README.md
39. +97 -0 working-groups/storage-group/leader/tools/print_bags.py
40. +617 -0 working-groups/storage-group/leader/tools/report.py

+ 233 - 0
working-groups/distributors/NodeSteup/README.md

@@ -0,0 +1,233 @@
+
+# Instructions
+
+The instructions below will assume you are running as `root`. This makes the instructions somewhat easier, but less safe and robust.
+
+Note that this has been tested on a fresh image of `Ubuntu 20.04 LTS`.
+
+
+## Upgrade 
+
+To upgrade the node, [go here for the upgrade guide](./Upgrade/README.md)
+ 
+
+## Initial setup
+
+```
+$ apt-get update && apt-get upgrade -y
+$ apt install vim git curl -y
+```
+### Setup hosting
+[Go here for the installation guide](./hosting/README.md)
+### Setup joystream-node
+[Go here for the installation guide](./joystream-node/README.md)
+### Setup Query Node
+[Go here for the installation guide](./query-node/README.md)
+
+
+
+## Install and Setup the Distributor Node
+> If you have done this on the query node setup, you can skip this section.
+
+```
+$ git clone https://github.com/Joystream/joystream.git
+$ cd joystream
+$ ./setup.sh
+# this requires you to start a new session. If you are using a VPS:
+$ exit
+$ ssh user@ipOrURL
+$ cd joystream
+$ ./build-packages.sh
+$ yarn joystream-distributor --help
+```
+
+### Applying for a Distributor opening
+
+Click [here](https://testnet.joystream.org) to open the `Pioneer app` in your browser. Then follow instructions [here](https://github.com/Joystream/helpdesk#get-started) to generate a set of `Keys`, get tokens, and sign up for a `Membership`. This `key` will be referred to as the `member` key.
+
+Make sure to save the `5YourJoyMemberAddress.json` file. This key will require tokens to be used as stake for the `Distributor Provider` application (`application stake`) and further stake may be required if you are selected for the role (`role stake`).
+
+To check for current openings, visit [this page](https://testnet.joystream.org/#/working-groups/opportunities) on Pioneer and look for any `Distributor Provider` openings that are accepting applications. If there is an opening available, fill in the requested details in the application form and stake the tokens needed to apply (when prompted, you can sign a transaction for this purpose).
+
+During this process you will be provided with a role key, which will be made available to download in the format `5YourDistributorRoleKey.json`. If you set a password for this key, remember it :)
+
+The next steps (below) will only apply if you are a successful applicant.
+
+
+### Setup and Configure the Distributor Node
+
+On the machine/VPS you want to run your distributor node:
+
+```
+$ mkdir ~/keys/
+```
+
+Assuming you are running the distributor node on a VPS via ssh, on your local machine:
+```
+# Go to the directory where you saved your <5YourDistributorRoleKey.json>, then rename it to
+
+distributor-role-key.json
+
+$ scp distributor-role-key.json <user>@<your.vps.ip.address>:/root/keys/
+```
+
+**Make sure your [Joystream full node](#Setup-joystream-node) and [Query Node](#Setup-Query-Node) are fully synced before you move to the next step(s)!**
+
+### Config File
+The default `config.yml` file can be found below. Note that you only need to modify a few lines.
+You may also want to consider changing `id`, `maxFiles` and `maxSize`.
+```
+nano ~/joystream/distributor-node/config.yml
+---
+
+id: <your node name>
+endpoints:
+  queryNode: http://localhost:8081/graphql
+  joystreamNodeWs: ws://localhost:9944
+directories:
+  assets: ./local/data
+  cacheState: ./local/cache
+logs:
+  file:
+    level: debug
+    path: ./local/logs
+    maxFiles: 30 # 30 days or 30 * 50 MB
+    maxSize: 50485760 # 50 MB
+  console:
+    level: verbose
+  # elastic:
+  #   level: info
+  #   endpoint: http://localhost:9200
+limits:
+  storage: 100G
+  maxConcurrentStorageNodeDownloads: 100
+  maxConcurrentOutboundConnections: 300
+  outboundRequestsTimeoutMs: 5000
+  pendingDownloadTimeoutSec: 3600
+  maxCachedItemSize: 1G
+intervals:
+  saveCacheState: 60
+  checkStorageNodeResponseTimes: 60
+  cacheCleanup: 60
+publicApi:
+  port: 3334
+operatorApi:
+  port: 3335
+  hmacSecret: this-is-not-so-secret
+keys:
+  - suri: //Alice
+  # - mnemonic: "escape naive annual throw tragic achieve grunt verify cram note harvest problem"
+  #   type: ed25519
+  # - keyfile: "/path/to/distributor-role-key.json"
+workerId: 0
+```
+The following lines must be changed:
+```
+# Comment out:
+  - suri: //Alice
+# uncomment and edit
+  - keyfile: "/root/keys/distributor-role-key.json"
+
+# replace 0 with your <workerId>
+workerId: 0
+
+# replace with a real secret
+hmacSecret: this-is-not-so-secret
+```
+
+- `endpoints:` If you are not running your own node and/or query node:
+  - change both to working endpoints
+- `limits:` These numbers should depend on decisions made by the Lead
+- `hmacSecret:` replace this with a real secret
+- `directories:` You may want to change these. Especially if you are renting extra storage volumes
+  - `/path/to/storage/volume`
+
+### Accept Invitation
+Once hired, the Distributor Lead will invite you to a "bucket". Before this is done, you will not be able to participate. Assuming:
+- your Worker ID is `<workerId>`
+- the Lead has invited you to bucket family `<bucketFamilyId>` with index `<bucketId>` -> `<bucketFamilyId>:<bucketId>`
+
+```
+$ cd ~/joystream/distributor-node
+$ yarn joystream-distributor operator:accept-invitation -B <bucketFamilyId>:<bucketId> -w <workerId>
+```
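+
+For example, with hypothetical values `bucketFamilyId=0`, `bucketId=1` and `workerId=2`, that would be:
+
+```
+$ yarn joystream-distributor operator:accept-invitation -B 0:1 -w 2
+```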
+
+### Set Metadata
+When you have accepted the invitation, you have to set metadata for your node. If your VPS is in Frankfurt, Germany:
+
+```
+$ nano ~/joystream/distributor-node/metadata.json
+# Modify, and paste in everything below the dashed line
+---
+{
+  "endpoint": "https://<your.cool.url>/distributor/",
+  "location": {
+    "countryCode": "DE",
+    "city": "Frankfurt",
+    "coordinates": {
+      "latitude": 52,
+      "longitude": 15
+    }
+  },
+  "extra": "<Node ID>: <Location>, Xcores, <RAM>G, <SSD>G "
+}
+```
+Where:
+- The location should be correct; you can check where your IP resolves with [IPLocation](https://www.iplocation.net/)
+- `extra` is not that critical. It can be useful to add some info on your max capacity.
+
+Then set it on-chain with:
+
+```
+$ cd ~/joystream/distributor-node
+$ yarn joystream-distributor operator:set-metadata -B <bucketFamilyId>:<bucketId> -w <workerId> -i /path/to/metadata.json
+```
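+
+For example, with the same hypothetical `0:1` bucket and worker `2`, and the metadata file created above:
+
+```
+$ yarn joystream-distributor operator:set-metadata -B 0:1 -w 2 -i ~/joystream/distributor-node/metadata.json
+```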
+
+## Deploy the Distributor Node
+First, create a `systemd` file. Example file below:
+
+```
+$ nano /etc/systemd/system/distributor-node.service
+
+# Modify, and paste in everything below the dashed line
+---
+[Unit]
+Description=Joystream Distributor Node
+After=network.target joystream-node.service
+
+[Service]
+User=root
+WorkingDirectory=/root/joystream/
+LimitNOFILE=10000
+ExecStart=/root/.volta/bin/yarn joystream-distributor start \
+        -c /root/joystream/distributor-node/config.yml
+Restart=on-failure
+StartLimitInterval=600
+
+[Install]
+WantedBy=multi-user.target
+```
+
+To start the node:
+
+```
+$ systemctl start distributor-node
+# If everything works, the command returns silently. Verify with:
+$ journalctl -f -n 200 -u distributor-node
+
+# If it looks ok, it probably is :)
+---
+
+# To have the distributor-node start automatically at reboot:
+$ systemctl enable distributor-node
+# If you want to stop the distributor node, either to edit the distributor-node.service file or some other reason:
+$ systemctl stop distributor-node
+```
+
+### Verify everything is working
+
+In your browser, try:
+`https://<your.cool.url>/distributor/api/v1/status`.
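+
+Or check from the command line with any HTTP client, e.g.:
+
+```
+$ curl https://<your.cool.url>/distributor/api/v1/status
+```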
+
+# Troubleshooting
+If you had any issues setting it up, you may find your answer here!

+ 64 - 0
working-groups/distributors/NodeSteup/Upgrade/README.md

@@ -0,0 +1,64 @@
+# Upgrade 
+## Go to the Joystream root directory
+```
+cd joystream
+```
+## Back up your config files 
+```
+cp .env /someBackupLocation  # save the old parameters
+cp <root to folder>/distributor-node/config.yml /someBackupLocation
+cp <root to folder>/distributor-node/metadata.json /someBackupLocation
+```
+## Stop the distribution service 
+```
+systemctl stop distributor-node.service
+```
+## Stop the query node
+In **./query-node/kill.sh**, you may want to change the line below to keep the database:
+```
+# change
+docker-compose -f ../docker-compose.yml rm -vsf db
+# to
+docker-compose -f ../docker-compose.yml rm -sf db
+```
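+
+If that line appears verbatim in `./query-node/kill.sh`, a `sed` one-liner (a sketch; verify the result with `cat` afterwards) makes the same edit:
+
+```
+sed -i 's/rm -vsf db/rm -sf db/' ./query-node/kill.sh
+```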
+
+Now kill the containers
+```
+./query-node/kill.sh
+```
+## Get the latest and greatest repo
+```
+git stash
+git pull
+```
+
+## Apply the .env settings - you can reuse values from the old backup file
+
+## Run the setup script
+```
+ ./setup.sh
+```
+## Log out and log back in
+
+## Build
+
+```
+./build-packages.sh 
+```
+## Start the services
+```
+query-node/start.sh
+systemctl start distributor-node.service
+```
+
+## Verify
+### Verify the indexer and processor
+```
+docker ps
+docker logs -f -n 100 indexer
+docker logs -f -n 100 processor
+```
+
+### Verify distribution 
+```
+https://<your.cool.url>/distributor/api/v1/status
+```
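+
+Or from the command line, e.g.:
+
+```
+curl https://<your.cool.url>/distributor/api/v1/status
+```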

+ 15 - 0
working-groups/distributors/NodeSteup/distributor-node.service

@@ -0,0 +1,15 @@
+[Unit]
+Description=Joystream Distributor Node
+After=network.target joystream-node.service
+
+[Service]
+User=root
+WorkingDirectory=/root/joystream/
+LimitNOFILE=10000
+ExecStart=/root/.volta/bin/yarn joystream-distributor start \
+        -c /root/joystream/distributor-node/config.yml
+Restart=on-failure
+StartLimitInterval=600
+
+[Install]
+WantedBy=multi-user.target

+ 44 - 0
working-groups/distributors/NodeSteup/hosting/Caddyfile

@@ -0,0 +1,44 @@
+# Joystream-node
+wss://<your.cool.url>/rpc {
+        reverse_proxy localhost:9944
+}
+
+
+# Query-node
+https://<your.cool.url> {
+        log {
+                output stdout
+        }
+        route /server/* {
+                uri strip_prefix /server
+                reverse_proxy localhost:8081
+        }
+        route /graphql {
+                reverse_proxy localhost:8081
+        }
+        route /graphql/* {
+                reverse_proxy localhost:8081
+        }
+        route /gateway/* {
+                uri strip_prefix /gateway
+                reverse_proxy localhost:4000
+        }
+        route /@apollographql/* {
+                reverse_proxy localhost:8081
+        }
+}
+
+# Distributor Node
+https://<your.cool.url>/distributor/* {
+        log {
+                output stdout
+        }
+        route /distributor/* {
+                uri strip_prefix /distributor
+                reverse_proxy localhost:3334
+        }
+        header /distributor {
+                Access-Control-Allow-Methods "GET, PUT, HEAD, OPTIONS, POST"
+                Access-Control-Allow-Headers "GET, PUT, HEAD, OPTIONS, POST"
+        }
+}

+ 134 - 0
working-groups/distributors/NodeSteup/hosting/README.md

@@ -0,0 +1,134 @@
+
+In order to allow users to upload and download, you have to set up hosting with an actual domain, as both Chrome and Firefox require `https://`. If you have a "spare" domain or subdomain you don't mind using for this purpose, go to your domain registrar and point it to the IP you want. If you don't, you will need to purchase one.
+
+# Caddy
+To configure SSL certificates, the easiest option is to use [caddy](https://caddyserver.com/), but feel free to take a different approach. Note that if you are using Caddy commercially, you need to acquire a license. Please check their terms and make sure you comply with what is considered personal use.
+
+For the best setup, you should use the "official" [documentation](https://caddyserver.com/docs/).
+
+The instructions below are for Caddy v2.4.6:
+```
+$ wget https://github.com/caddyserver/caddy/releases/download/v2.4.6/caddy_2.4.6_linux_amd64.tar.gz
+$ tar -vxf caddy_2.4.6_linux_amd64.tar.gz
+$ mv caddy /usr/bin/
+# Test that it's working:
+$ caddy version
+```
+
+# Configure the `Caddyfile`:
+```
+$ nano ~/Caddyfile
+# Modify, and paste in everything below the dashed line
+---
+# Joystream-node
+wss://<your.cool.url>/rpc {
+        reverse_proxy localhost:9944
+}
+
+
+# Query-node
+https://<your.cool.url> {
+        log {
+                output stdout
+        }
+        route /server/* {
+                uri strip_prefix /server
+                reverse_proxy localhost:8081
+        }
+        route /graphql {
+                reverse_proxy localhost:8081
+        }
+        route /graphql/* {
+                reverse_proxy localhost:8081
+        }
+        route /gateway/* {
+                uri strip_prefix /gateway
+                reverse_proxy localhost:4000
+        }
+        route /@apollographql/* {
+                reverse_proxy localhost:8081
+        }
+}
+
+# Distributor Node
+https://<your.cool.url>/distributor/* {
+        log {
+                output stdout
+        }
+        route /distributor/* {
+                uri strip_prefix /distributor
+                reverse_proxy localhost:3334
+        }
+        header /distributor {
+                Access-Control-Allow-Methods "GET, PUT, HEAD, OPTIONS, POST"
+                Access-Control-Allow-Headers "GET, PUT, HEAD, OPTIONS, POST"
+        }
+}
+```
+# Check
+Now you can check if you configured correctly, with:
+```
+$ caddy validate ~/Caddyfile
+# Which should return:
+--
+...
+Valid configuration
+--
+# You can now run caddy with:
+$ caddy run --config /root/Caddyfile
+# Which should return something like:
+--
+...
+... [INFO] [<your.cool.url>] The server validated our request
+... [INFO] [<your.cool.url>] acme: Validations succeeded; requesting certificates
+... [INFO] [<your.cool.url>] Server responded with a certificate.
+... [INFO][<your.cool.url>] Certificate obtained successfully
+... [INFO][<your.cool.url>] Obtain: Releasing lock
+```
+
+# Run caddy as a service
+To ensure high uptime, it's best to set the system up as a `service`.
+
+Example file below:
+
+```
+$ nano /etc/systemd/system/caddy.service
+
+# Modify, and paste in everything below the dashed line
+---
+[Unit]
+Description=Caddy
+Documentation=https://caddyserver.com/docs/
+After=network.target
+
+[Service]
+User=root
+ExecStart=/usr/bin/caddy run --config /root/Caddyfile
+ExecReload=/usr/bin/caddy reload --config /root/Caddyfile
+TimeoutStopSec=5s
+LimitNOFILE=1048576
+LimitNPROC=512
+PrivateTmp=true
+ProtectSystem=full
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+
+[Install]
+WantedBy=multi-user.target
+```
+Save and exit. Close `caddy` if it's still running, then:
+```
+$ systemctl start caddy
+# If everything works, the command returns silently. Verify with:
+$ systemctl status caddy
+# Which should produce something similar to the previous output.
+# To have caddy start automatically at reboot:
+$ systemctl enable caddy
+# If you want to stop caddy:
+$ systemctl stop caddy
+# If you want to edit your Caddyfile, edit it, then run:
+$ caddy reload --config /root/Caddyfile
+```

+ 18 - 0
working-groups/distributors/NodeSteup/hosting/caddy.service

@@ -0,0 +1,18 @@
+[Unit]
+Description=Caddy
+Documentation=https://caddyserver.com/docs/
+After=network.target
+
+[Service]
+User=root
+ExecStart=/usr/bin/caddy run --config /root/Caddyfile
+ExecReload=/usr/bin/caddy reload --config /root/Caddyfile
+TimeoutStopSec=5s
+LimitNOFILE=1048576
+LimitNPROC=512
+PrivateTmp=true
+ProtectSystem=full
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+
+[Install]
+WantedBy=multi-user.target

+ 167 - 0
working-groups/distributors/NodeSteup/joystream-node/README.md

@@ -0,0 +1,167 @@
+# Release
+
+Find the latest release [here](https://github.com/Joystream/joystream/releases)
+
+
+# Setup 
+
+## Run Node
+
+```
+$ cd ~/
+$ mkdir joystream-node
+$ cd joystream-node
+# 64 bit debian based Linux
+$ wget https://github.com/Joystream/joystream/releases/download/v10.7.1/joystream-node-6.7.0-bdec855-x86_64-linux-gnu.tar.gz
+$ tar -vxf joystream-node-6.7.0-bdec855-x86_64-linux-gnu.tar.gz
+$ mv joystream-node /usr/local/bin/
+$ wget https://github.com/Joystream/joystream/releases/download/v10.5.0/joy-testnet-6.json
+# Test that it's working.
+$ joystream-node --chain joy-testnet-6.json --pruning archive --validator
+```
+- If you want your node to have a non-random identifier, add the flag:
+  - `--name <nodename>`
+- If you want to get a more verbose log output, add the flag:
+  - `--log runtime,txpool,transaction-pool,trace=sync`
+
+Your node should now start syncing with the blockchain. The output should look like this:
+```
+Joystream Node
+  version "Version"-"your_OS"
+  by Joystream contributors, 2019-2020
+Chain specification: "Joystream Version"
+Node name: "nodename"
+Roles: AUTHORITY
+Initializing Genesis block/state (state: "0x…", header-hash: "0x…")
+Loading GRANDPA authority set from genesis on what appears to be first startup.
+Loaded block-time = BabeConfiguration { slot_duration: 6000, epoch_length: 100, c: (1, 4), genesis_authorities: ...
+Creating empty BABE epoch changes on what appears to be first startup.
+Highest known block at #0
+Local node identity is: "peer id"
+Starting BABE Authorship worker
+Discovered new external address for our node: /ip4/"IP"/tcp/30333/p2p/"peer id"
+New epoch 0 launching at block ...
+...
+...
+Syncing, target=#"block_height" ("n" peers), best: #"synced_height" ("hash_of_synced_tip"), finalized #0 ("hash_of_finalized_tip"), ⬇ "download_speed"kiB/s ⬆ "upload_speed"kiB/s
+```
+From the last line, notice `target=#"block_height"` and `best: #"synced_height"`.
+When `target` is the same as `best`, your node is fully synced!
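+
+You can also query the sync state over RPC; a minimal sketch, assuming the node's default HTTP RPC port `9933` is reachable locally:
+
+```
+$ curl -s -H "Content-Type: application/json" \
+    -d '{"id":1, "jsonrpc":"2.0", "method":"system_health", "params":[]}' \
+    http://localhost:9933
+# "isSyncing":false in the result means the node has caught up.
+```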
+
+**Keep the terminal window open**, or better yet, [run it as a service](#configure-the-service).
+
+
+## Configure the service
+
+Either as root, or a user with sudo privileges. If the latter, add `sudo` before commands.
+
+```
+$ cd /etc/systemd/system
+# you can choose whatever name you like, but the name has to end with .service
+$ touch joystream-node.service
+# open the file with your favorite editor (I use nano below)
+$ nano joystream-node.service
+```
+
+#### Example with user joystream
+
+The example below assumes the following:
+- You have set up a user `joystream` to run the node
+
+```
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=joystream
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target
+```
+
+#### Example as root
+
+The example below assumes the following:
+- You are running the node as the `root` user
+
+```
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Starting the service
+
+You can add/remove any `flags` as long as you remember to include `\` for every line but the last. Also note that systemd is very sensitive to syntax, so make sure there are no extra spaces before or after the `\`.
+
+After you are happy with your configuration:
+
+```
+$ systemctl daemon-reload
+# this is only strictly necessary if you changed the .service file after it was loaded, but chances are you will need to use it once or twice.
+# if your node is still running, now is the time to kill it.
+$ systemctl start joystream-node
+# if everything is correctly configured, this command will not return anything.
+# To verify it's running:
+$ systemctl status joystream-node
+# this will only show the last few lines. To see the latest 100 entries (and follow as new are added)
+$ journalctl -n 100 -f -u joystream-node
+
+# To make the service start automatically at boot:
+$ systemctl enable joystream-node
+```
+You can restart the service with:
+- `systemctl restart joystream-node`
+
+If you want to change something (or just stop), run:
+- `systemctl stop joystream-node`
+
+before you make the changes. After changing:
+
+```
+$ systemctl daemon-reload
+$ systemctl start joystream-node
+```
+
+### Errors
+
+If you make a mistake somewhere, `systemctl start joystream-node` will prompt:
+```
+Failed to start joystream-node.service: Unit joystream-node.service is not loaded properly: Invalid argument.
+See system logs and 'systemctl status joystream-node.service' for details.
+```
+Follow the instructions, and see if anything looks wrong. Correct it, then:
+
+```
+$ systemctl daemon-reload
+$ systemctl start joystream-node
+```

+ 20 - 0
working-groups/distributors/NodeSteup/joystream-node/joystream-node.service

@@ -0,0 +1,20 @@
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target

+ 66 - 0
working-groups/distributors/NodeSteup/monitoring/README.md

@@ -0,0 +1,66 @@
+# Configure Joystream Node
+
+
+
+## Distributor Node
+
+In `joystream/distributor-node/config.yml`, configure the section below:
+```
+logs:
+  elastic:
+    level: info
+    endpoint: https://<elasticsearch.your.cool.url>
+```
+
+## Storage Node
+
+In `/etc/systemd/system/storage-node.service`, add the `-e` flag:
+```
+ExecStart=/root/.volta/bin/yarn storage-node server \
+        -u ws://localhost:9944 \
+        -w <workerId> \
+        -o 3333 \
+        -l /<root/joystream-storage>/log/ \
+        -d /<root/joystream-storage> \
+        -q http://localhost:8081/graphql \
+        -p <Password> \
+        -k /root/keys/storage-role-key.json \
+        -e https://<elasticsearch.your.cool.url> \
+        -s
+```
+## Configure Packetbeat and Metricbeat
+
+```
+git clone https://github.com/yasiryagi/elasticsearch-docker.git
+cd elasticsearch-docker/client/
+```
+
+Edit config/packetbeat/packetbeat.yml:
+* name:  Node Name 
+* tags:
+  - SP : Storage Provider
+  - DP : Distributor Provider
+  - PB : Packetbeat
+  - MB : Metricbeat
+* packetbeat.interfaces.device: Your device interface
+* hosts : The elasticsearch host
+* username: Username provided by the admin
+* password: Password provided by the admin
+
+
+Edit config/metricbeat/metricbeat.yml:
+* name:  Node Name
+* tags:
+  - SP : Storage Provider
+  - DP : Distributor Provider
+  - PB : Packetbeat
+  - MB : Metricbeat
+* hosts : The elasticsearch host
+* username: Username provided by the admin
+* password: Password provided by the admin
+
+## Start the containers
+
+```
+docker-compose up -d
+```
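+
+To verify that the beats are running (container names as set in the monitoring `docker-compose.yml`):
+
+```
+docker-compose ps
+docker logs -f --tail 100 packetbeat
+docker logs -f --tail 100 metricbeat
+```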

+ 70 - 0
working-groups/distributors/NodeSteup/monitoring/config/metricbeat/metricbeat.yml

@@ -0,0 +1,70 @@
+name: "YYAGI2_SP"
+tags: ["SP", "YYAGI", "MB"]
+
+metricbeat.config.modules:
+  path: ${path.config}/modules.d/*.yml
+  reload.enabled: false
+
+metricbeat.autodiscover:
+  providers:
+    - type: docker
+      hints.enabled: true
+
+metricbeat.modules:
+- module: system
+  metricsets:
+    - "cpu"
+    - "load"
+    - "filesystem"
+    - "fsstat"
+    - "memory"
+    - "network"
+    - "process"
+    - "core"
+    - "diskio"
+    - "socket"
+  period: 5s
+  enabled: true
+  processes: ['.*']
+  cpu.metrics:  ["percentages"]
+  core.metrics: ["percentages"]
+  process.cgroups.enabled: false
+
+#metricbeat.modules:
+#- module: docker
+#  metricsets:
+#    - "container"
+#    - "cpu"
+#    - "diskio"
+#    - "healthcheck"
+#    - "info"
+#    #- "image"
+#    - "memory"
+#    - "network"
+#  hosts: ["unix:///var/run/docker.sock"]
+#  period: 10s
+#  enabled: true
+ 
+hostfs: "/hostfs"
+
+output.elasticsearch:
+  hosts: ["https://elastic.joystreamstats.live:443"]
+  protocol: "https"
+  pipeline: geoip-info
+  #api_key: "MHJ6bVhvRUJybmlHM2NJSC16cmw6OE1CMmdpeXFSaTZjR1B0M3cxeFBfQQ=="
+  username: "beats_admin"
+  password: "*****"
+  # username: beats_system
+  # Read PW from packetbeat.keystore
+  # password: "*****"
+  # ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+
+#setup.kibana:
+  #host: "https://kibana.yyagi.cloud"
+  #username: "beats_admin"
+  #password: "L3tM31n!"
+#   protocol: "https"
+#   ssl.enabled: false
+#   ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+# 
+xpack.monitoring.enabled: true

+ 37 - 0
working-groups/distributors/NodeSteup/monitoring/config/packetbeat/packetbeat.yml

@@ -0,0 +1,37 @@
+
+name: "YYAGI2_SP"
+tags: ["SP", "YYAGI", "PB"]
+
+packetbeat.interfaces.device: enp41s0
+
+packetbeat.flows:
+  timeout: 30s
+  period: 10s
+
+packetbeat.protocols:
+- type: dns
+  ports: [53]
+
+- type: http
+  ports: [80, 8080, 8000, 5000, 8002]
+
+- type: tls
+  ports: [443, 993, 995, 5223, 8443, 8883, 9243]
+        
+
+output.elasticsearch:
+  hosts: ["https://elastic.joystreamstats.live:443"]
+  protocol: "https"
+  pipeline: geoip-info
+  #api_key: "MHJ6bVhvRUJybmlHM2NJSC16cmw6OE1CMmdpeXFSaTZjR1B0M3cxeFBfQQ=="
+  username: "beats_admin"
+  password: "*********"
+#setup.kibana:
+  #host: "https://kibana.yyagi.cloud"
+  #username: "beats_admin"
+  #password: "L3tM31n!"
+#   protocol: "https"
+#   ssl.enabled: false
+#   ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+
+xpack.monitoring.enabled: true

+ 42 - 0
working-groups/distributors/NodeSteup/monitoring/docker-compose.yml

@@ -0,0 +1,42 @@
+version: '3.6'
+services:
+  packetbeat:
+    image: docker.elastic.co/beats/packetbeat:8.2.3
+    container_name: packetbeat
+    cap_add: ['NET_RAW', 'NET_ADMIN']
+    network_mode: host
+    user: root
+    command: --strict.perms=false -e -E output.elasticsearch.hosts="https://elastic.joystreamstats.live:443" # -e flag to log to stderr and disable syslog/file output
+    secrets:
+      - source: packetbeat.yml
+        target: /usr/share/packetbeat/packetbeat.yml
+    healthcheck:
+      test: packetbeat test config
+      interval: 30s
+      timeout: 15s
+      retries: 5
+  metricbeat:
+    image: docker.elastic.co/beats/metricbeat:8.2.3
+    container_name: metricbeat
+    network_mode: host
+    user: root
+    command: --strict.perms=false -system.hostfs=/hostfs -e -E output.elasticsearch.hosts="https://elastic.joystreamstats.live:443" # -e flag to log to stderr and disable syslog/file output
+    volumes:
+      - /proc:/hostfs/proc:ro
+      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
+      - /:/hostfs:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+    secrets:
+      - source: metricbeat.yml
+        target: /usr/share/metricbeat/metricbeat.yml
+    healthcheck:
+      test: metricbeat test config
+      interval: 30s
+      timeout: 15s
+      retries: 5
+secrets:  
+  packetbeat.yml:
+    file: ./config/packetbeat/packetbeat.yml
+  metricbeat.yml:
+    file: ./config/metricbeat/metricbeat.yml

+ 19 - 127
working-groups/distributors/query-node/README.md → working-groups/distributors/NodeSteup/query-node/README.md

@@ -1,29 +1,30 @@
-Table of Contents
-==
-<!-- TOC START min:1 max:3 link:true asterisk:false update:true -->
-- [Overview](#overview)
-  - [Get Started](#get-started)
-    - [Clone the Repo](#clone-the-repo)
-    - [Install a Newer Version of `docker-compose`](#install-a-newer-version-of-docker-compose)
-    - [Deploy](#deploy)
-    - [Confirm Everything is Working](#confirm-everything-is-working)
-  - [Setup Hosting](#setup-hosting)
-    - [Caddy](#caddy)
-    - [Run caddy as a service](#run-caddy-as-a-service)
-  - [Troubleshooting](#troubleshooting)
-<!-- TOC END -->
 
 # Overview
-This guide will help you deploy a working query-node.
 
 The following assumptions apply:
 1. You are `root`, and [cloning](#clone-the-repo) to `~/joystream`
 2. in most cases, you will want to run your own `joystream-node` on the same device, and this guide assumes you are.
 
-For instructions on how to set this up, go [here](/roles/validators). Note that you can disregard all the parts about keys before applying, and just install the software so it is ready to go. You do need to run with `--pruning=archive` though, and be synced past the blockheight you are exporting the db from.
+For instructions on how to set this up, go [here](../joystream-node/README.md). Note that you can disregard all the parts about keys before applying, and just install the software so it is ready to go. You do need to run with `--pruning=archive` though, and be synced past the blockheight you are exporting the db from.
 
 ## Get Started
-You don't need to host your query-node, but if you're connecting to your own node, docker will not "find" it on localhost. So first, go to [Setup Hosting](#setup-hosting).
+You don't need to host your query-node, but if you're connecting to your own node, docker will not "find" it on localhost. So first, go to [Setup Hosting](../hosting/README.md).
+
+### Install Docker
+```
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+echo \
+  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
+  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+sudo apt-get update
+sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
+```
+
+### Install Docker-Compose
+```
+sudo curl -L https://github.com/docker/compose/releases/download/v2.5.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
+sudo chmod +x /usr/local/bin/docker-compose
+```
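+
+To confirm both installed correctly:
+
+```
+docker --version
+docker-compose --version
+```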
 
 ### Clone the Repo
 If you haven't already, clone the `Joystream/joystream` (mono)repo:
@@ -156,115 +157,6 @@ curl 'localhost:8081/graphql' -H 'Accept-Encoding: gzip, deflate, br' -H 'Conten
 Finally, if you included hosting of the `Query-node`, you can access the graphql server at `https://<your.cool.url>/server/graphql`.
 Note that you'd need to change `https://<your.cool.url>/graphql` address to `https://<your.cool.url>/server/graphql` as well for the server to be reached.
 
-## Setup Hosting
-In order to allow for users to upload and download, you have to setup hosting, with an actual domain as both Chrome and Firefox requires `https://`. If you have a "spare" domain or subdomain you don't mind using for this purpose, go to your domain registrar and point your domain to the IP you want. If you don't, you will need to purchase one.
-
-### Caddy
-To configure SSL-certificates the easiest option is to use [caddy](https://caddyserver.com/), but feel free to take a different approach. Note that if you are using caddy for commercial use, you need to acquire a license. Please check their terms and make sure you comply with what is considered personal use.
-
-For the best setup, you should use the "official" [documentation](https://caddyserver.com/docs/).
-
-The instructions below are for Caddy v2.4.6:
-```
-$ wget https://github.com/caddyserver/caddy/releases/download/v2.4.6/caddy_2.4.6_linux_amd64.tar.gz
-$ tar -vxf caddy_2.4.6_linux_amd64.tar.gz
-$ mv caddy /usr/bin/
-# Test that it's working:
-$ caddy version
-```
-
-Configure the `Caddyfile`:
-**Note** you only "need" the `Joystream-node`, whereas the `Query-node` will have you host the a (public) graphql server.
-```
-$ nano ~/Caddyfile
-# Modify, and paste in everything below the stapled line
----
-# Joystream-node
-wss://<your.cool.url>/rpc {
-	reverse_proxy localhost:9944
-}
-
-# Query-node
-https://<your.cool.url>{
-	log {
-		output stdout
-	}
-	route /server/* {
-		uri strip_prefix /server
-		reverse_proxy localhost:8081
-	}
-	route /gateway/* {
-		uri strip_prefix /gateway
-		reverse_proxy localhost:4000
-	}
-	route /@apollographql/* {
-		reverse_proxy localhost:8081
-	}
-}
-```
-
-Now you can check if you configured correctly, with:
-```
-$ caddy validate ~/Caddyfile
-# Which should return:
---
-...
-Valid configuration
---
-# You can now run caddy with:
-$ caddy run --config /root/Caddyfile
-# Which should return something like:
---
-...
-... [INFO] [<your.cool.url>] The server validated our request
-... [INFO] [<your.cool.url>] acme: Validations succeeded; requesting certificates
-... [INFO] [<your.cool.url>] Server responded with a certificate.
-... [INFO][<your.cool.url>] Certificate obtained successfully
-... [INFO][<your.cool.url>] Obtain: Releasing lock
-```
-
-### Run caddy as a service
-To ensure high uptime, it's best to set the system up as a `service`.
-
-Example file below:
-
-```
-$ nano /etc/systemd/system/caddy.service
-
-# Modify, and paste in everything below the stapled line
----
-[Unit]
-Description=Caddy
-Documentation=https://caddyserver.com/docs/
-After=network.target
-
-[Service]
-User=root
-ExecStart=/usr/bin/caddy run --config /root/Caddyfile
-ExecReload=/usr/bin/caddy reload --config /root/Caddyfile
-TimeoutStopSec=5s
-LimitNOFILE=1048576
-LimitNPROC=512
-PrivateTmp=true
-ProtectSystem=full
-AmbientCapabilities=CAP_NET_BIND_SERVICE
-
-[Install]
-WantedBy=multi-user.target
-```
-Save and exit. Close `caddy` if it's still running, then:
-```
-$ systemctl start caddy
-# If everything works, you should get an output. Verify with:
-$ systemctl status caddy
-# Which should produce something similar to the previous output.
-# To have caddy start automatically at reboot:
-$ systemctl enable caddy
-# If you want to stop caddy:
-$ systemctl stop caddy
-# If you want to edit your Caddfile, edit it, then run:
-$ caddy reload
-```
 
-## Troubleshooting
+### Troubleshooting
 Make sure your joystream node accepts connections from your domain; use the `--rpc-cors` flag, e.g. `--rpc-cors all`.

+ 197 - 0
working-groups/storage-group/NodeSteup/README.md

@@ -0,0 +1,197 @@
+# Overview
+
+The instructions below will assume you are running as `root`.
+Note that this has been tested on a fresh image of `Ubuntu 20.04 LTS`.
+
+Please note that unless there are any openings for new storage providers (which you can check in [Pioneer](https://dao.joystream.org/#/working-groups/storage)), you will not be able to join.
+
+# Upgrade
+
+To upgrade the node, [go here for the upgrade guide](./Upgrade/README.md)
+
+# Minimum Requirements
+
+## Hardware
+- CPU: 6 Core
+- RAM: 16G
+- Storage: 2T SSD
+- Bandwidth: 1G
+
+## Location
+No more than 15% of the current operators should be clustered in the same region.
+
+# Initial setup
+
+
+
+```
+$ apt-get update && apt-get upgrade -y
+$ apt install vim git curl -y
+```
+
+## Setup hosting
+[Go here for the installation guide](./hosting/README.md)
+## Setup joystream-node
+[Go here for the installation guide](./joystream-node/README.md)
+## Setup Query Node
+[Go here for the installation guide](./query-node/README.md)
+
+
+# Applying for a Storage Provider opening
+
+Click [here](https://dao.joystream.org/#/working-groups/storage) to open the `Pioneer app` in your browser. 
+
+Make sure to save the `5YourJoyMemberAddress.json` file. This key will require tokens to be used as stake for the `Storage Provider` application (`application stake`) and further stake may be required if you are selected for the role (`role stake`).
+During this process you will be provided with a role key, which will be made available to download in the format `5YourStorageRoleKey.json`. If you set a password for this key, remember it :)
+
+The next steps (below) will only apply if you are a successful applicant.
+
+# Setup and Configure the Storage Node
+
+## Keys
+- Member key
+- Role key
+- Operator key: in the codebase it's referred to as the transactor key.
+
+```
+$ mkdir ~/keys/
+$ cd ~/joystream/
+$ yarn joystream-cli account:create
+
+# give it the name:
+  storage-operator-key
+
+# this guide assumes you don't set a password
+
+$ cat /root/.local/share/joystream-cli/accounts/storage-operator-key.json
+```
+This will show you the address:
+`..."address":"5StorageOperatorKey"...`
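+
+If you have `jq` installed (an assumption; `apt install jq -y` if not), you can pull out just the address:
+
+```
+$ jq -r .address /root/.local/share/joystream-cli/accounts/storage-operator-key.json
+```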
+
+
+```
+# Go to the directory where you saved your <5YourStorageRoleKey.json>, then rename it to
+
+storage-role-key.json
+# Copy the role key to your keys directory; use the command below if you are copying from another server.
+$ scp storage-role-key.json <user>@<your.vps.ip.address>:/root/keys/
+```
+
+**Make sure your [Joystream full node](#Setup-joystream-node) and [Query Node](#Setup-Query-Node) are fully synced before you move to the next step(s)!**
+
+## Install and Set Up the Node
+> If you have done this on the query node setup, you can skip this section.
+
+```
+$ git clone https://github.com/Joystream/joystream.git
+$ cd joystream
+$ ./setup.sh
+# this requires you to start a new session. If you are using a VPS:
+$ exit
+$ ssh user@ipOrURL
+$ cd joystream
+$ ./build-packages.sh
+$ yarn storage-node --help
+```
+
+## Accept Invitation
+Once hired, the Storage Lead will invite you to a "bucket". Before this is done, you will not be able to participate. Assuming:
+- your Worker ID is `<workerId>`
+- the Lead has invited you to bucket `<bucketId>`
+
+```
+$ cd ~/joystream
+$ yarn run storage-node operator:accept-invitation -i <bucketId> -w <workerId> -t <5StorageOperatorKey> --password=YourKeyPassword -k /root/keys/storage-role-key.json
+
+# With bucketId=1, workerId=2, and operator key 5StorageOperatorKey that would be:
+# yarn run storage-node operator:accept-invitation -i 1 -w 2 -t 5StorageOperatorKey -k /root/keys/storage-role-key.json
+```
+
+## Set Metadata
+When you have accepted the invitation, you have to set metadata for your node. If your VPS is in Frankfurt, Germany:
+
+```
+$ nano metadata.json
+# Modify, and paste in everything below the dashed line
+---
+{
+  "endpoint": "https://<your.cool.url>/storage/",
+  "location": {
+    "countryCode": "DE",
+    "city": "Frankfurt",
+    "coordinates": {
+      "latitude": 52,
+      "longitude": 15
+    }
+  },
+  "extra": "<Node ID>: <Location>, Xcores, <RAM>G, <SSD>G "
+}
+```
+Where:
+- The location should be correct; you can check where your IP resolves with [IPLocation](https://www.iplocation.net/)
+- `extra` is not that critical. It can be useful to add some info on your max capacity.
+
+Then, set it on-chain with:
+```
+$ cd ~/joystream
+$ yarn run storage-node operator:set-metadata -i <bucketId> -w <workerId> -j /path/to/metadata.json -k /root/keys/storage-role-key.json
+
+# With bucketId=1, workerId=2, that would be:
+# yarn run storage-node operator:set-metadata -i 1 -w 2 -j /path/to/metadata.json -k /root/keys/storage-role-key.json
+```
+
+## Deploy the Storage Node
+First, create a `systemd` file. Example file below:
+
+```
+$ nano /etc/systemd/system/storage-node.service
+
+# Modify, and paste in everything below the dashed line
+---
+[Unit]
+Description=Joystream Storage Node
+After=network.target joystream-node.service
+
+[Service]
+User=root
+WorkingDirectory=/root/joystream/
+LimitNOFILE=10000
+ExecStart=/root/.volta/bin/yarn storage-node server \
+        -u ws://localhost:9944 \
+        -w <workerId> \
+        -o 3333 \
+        -l /<root/joystream-storage>/log/ \
+        -d /<root/joystream-storage> \
+        -q http://localhost:8081/graphql \
+        -p <Password> \
+        -k /root/keys/storage-role-key.json \
+        -s
+Restart=on-failure
+StartLimitInterval=600
+
+[Install]
+WantedBy=multi-user.target
+```
+
+If you (like most) have needed to buy an extra storage volume, remember to set `-d /path/to/volume`.
+Save and exit.
+
+```
+$ systemctl start storage-node
+# If everything works, the command returns silently. Verify with:
+$ journalctl -f -n 200 -u storage-node
+
+# If it looks ok, it probably is :)
+---
+
+# To have Colossus start automatically at reboot:
+$ systemctl enable storage-node
+# If you want to stop the storage node, either to edit the storage-node.service file or some other reason:
+$ systemctl stop storage-node
+```
+
+### Verify everything is working
+
+In your browser, try:
+`https://<your.cool.url>/storage/api/v1/state/data`.
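+
+Or from the command line:
+
+```
+$ curl https://<your.cool.url>/storage/api/v1/state/data
+```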
+

+ 55 - 0
working-groups/storage-group/NodeSteup/Upgrade/README.md

@@ -0,0 +1,55 @@
+# Upgrade 
+## Go to the Joystream root directory
+```
+cd joystream
+```
+## Back up your config files 
+```
+cp .env /someBackupLocation  # save the old parameters
+cp <root to folder>/storage-node/metadata.json /someBackupLocation
+```
+## Stop the storage service 
+```
+systemctl stop storage-node.service
+```
+## Stop the query node
+```
+./query-node/kill.sh
+```
+## Get the latest and greatest repo
+```
+git stash
+git pull
+```
+
+## Apply the .env settings - you can reuse values from the old backup file
+
+## Run the setup script
+```
+ ./setup.sh
+```
+## Log out and log back in
+
+## Build
+
+```
+./build-packages.sh 
+```
+## Start the services
+```
+query-node/start.sh
+systemctl start storage-node.service
+```
+
+## Verify
+### Verify the indexer and processor
+```
+docker ps
+docker logs -f -n 100 indexer
+docker logs -f -n 100 processor
+```
+
+### Verify the storage node
+```
+https://<your.cool.url>/storage/api/v1/state/data
+```
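+
+Or from the command line, e.g.:
+
+```
+curl https://<your.cool.url>/storage/api/v1/state/data
+```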

+ 46 - 0
working-groups/storage-group/NodeSteup/hosting/Caddyfile

@@ -0,0 +1,46 @@
+# Joystream-node
+wss://<your.cool.url>/rpc {
+        reverse_proxy localhost:9944
+}
+
+
+# Query-node
+https://<your.cool.url> {
+        log {
+                output stdout
+        }
+        route /server/* {
+                uri strip_prefix /server
+                reverse_proxy localhost:8081
+        }
+        route /graphql {
+                reverse_proxy localhost:8081
+        }
+        route /graphql/* {
+                reverse_proxy localhost:8081
+        }
+        route /gateway/* {
+                uri strip_prefix /gateway
+                reverse_proxy localhost:4000
+        }
+        route /@apollographql/* {
+                reverse_proxy localhost:8081
+        }
+}
+# Storage Node
+https://<your.cool.url>/storage/* {
+        log {
+                output stdout
+        }
+        route /storage/* {
+                uri strip_prefix /storage
+                reverse_proxy localhost:3333
+        }
+        header /storage/api/v1/ {
+                Access-Control-Allow-Methods "GET, PUT, HEAD, OPTIONS"
+                Access-Control-Allow-Headers "GET, PUT, HEAD, OPTIONS"
+        }
+        request_body {
+                max_size 10GB
+        }
+}

+ 137 - 0
working-groups/storage-group/NodeSteup/hosting/README.md

@@ -0,0 +1,137 @@
+
+In order to allow users to upload and download, you have to set up hosting with an actual domain, as both Chrome and Firefox require `https://`. If you have a "spare" domain or subdomain you don't mind using for this purpose, go to your domain registrar and point it to the IP you want. If you don't, you will need to purchase one.
+
+# Caddy
+To configure SSL certificates, the easiest option is to use [caddy](https://caddyserver.com/), but feel free to take a different approach. Note that if you are using Caddy commercially, you need to acquire a license. Please check their terms and make sure you comply with what is considered personal use.
+
+For the best setup, you should use the "official" [documentation](https://caddyserver.com/docs/).
+
+The instructions below are for Caddy v2.4.6:
+```
+$ wget https://github.com/caddyserver/caddy/releases/download/v2.4.6/caddy_2.4.6_linux_amd64.tar.gz
+$ tar -vxf caddy_2.4.6_linux_amd64.tar.gz
+$ mv caddy /usr/bin/
+# Test that it's working:
+$ caddy version
+```
+
+# Configure the `Caddyfile`:
+```
+$ nano ~/Caddyfile
+# Modify, and paste in everything below the dashed line
+---
+# Joystream-node
+wss://<your.cool.url>/rpc {
+        reverse_proxy localhost:9944
+}
+
+# Query-node
+https://<your.cool.url> {
+        log {
+                output stdout
+        }
+        route /server/* {
+                uri strip_prefix /server
+                reverse_proxy localhost:8081
+        }
+        route /graphql {
+                reverse_proxy localhost:8081
+        }
+        route /graphql/* {
+                reverse_proxy localhost:8081
+        }
+        route /gateway/* {
+                uri strip_prefix /gateway
+                reverse_proxy localhost:4000
+        }
+        route /@apollographql/* {
+                reverse_proxy localhost:8081
+        }
+}
+# Storage Node
+https://<your.cool.url>/storage/* {
+        log {
+                output stdout
+        }
+        route /storage/* {
+                uri strip_prefix /storage
+                reverse_proxy localhost:3333
+        }
+        header /storage/api/v1/ {
+                Access-Control-Allow-Methods "GET, PUT, HEAD, OPTIONS"
+                Access-Control-Allow-Headers "GET, PUT, HEAD, OPTIONS"
+        }
+        request_body {
+                max_size 10GB
+        }
+}
+
+
+```
+# Check
+Now you can check if you configured correctly, with:
+```
+$ caddy validate ~/Caddyfile
+# Which should return:
+--
+...
+Valid configuration
+--
+# You can now run caddy with:
+$ caddy run --config /root/Caddyfile
+# Which should return something like:
+--
+...
+... [INFO] [<your.cool.url>] The server validated our request
+... [INFO] [<your.cool.url>] acme: Validations succeeded; requesting certificates
+... [INFO] [<your.cool.url>] Server responded with a certificate.
+... [INFO][<your.cool.url>] Certificate obtained successfully
+... [INFO][<your.cool.url>] Obtain: Releasing lock
+```
+
+# Run caddy as a service
+To ensure high uptime, it's best to set the system up as a `service`.
+
+Example file below:
+
+```
+$ nano /etc/systemd/system/caddy.service
+
+# Modify, and paste in everything below the dashed line
+---
+[Unit]
+Description=Caddy
+Documentation=https://caddyserver.com/docs/
+After=network.target
+
+[Service]
+User=root
+ExecStart=/usr/bin/caddy run --config /root/Caddyfile
+ExecReload=/usr/bin/caddy reload --config /root/Caddyfile
+TimeoutStopSec=5s
+LimitNOFILE=1048576
+LimitNPROC=512
+PrivateTmp=true
+ProtectSystem=full
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+
+[Install]
+WantedBy=multi-user.target
+```
+Save and exit. Close `caddy` if it's still running, then:
+```
+$ systemctl start caddy
+# If everything works, the command returns silently. Verify with:
+$ systemctl status caddy
+# Which should produce something similar to the previous output.
+# To have caddy start automatically at reboot:
+$ systemctl enable caddy
+# If you want to stop caddy:
+$ systemctl stop caddy
+# If you want to edit your Caddyfile, edit it, then run:
+$ caddy reload --config /root/Caddyfile
+```

+ 18 - 0
working-groups/storage-group/NodeSteup/hosting/caddy.service

@@ -0,0 +1,18 @@
+[Unit]
+Description=Caddy
+Documentation=https://caddyserver.com/docs/
+After=network.target
+
+[Service]
+User=root
+ExecStart=/usr/bin/caddy run --config /root/Caddyfile
+ExecReload=/usr/bin/caddy reload --config /root/Caddyfile
+TimeoutStopSec=5s
+LimitNOFILE=1048576
+LimitNPROC=512
+PrivateTmp=true
+ProtectSystem=full
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+
+[Install]
+WantedBy=multi-user.target

+ 167 - 0
working-groups/storage-group/NodeSteup/joystream-node/README.md

@@ -0,0 +1,167 @@
+# Release
+
+Find the latest release [here](https://github.com/Joystream/joystream/releases)
+
+
+# Setup 
+
+## Run Node
+
+```
+$ cd ~/
+$ mkdir joystream-node
+$ cd joystream-node
+# 64 bit debian based Linux
+$ wget https://github.com/Joystream/joystream/releases/download/v10.7.1/joystream-node-6.7.0-bdec855-x86_64-linux-gnu.tar.gz
+$ tar -vxf joystream-node-6.7.0-bdec855-x86_64-linux-gnu.tar.gz
+$ mv joystream-node /usr/local/bin/
+$ wget https://github.com/Joystream/joystream/releases/download/v10.5.0/joy-testnet-6.json
+# Test that it's working.
+$ joystream-node --chain joy-testnet-6.json --pruning archive --validator
+```
+- If you want your node to have a non-random identifier, add the flag:
+  - `--name <nodename>`
+- If you want to get a more verbose log output, add the flag:
+  - `--log runtime,txpool,transaction-pool,trace=sync`
+
+Your node should now start syncing with the blockchain. The output should look like this:
+```
+Joystream Node
+  version "Version"-"your_OS"
+  by Joystream contributors, 2019-2020
+Chain specification: "Joystream Version"
+Node name: "nodename"
+Roles: AUTHORITY
+Initializing Genesis block/state (state: "0x…", header-hash: "0x…")
+Loading GRANDPA authority set from genesis on what appears to be first startup.
+Loaded block-time = BabeConfiguration { slot_duration: 6000, epoch_length: 100, c: (1, 4), genesis_authorities: ...
+Creating empty BABE epoch changes on what appears to be first startup.
+Highest known block at #0
+Local node identity is: "peer id"
+Starting BABE Authorship worker
+Discovered new external address for our node: /ip4/"IP"/tcp/30333/p2p/"peer id"
+New epoch 0 launching at block ...
+...
+...
+Syncing, target=#"block_height" ("n" peers), best: #"synced_height" ("hash_of_synced_tip"), finalized #0 ("hash_of_finalized_tip"), ⬇ "download_speed"kiB/s ⬆ "upload_speed"kiB/s
+```
+From the last line, notice `target=#"block_height"` and `best: #"synced_height"`.
+When `target` is the same as `best`, your node is fully synced!
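+
+You can also query the sync state over RPC; a minimal sketch, assuming the node's default HTTP RPC port `9933` is reachable locally:
+
+```
+$ curl -s -H "Content-Type: application/json" \
+    -d '{"id":1, "jsonrpc":"2.0", "method":"system_health", "params":[]}' \
+    http://localhost:9933
+# "isSyncing":false in the result means the node has caught up.
+```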
+
+**Keep the terminal window open**, or better yet, [run it as a service](#configure-the-service).
+
+
+## Configure the service
+
+Either as root, or a user with sudo privileges. If the latter, add `sudo` before commands.
+
+```
+$ cd /etc/systemd/system
+# you can choose whatever name you like, but the name has to end with .service
+$ touch joystream-node.service
+# open the file with your favorite editor (I use nano below)
+$ nano joystream-node.service
+```
+
+#### Example with user joystream
+
+The example below assumes the following:
+- You have set up a user `joystream` to run the node
+
+```
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=joystream
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target
+```
+
+#### Example as root
+
+The example below assumes the following:
+- You are running the node as the `root` user
+
+```
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Starting the service
+
+You can add/remove any `flags` as long as you remember to include `\` for every line but the last. Also note that systemd is very sensitive to syntax, so make sure there are no extra spaces before or after the `\`.
+
+After you are happy with your configuration:
+
+```
+$ systemctl daemon-reload
+# this is only strictly necessary if you changed the .service file after it was loaded, but chances are you will need to use it once or twice.
+# if your node is still running, now is the time to kill it.
+$ systemctl start joystream-node
+# if everything is correctly configured, this command will not return anything.
+# To verify it's running:
+$ systemctl status joystream-node
+# this will only show the last few lines. To see the latest 100 entries (and follow as new are added)
+$ journalctl -n 100 -f -u joystream-node
+
+# To make the service start automatically at boot:
+$ systemctl enable joystream-node
+```
+You can restart the service with:
+- `systemctl restart joystream-node`
+
+If you want to change something (or just stop), run:
+- `systemctl stop joystream-node`
+
+before you make the changes. After changing:
+
+```
+$ systemctl daemon-reload
+$ systemctl start joystream-node
+```
+
+### Errors
+
+If you make a mistake somewhere, `systemctl start joystream-node` will prompt:
+```
+Failed to start joystream-node.service: Unit joystream-node.service is not loaded properly: Invalid argument.
+See system logs and 'systemctl status joystream-node.service' for details.
+```
+Follow the instructions, and see if anything looks wrong. Correct it, then:
+
+```
+$ systemctl daemon-reload
+$ systemctl start joystream-node
+```

+ 20 - 0
working-groups/storage-group/NodeSteup/joystream-node/joystream-node.service

@@ -0,0 +1,20 @@
+[Unit]
+Description=Joystream Node
+After=network.target
+
+[Service]
+Type=simple
+User=root
+WorkingDirectory=/<path to work directory>/joystream-node/
+ExecStart=joystream-node \
+        --chain /<path to work directory>/joystream-node/joy-testnet-6.json \
+        --pruning archive \
+        --validator \
+        --name <memberId-memberHandle> \
+        --log runtime,txpool,transaction-pool,trace=sync
+Restart=on-failure
+RestartSec=3
+LimitNOFILE=10000
+
+[Install]
+WantedBy=multi-user.target

+ 51 - 0
working-groups/storage-group/NodeSteup/monitoring/README.md

@@ -0,0 +1,51 @@
+# Configure Monitoring
+
+## Storage Node
+
+In `/etc/systemd/system/storage-node.service`, add the `-e` flag pointing at the Elasticsearch endpoint:
+```
+ExecStart=/root/.volta/bin/yarn storage-node server \
+        -u ws://localhost:9944 \
+        -w <workerId> \
+        -o 3333 \
+        -l /<root/joystream-storage>/log/ \
+        -d /<root/joystream-storage> \
+        -q http://localhost:8081/graphql \
+        -p <Password> \
+        -k /root/keys/storage-role-key.json \
+        -e https://<elasticsearch.your.cool.url> \
+        -s
+```
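+
+After editing the unit file, reload systemd and restart the storage node so the new flag takes effect:
+
+```
+systemctl daemon-reload
+systemctl restart storage-node
+# confirm the node came back up with the new flags
+systemctl status storage-node
+```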
+## Configure Packetbeat and Metricbeat
+
+```
+git clone https://github.com/yasiryagi/elasticsearch-docker.git
+cd elasticsearch-docker/client/
+```
+
+Edit config/packetbeat/packetbeat.yml:
+* name: NodeName_SP
+* tags: ["SP", "Your Node Name", "PB"]
+* packetbeat.interfaces.device: your network interface (e.g. eth0)
+* hosts: the Elasticsearch host
+* username: username provided by the admin
+* password: password provided by the admin
+
+
+Edit config/metricbeat/metricbeat.yml:
+* name: NodeName_SP
+* tags: ["SP", "Your Node Name", "MB"]
+* hosts: the Elasticsearch host
+* username: username provided by the admin
+* password: password provided by the admin
+
+## Start the containers
+
+```
+docker-compose up -d
+```
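+
+To verify that both beats came up healthy (a quick sanity check):
+
+```
+docker ps --filter name=packetbeat --filter name=metricbeat
+docker logs --tail 50 packetbeat
+docker logs --tail 50 metricbeat
+```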

+ 70 - 0
working-groups/storage-group/NodeSteup/monitoring/config/metricbeat/metricbeat.yml

@@ -0,0 +1,70 @@
+name: "YYAGI2_SP"
+tags: ["SP", "YYAGI", "MB"]
+
+metricbeat.config.modules:
+  path: ${path.config}/modules.d/*.yml
+  reload.enabled: false
+
+metricbeat.autodiscover:
+  providers:
+    - type: docker
+      hints.enabled: true
+
+metricbeat.modules:
+- module: system
+  metricsets:
+    - "cpu"
+    - "load"
+    - "filesystem"
+    - "fsstat"
+    - "memory"
+    - "network"
+    - "process"
+    - "core"
+    - "diskio"
+    - "socket"
+  period: 5s
+  enabled: true
+  processes: ['.*']
+  cpu.metrics:  ["percentages"]
+  core.metrics: ["percentages"]
+  process.cgroups.enabled: false
+
+#metricbeat.modules:
+#- module: docker
+#  metricsets:
+#    - "container"
+#    - "cpu"
+#    - "diskio"
+#    - "healthcheck"
+#    - "info"
+#    #- "image"
+#    - "memory"
+#    - "network"
+#  hosts: ["unix:///var/run/docker.sock"]
+#  period: 10s
+#  enabled: true
+ 
+hostfs: "/hostfs"
+
+output.elasticsearch:
+  hosts: ["https://elastic.joystreamstats.live:443"]
+  protocol: "https"
+  pipeline: geoip-info
+  #api_key: "MHJ6bVhvRUJybmlHM2NJSC16cmw6OE1CMmdpeXFSaTZjR1B0M3cxeFBfQQ=="
+  username: "beats_admin"
+  password: "*****"
+  # username: beats_system
+  # Read PW from packetbeat.keystore
+  # password: "*****"
+  # ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+
+#setup.kibana:
+  #host: "https://kibana.yyagi.cloud"
+  #username: "beats_admin"
+  #password: "L3tM31n!"
+#   protocol: "https"
+#   ssl.enabled: false
+#   ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+# 
+xpack.monitoring.enabled: true

+ 37 - 0
working-groups/storage-group/NodeSteup/monitoring/config/packetbeat/packetbeat.yml

@@ -0,0 +1,37 @@
+
+name: "YYAGI2_SP"
+tags: ["SP", "YYAGI", "PB"]
+
+packetbeat.interfaces.device: enp41s0
+
+packetbeat.flows:
+  timeout: 30s
+  period: 10s
+
+packetbeat.protocols:
+- type: dns
+  ports: [53]
+
+- type: http
+  ports: [80, 8080, 8000, 5000, 8002]
+
+- type: tls
+  ports: [443, 993, 995, 5223, 8443, 8883, 9243]
+        
+
+output.elasticsearch:
+  hosts: ["https://elastic.joystreamstats.live:443"]
+  protocol: "https"
+  pipeline: geoip-info
+  #api_key: "MHJ6bVhvRUJybmlHM2NJSC16cmw6OE1CMmdpeXFSaTZjR1B0M3cxeFBfQQ=="
+  username: "beats_admin"
+  password: "*********"
+#setup.kibana:
+  #host: "https://kibana.yyagi.cloud"
+  #username: "beats_admin"
+  #password: "L3tM31n!"
+#   protocol: "https"
+#   ssl.enabled: false
+#   ssl.certificate_authorities: ["/usr/share/packetbeat/certs/ca/ca.crt"]
+
+xpack.monitoring.enabled: true

+ 42 - 0
working-groups/storage-group/NodeSteup/monitoring/docker-compose.yml

@@ -0,0 +1,42 @@
+version: '3.6'
+services:
+  packetbeat:
+    image: docker.elastic.co/beats/packetbeat:8.2.3
+    container_name: packetbeat
+    cap_add: ['NET_RAW', 'NET_ADMIN']
+    network_mode: host
+    user: root
+    # a service may only have one `command:` key; the -e flag logs to stderr and disables syslog/file output
+    command: --strict.perms=false -e -E output.elasticsearch.hosts="https://elastic.joystreamstats.live:443"
+    secrets:
+      - source: packetbeat.yml
+        target: /usr/share/packetbeat/packetbeat.yml
+    healthcheck:
+      test: packetbeat test config
+      interval: 30s
+      timeout: 15s
+      retries: 5
+  metricbeat:
+    image: docker.elastic.co/beats/metricbeat:8.2.3
+    container_name: metricbeat
+    network_mode: host
+    user: root
+    command: --strict.perms=false -system.hostfs=/hostfs -e -E output.elasticsearch.hosts="https://elastic.joystreamstats.live:443" # -e flag to log to stderr and disable syslog/file output
+    volumes:
+      - /proc:/hostfs/proc:ro
+      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
+      - /:/hostfs:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+    secrets:
+      - source: metricbeat.yml
+        target: /usr/share/metricbeat/metricbeat.yml
+    healthcheck:
+      test: metricbeat test config
+      interval: 30s
+      timeout: 15s
+      retries: 5
+secrets:  
+  packetbeat.yml:
+    file: ./config/packetbeat/packetbeat.yml
+  metricbeat.yml:
+    file: ./config/metricbeat/metricbeat.yml

+ 158 - 0
working-groups/storage-group/NodeSteup/query-node/README.md

@@ -0,0 +1,158 @@
+
+# Overview
+
+The following assumptions apply:
+1. You are `root`, and [cloning](#clone-the-repo) to `~/joystream`
+2. In most cases you will want to run your own `joystream-node` on the same device, and this guide assumes you do.
+
+For instructions on how to set this up, go [here](../joystream-node/README.md). Note that you can disregard all the parts about keys before applying, and just install the software so it is ready to go. You do need to run with `--pruning=archive` though, and be synced past the blockheight you are exporting the db from.
+
+## Get Started
+You don't need to host your query node publicly, but if you're connecting to your own `joystream-node`, docker will not "find" it on localhost. So first, go to [Setup Hosting](../hosting/README.md).
+
+### Install Docker
+```
+sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+sudo echo \
+  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
+  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+sudo apt-get update
+sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
+```
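+
+To confirm the installation succeeded (a quick check):
+
+```
+docker --version
+systemctl status docker
+```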
+
+### Install a Newer Version of `docker-compose`
+The package manager `apt-get` installs an old version of `docker-compose` that doesn't accept the `.env` file format we use. We recommend removing the old one and installing the new one:
+
+```
+$ docker-compose version
+# if you see `1.29.2`, skip to Deploy
+$ cd ~/
+$ apt-get remove docker-compose
+$ curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
+$ chmod +x /usr/local/bin/docker-compose
+$ ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
+```
+
+### Clone the Repo
+If you haven't already, clone the `Joystream/joystream` (mono)repo:
+
+```
+$ git clone https://github.com/Joystream/joystream.git
+$ cd joystream
+$ ./setup.sh
+# this requires you to start a new session. If you are using a VPS:
+$ exit
+#
+# Login back again
+$ cd joystream
+$ ./build-packages.sh
+```
+The last command will take a while...
+
+### Deploy
+
+#### Set the Environment
+First, point the query node at your hosted RPC endpoint in the `.env` file:
+```
+$ cd ~/joystream
+$ nano .env
+# Change to make, where "old" line is commented out:
+---
+#JOYSTREAM_NODE_WS=ws://joystream-node:9944/
+JOYSTREAM_NODE_WS=wss://<your.cool.url>/rpc
+```
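+
+Optionally, check that the endpoint is reachable before deploying (this assumes you have `wscat` installed, e.g. via `npm install -g wscat`):
+
+```
+wscat -c wss://<your.cool.url>/rpc
+# a successful connection prints "Connected"; exit with Ctrl+C
+```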
+
+#### Deploy - Easy
+Assuming you installed the newer version of [docker-compose](#install-a-newer-version-of-docker-compose):
+```
+$ cd ~/joystream
+$ query-node/start.sh
+```
+And you should be done!
+
+Go and [confirm everything is working](#confirm-everything-is-working)
+
+#### Deploy - Elaborate
+If you want to use a version of `docker-compose` older than 1.29.0:
+
+First, you need to edit the `.env` file some more:
+```
+$ cd ~/joystream
+$ nano .env
+# Change to make, where "old" line is commented out:
+---
+#COLOSSUS_QUERY_NODE_URL=http://graphql-server:${GRAPHQL_SERVER_PORT}/graphql
+COLOSSUS_QUERY_NODE_URL=http://graphql-server:4000/graphql
+
+#DISTRIBUTOR_QUERY_NODE_URL=http://graphql-server:${GRAPHQL_SERVER_PORT}/graphql
+DISTRIBUTOR_QUERY_NODE_URL=http://graphql-server:4000/graphql
+
+#PROCESSOR_INDEXER_GATEWAY=http://hydra-indexer-gateway:${HYDRA_INDEXER_GATEWAY_PORT}/graphql
+PROCESSOR_INDEXER_GATEWAY=http://hydra-indexer-gateway:4000/graphql
+```
+
+You are now ready to run a script that deploys the query node with `docker`.
+```
+$ cd ~/joystream
+$ nano deploy-qn.sh
+# paste in below:
+---
+#!/usr/bin/env bash
+set -e
+
+SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
+cd $SCRIPT_PATH
+
+# Bring up db
+docker-compose up -d db
+
+# Make sure we use dev config for db migrations (prevents "Cannot create database..." and some other errors)
+docker-compose run --rm --entrypoint sh graphql-server -c "yarn workspace query-node config:dev"
+# Migrate the databases
+docker-compose run --rm --entrypoint sh graphql-server -c "yarn workspace query-node-root db:prepare"
+docker-compose run --rm --entrypoint sh graphql-server -c "yarn workspace query-node-root db:migrate"
+
+# Start indexer and gateway
+docker-compose up -d indexer
+docker-compose up -d hydra-indexer-gateway
+
+# Start processor and graphql server
+docker-compose up -d processor
+docker-compose up -d graphql-server
+```
+Then, deploy!
+```
+$ chmod +x deploy-qn.sh
+$ ./deploy-qn.sh
+```
+
+
+### Confirm Everything is Working
+```
+# Are all the 6 processes running?
+$ docker ps
+# Should return: graphql-server, processor, hydra-indexer-gateway, indexer, redis, db
+
+# Is it syncing?
+$ docker logs -f -n 100 processor
+# this should get all the blocks between 4191207 and the current height. It's fast :)
+
+$ docker logs -f -n 100 indexer
+# this should parse all the "interesting" events that the processor processes.
+```
+
+You can do a spotcheck to see if you have the correct storageBuckets:
+```
+curl 'localhost:8081/graphql' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: localhost:8081/graphql' --data-binary '{"query":"query {\n  storageBuckets {\n    id\n  }\n}"}' --compressed
+```
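+
+If you have `jq` installed, a slimmer variant of the same spotcheck is easier to read:
+
+```
+curl -s 'localhost:8081/graphql' -H 'Content-Type: application/json' \
+  --data-binary '{"query":"query { storageBuckets { id } }"}' | jq '.data.storageBuckets'
+```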
+
+Finally, if you included hosting of the query node, you can access the graphql server at `https://<your.cool.url>/server/graphql`.
+Note that you need to use `https://<your.cool.url>/server/graphql` (rather than `https://<your.cool.url>/graphql`) for the server to be reached.
+
+
+### Troubleshooting
+Make sure your joystream node accepts connections from your domain: use the `--rpc-cors` flag, e.g. `--rpc-cors all`.

+ 23 - 0
working-groups/storage-group/NodeSteup/storage-node.service

@@ -0,0 +1,23 @@
+[Unit]
+Description=Joystream Storage Node
+After=network.target joystream-node.service
+
+[Service]
+User=root
+WorkingDirectory=/root/joystream/
+LimitNOFILE=10000
+ExecStart=/root/.volta/bin/yarn storage-node server \
+        -u ws://localhost:9944 \
+        -w 7 \
+        -o 3333 \
+        -l /root/.joystream-storage/log/ \
+        -d /root/.joystream-storage \
+        -q http://localhost:8081/graphql \
+        -p <password> \
+        -k /root/keys/storage-role-key.json \
+        -s
+Restart=on-failure
+StartLimitInterval=600
+
+[Install]
+WantedBy=multi-user.target

+ 74 - 0
working-groups/storage-group/SOP/README.md

@@ -0,0 +1,74 @@
+# Standard Operating Procedure
+## Run a Query Node
+All storage nodes are required to:
+- Run a QN locally on the storage node.
+- Provide a link to their QN GraphQL endpoint.
+
+**Failure will result in rewards reduced by 25%**
+
+## Elasticsearch
+All storage nodes are required to:
+- Configure the storage node to send metrics to ES
+- Configure metricbeat to send metrics to ES
+- Configure packetbeat to send metrics to ES
+
+**Failure will result in rewards reduced by 50%**
+
+## Metadata format
+All storage nodes are required to configure metadata as per the guide.
+
+**Failure will result in rewards reduced by 25%**
+
+## Keep disk usage below 80%
+All storage nodes are required to keep disk usage below 80%.
+
+**Failure will result in:**
+- **Removal of all bags**
+- **Rewards reduced by 75%**
+
+## Uptime
+
+All storage nodes are required to maintain:
+- Monthly uptime of 98%
+- Weekly uptime of 95%
+
+**Failure will result in:**
+- **Removal of all bags**
+- **Rewards reduced by 50%**
+
+Exception: downtime arranged with the Lead in advance is excluded.
+
+## Downtime (hours)
+**Failure will result in:**
+- **1 hr: Disable new bags**
+- **3 hrs: Remove all bags**
+- **24 hrs: Disable rewards till the node is back in service and verified**
+- **120 hrs: Evict worker**
+
+Exception: downtime arranged with the Lead in advance is excluded.
+
+## Node not accepting uploads (hours)
+**Failure will result in:**
+- **1 hr: Disable new bags**
+- **3 hrs: Remove all bags**
+- **24 hrs: Disable rewards till the node is back in service and verified**
+- **120 hrs: Evict worker**
+
+Exception: downtime arranged with the Lead in advance is excluded.
+
+## Node performance
+
+## Comply with new requirements from the council
+All storage nodes are required to comply with any requirement from the council within 7 days.
+
+**Failure will result in rewards reduced by 25%**
+
+## WG improvement
+Each worker needs to develop a tool or a procedure that improves the group every month.
+
+**Not complying 3 times in a row may result in eviction.**
+
+# Ref
+- All reward reductions are relative to the full salary.
+- Reductions can continue down to a minimum reward of 1 JOY.
+- Reward reductions are reverted once the issue is addressed.

+ 27 - 0
working-groups/storage-group/leader/Budget.md

@@ -0,0 +1,27 @@
+# Budget
+
+[GraphQL](https://graphql-console.subsquid.io/?graphql_api=https://joystream2.yyagi.cloud/graphql)
+
+```
+{
+  electedCouncils {
+    electedAtBlock
+    endedAtBlock
+    endedAtTime
+    electedAtTime
+  }
+}
+```
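+
+The same query can also be run from the command line (a sketch using the endpoint linked above; assumes `jq` is installed):
+
+```
+curl -s 'https://joystream2.yyagi.cloud/graphql' \
+     -H 'Content-Type: application/json' \
+     --data-binary '{"query":"{ electedCouncils { electedAtBlock endedAtBlock electedAtTime endedAtTime } }"}' | jq .
+```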
+
+
+Go [here](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.joystream.org%3A9944#/explorer) and collect the block hashes for the beginning and end of the period.
+
+![image](https://user-images.githubusercontent.com/4862448/189320726-3cd78bbf-ac5f-4c1a-9cdc-dd652c6449ba.png)
+
+
+Go [here](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.joystream.org%3A9944#/chainstate):
+- StorageWorkingGroup
+- Budget
+- Paste the hash
+- Press "+
+![image](https://user-images.githubusercontent.com/4862448/189320458-ef1c163a-2c3a-449d-9937-ec585e8bcea7.png)

+ 58 - 0
working-groups/storage-group/leader/Commands.md

@@ -0,0 +1,58 @@
+# Overview
+```
+yarn joystream-cli working-groups:overview -g=storageProviders
+```
+
+## Disable Bucket
+```
+yarn storage-node leader:update-bucket-status -i 8 -s off -k /root/keys/storage-role-key.json -p xxxxxx
+```
+## Remove bags from Bucket
+```
+for i in $(seq 2000 2710) ; do
+    yarn storage-node leader:update-bag -i dynamic:channel:$i -k /root/keys/storage-role-key.json -r 8 -p xxx
+done
+
+or 
+
+curl 'https://joystream2.yyagi.cloud/graphql'  \
+     -s \
+     -H 'Accept-Encoding: gzip, deflate, br'  \
+     -H 'Content-Type: application/json' \
+     -H 'Accept: application/json'  \
+     -H 'Connection: keep-alive'  \
+     -H 'DNT: 1'  \
+     -H 'Origin: https://joystream2.yyagi.cloud'  \
+     --data-binary '{"query":"query MyQuery { storageBuckets(where: {id_eq: 2}) {  bags { id } } }\n"}'   2>&1\
+     | jq . | grep dynamic | sed 's/"id"://g;s/"//g;s/ //g' > bags_file
+     
+for i in $(cat ~/bags_file) ; do
+    yarn storage-node leader:update-bag -i $i -k /root/keys/storage-role-key.json -r 1 -p xxxxx
+done
+```
+
+## Delete Bucket
+Can only delete empty buckets
+```
+yarn storage-node leader:remove-operator -i 8 -k /root/keys/storage-role-key.json -p xxxxx
+yarn storage-node leader:delete-bucket -i 8 -k /root/keys/storage-role-key.json -p xxxxx
+```
+
+## Evict worker 
+Make sure the bucket is empty and deleted
+```
+ yarn joystream-cli working-groups:evictWorker 7 --group=storageProviders
+ ```
+ 
+## Remove/add Bag to Bucket
+
+```
+yarn storage-node leader:update-bag -i dynamic:channel:2705 -k /root/keys/storage-role-key.json -r 17 -p xxxxxxx
+yarn storage-node leader:update-bag -i dynamic:channel:2706 -k /root/keys/storage-role-key.json -a 17 -p xxxxxxx
+
+```
+
+## Change rewards
+```
+yarn joystream-cli working-groups:updateWorkerReward 8 6 --group=storageProviders
+```

+ 99 - 0
working-groups/storage-group/leader/GraphQL.md

@@ -0,0 +1,99 @@
+# GraphQL
+> URL: https://graphql-console.subsquid.io/?graphql_api=https://orion.joystream.org/graphql
+ 
+## Failed uploads
+ 
+ 
+ ```
+ {
+  storageDataObjects(limit: 3000, offset: 0, where: {isAccepted_eq: false, createdAt_gt: "2022-09-02T00:00:00.000Z", createdAt_lt: "2022-09-04T03:00:54.000Z"}) {
+    createdAt
+    id
+    storageBag {
+      storageBuckets {
+        id
+      }
+      id
+    }
+  }
+}
+```
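+
+Any query on this page can also be run from the command line with `curl` (the same pattern used elsewhere in these docs). For example, a condensed failed-uploads check:
+
+```
+curl -s 'https://orion.joystream.org/graphql' \
+     -H 'Content-Type: application/json' \
+     --data-binary '{"query":"{ storageDataObjects(limit: 10, where: {isAccepted_eq: false}) { id createdAt } }"}' | jq .
+```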
+
+## Bucket
+
+### Bags in Storage Bucket
+```
+{
+  storageBuckets(where: {id_eq: "6"}) {
+    id
+    bags {
+      id
+    }
+  }
+}
+```
+
+### Bucket endpoint and worker
+
+```
+{
+  storageBuckets(where: {id_eq: "2"}) {
+    id
+    acceptingNewBags
+    operatorMetadata {
+      nodeEndpoint
+      extra
+    }
+    operatorStatus {
+      ... on StorageBucketOperatorStatusActive {
+        __typename
+        workerId
+      }
+    }
+  }
+}
+```
+
+
+## Bags
+
+### Buckets associated with a bag
+```
+{
+  storageBags(where: {id_eq: "dynamic:channel:2000"}) {
+    storageBuckets {
+      id
+    }
+  }
+}
+```
+## Worker 
+
+```
+{
+  workers(where: {groupId_eq: "storageWorkingGroup", id_eq: "storageWorkingGroup-16"}) {
+    id
+    groupId
+    membershipId
+    membership {
+      handle
+    }
+    status {
+      ... on WorkerStatusActive {
+        phantom
+      }
+      ... on WorkerStatusLeaving {
+        __typename
+      }
+      ... on WorkerStatusLeft {
+        __typename
+      }
+      ... on WorkerStatusTerminated {
+        __typename
+      }
+    }
+  }
+}
+```

+ 40 - 0
working-groups/storage-group/leader/Initial_ setup_commands.md

@@ -0,0 +1,40 @@
+## Initial settings
+### global settings
+```
+yarn joystream-cli api:setQueryNodeEndpoint
+yarn joystream-cli account:import  --backupFilePath /path/to/lead-key.json
+yarn joystream-cli working-groups:setDefaultGroup -g storageProviders
+```
+> Set "global" storage limits to 2000 GB and 200000 files:
+```
+yarn storage-node leader:update-voucher-limits -s 2000000000000 -o 20000 -k /root/keys/storage-role-key.json
+```
+
+> Update/set the dynamic bag policy:
+```
+yarn storage-node leader:update-dynamic-bag-policy -t Channel -n 5 -k /root/keys/storage-role-key.json
+```
+
+> Update the bag limit:
+```
+yarn storage-node leader:update-bag-limit -l 10 -k /root/keys/storage-role-key.json
+```
+
+
+
+### hiring
+```
+yarn joystream-cli working-groups:createOpening -o ~/joystream_working_dir/Strorage_WG_Worker.json
+yarn joystream-cli working-groups:openings
+yarn joystream-cli working-groups:opening --id 1
+yarn joystream-cli working-groups:application 2
+yarn joystream-cli working-groups:application 3
+yarn joystream-cli working-groups:fillOpening --openingId 1 --applicationIds 2 --applicationIds 3
+yarn joystream-cli working-groups:overview
+yarn joystream-cli working-groups:createOpening -i ~/joystream_working_dir/Strorage_WG_Worker.json
+```
+### bucket mgmt
+```
+yarn storage-node leader:create-bucket -i 18 -n 20000 -s 1500000000000 -k /root/keys/storage-role-key.json -p xxxxxx
+yarn storage-node leader:update-bucket-status -i 18 -s off -k /root/keys/storage-role-key.json -p xxxxxx
+```

+ 49 - 0
working-groups/storage-group/leader/README.md

@@ -0,0 +1,49 @@
+## Overview 
+
+```
+root@joystream2 ~/joystream # yarn joystream-cli working-groups:overview -g=storageProviders
+yarn run v1.22.15
+$ /root/joystream/node_modules/.bin/joystream-cli working-groups:overview -g=storageProviders
+Initializing the query node connection (http://localhost:8081/graphql)...... done
+Initializing the api connection (ws://localhost:9944)...... done
+Current Group: storageProviders
+
+___________________ Group lead ___________________
+
+Member id:                    3098
+Member handle:                yyagi
+Role account:                 5CPhA9RkPnykLxGy1JVy1Y4x9YPSYq9iq48NAMzoKXX9mLG8
+
+____________________ Members _____________________
+
+Worker id     Member id     Member handle       Stake             Reward          Missed reward     Role account
+18            2098          0x2bc               40.0000 kJOY      10.0000 JOY     0                 5DX9Vv...g2bDSx
+16            3098          yyagi               100.0000 kJOY     21.0000 JOY     0                 5CPhA9...X9mLG8     ⭐ 🔑
+15            2137          kalpakci            1.0000 kJOY       6.0000 JOY      0                 5FxJ7z...y35Gcw
+14            3082          abramaria_          1.0000 kJOY       6.0000 JOY      0                 5EnCg3...GuNqK1
+13            2141          plycho              1.0000 kJOY       6.0000 JOY      0                 5FHAEH...rCeVH8
+11            458           sieemma             1.0000 kJOY       6.0000 JOY      0                 5FWcV6...B2uuSS
+10            1019          razumv              1.0000 kJOY       6.0000 JOY      0                 5G27n1...VWaopF
+9             3886          Craci_BwareLabs     1.0000 kJOY       6.0000 JOY      0                 5HDhUE...hAGoGZ
+8             3085          mmx1916             1.0000 kJOY       6.0000 JOY      0                 5HR3fa...V7LAi1
+5             1541          godshunter          1.0000 kJOY       6.0000 JOY      0                 5EPUmj...qDvGFv
+4             515           l1dev               1.0000 kJOY       6.0000 JOY      0                 5DFb8X...HiJTe8
+3             2130          maxlevush           1.0000 kJOY       6.0000 JOY      0                 5Gy9ei...rHQWPe
+2             2574          alexznet            1.0000 kJOY       6.0000 JOY      0                 5FpdYU...hoRFjD
+1             1305          joystreamstats      1.0000 kJOY       1.0000 JOY      0                 5C8BEb...vXhUP3
+
+_____________________ Legend _____________________
+
+⭐ - Leader
+🔑 - Role key available in CLI
+Done in 4.54s.
+```
+
+
+
+
+# Ref 
+> Storage dir :/root/.joystream-storage/
+
+> GraphQL URL : https://graphql-console.subsquid.io/?graphql_api=https://orion.joystream.org/graphql
+

+ 27 - 0
working-groups/storage-group/leader/Upload Test procedure.md

@@ -0,0 +1,27 @@
+Needed:
+- URL: https://play.joystream.org/studio/videos
+- Test bag IDs: YYAGI: 2705, 2706
+- Test bag IDs: 0x2bc: 2222, 2223, 2227
+- Or create a new channel in Atlas
+
+
+Check the buckets assigned to the bag:
+```
+{
+  storageBags(limit: 3000, offset: 0, where: {id_eq: "dynamic:channel:2705"}) {
+    storageBuckets {
+      id
+    }
+    id
+  }
+}
+```
+
+Remove all buckets from the bag, leaving only the bucket to be tested:
+
+```
+yarn storage-node leader:update-bag -i dynamic:channel:2705 -k /root/keys/storage-role-key.json -r 17 -p xxxxxxx
+yarn storage-node leader:update-bag -i dynamic:channel:2705 -k /root/keys/storage-role-key.json -r 14 -p xxxxxxx
+yarn storage-node leader:update-bag -i dynamic:channel:2706 -k /root/keys/storage-role-key.json -a 16 -p xxxxxxx
+
+```

+ 59 - 0
working-groups/storage-group/leader/mainnet/mainnet-questions-for-Leads.md

@@ -0,0 +1,59 @@
+# Mainnet Questions For Leads
+
+## Questions 
+
+1. Write a step-by-step guide on how the work within your WG will be organized in the first term of the mainnet:
+   1. Council will make a proposal for hiring a Lead
+   2. Lead will be hired
+   3. Lead will deploy his node
+   4. Lead will hire a deputy
+   5. Lead will hire a monitoring node worker (if ES is still in use)
+   6. Lead will hire workers
+
+[Commands](https://github.com/yasiryagi/community-repo/blob/master/working-groups/storage-group/leader/Initial_%20setup_commands.md)
+
+2. Review [Gitbook Scores](https://joystream.gitbook.io/testnet-workspace/testnet/council-period-scoring/general-working-group-score) for your Work Group and tell:
+* Which scores should be excluded?
+  The below should be excluded:
+  - BOUNTY_SCORE
+  - Catastrophic Errors
+    * No openings
+  - Cancel the plan and summary report. Simplify and automate the report.
+* Which scores should be added?
+  NA
+3. For each position of your WG in the mainnet, incl. the Lead:
+- Develop the Job Descriptions and provide links to them.
+  Please find the job descriptions here: [Job descriptions](https://github.com/yasiryagi/community-repo/tree/master/working-groups/storage-group/leader/opening)
+- Propose a stake amount in USD (and JOY) required by application for each position.
+  See the job descriptions above.
+4. If there will be less seats in the WG compared to the number of seats in the current testnet, which people will you hire? Propose your criteria.
+
+This should be purely based on skills and abilities, including:
+ - Linux and devops skill level
+ - Machine specification
+ - Willingness to commit time and expertise to develop tools and procedures, i.e. add value.
+
+Preference will be given to testnet contributors who showed the above skills, reliability and commitment.
+ 
+5. How would you manage people who didn't find their place in your WG, but who are still quite experienced?
+- Create a waiting list.
+- Establish a stringent practice of performance measurement to weed out non-performing members.
+- Establish a practice of weekly activities to improve performance. Members who do not comply risk being replaced.
+6. Propose a forecast of _capacity_ utilization for your WG over time (in terms of capacity required over time / staff number required over time / overall budget required over time). For example, for the Storage WG _capacity_ will be the total storage space available across all servers.
+
+The calculation below takes its assumptions from [here](https://gist.github.com/bedeho/1b231111596e25b215bc66f0bd0e7ccc); a small sanity check of the arithmetic follows the list.
+
+* Min per-worker storage is 5 TB, with a preference for 7 TB, max 10 TB.
+* The initial minimum number of workers should be `2.5 × the replication requirement`. Current replication is 5, giving a minimum of 5 × 2.5 ≈ 13 nodes and 65 TB to 130 TB of raw capacity, i.e. 13 TB to 26 TB once replication is taken into account.
+* At 5 PB, as per the document above, we will be looking at ~50 workers with 100 TB per worker, or ~500 workers with 10 TB per worker.
+* Salaries:
+  - Assuming initially:
+    * Worker: average 4 hrs weekly effort and a cost of $500 per worker.
+    * Deputy Lead: average 7.5 hrs weekly effort and a cost of $500 per worker.
+    * Lead: average 15 hrs weekly effort and a cost of $1000 per worker.
+  - Proposed initial salaries:
+    * Lead: $16k (8 JOY/block)
+    * Deputy Lead: $6k (3 JOY/block)
+    * Worker: $2k (1 JOY/block)
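+
+A minimal sketch of the capacity arithmetic above (bash; the numbers are the assumptions stated in the list, not measurements):
+
+```
+replication=5
+nodes=13            # ceil(2.5 * replication)
+min_tb=5; max_tb=10 # per-worker storage range
+echo "raw:    $((nodes * min_tb)) TB - $((nodes * max_tb)) TB"                              # 65 TB - 130 TB
+echo "usable: $((nodes * min_tb / replication)) TB - $((nodes * max_tb / replication)) TB"  # 13 TB - 26 TB
+```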
+

+ 61 - 0
working-groups/storage-group/leader/opening/Strorage_WG_Deputy_Leader.json

@@ -0,0 +1,61 @@
+{
+    "applicationDetails": "Storage WG Deputy Lead",
+    "title": "Storage WG Deputy Lead needed",
+    "shortDescription": "Joystream Video DAO (https://joystream.org) is actively looking for a Storage WG Deputy Lead",
+    "description": "[Joystream Video DAO](https://joystream.org) is actively looking for a Storage WG Deputy Lead to join our [Pioneer Governance App](https://dao.joystream.org) community Storage team.\n\n 
+    **Deputy Lead should be willing to learn how to manage the Lead's responsibilities, including:** \n\n
+    - Maximize the Storage WG scores \n\n 
+    - Ensure that Storage Providers are performing adequately \n\n 
+    - Analyze current storage capacities utilized within the Storage WG and develop steps and processes to improve and expand upon them \n\n  
+    - Prepares weekly reports/plans/summary in a format approved by Council and Jsgenesis \n\n
+    **Who are you:** \n\n
+    - Experience in managing a team and helping the team to reach its objectives. \n\n
+    - Experienced with how to setup and maintain blockchain nodes \n\n
+    - Have access to highly performant and reliable IT infrastructure (dedicated servers) with storage capacity of 2TB and more \n\n
+    - Skills: Linux, devops, bash, Docker/Docker-compose, nginx or Caddy, GraphQl, (nice to have prometheus/Grafana).",
+    "applicationFormQuestions": [{
+            "question": "Are you currently employed?"
+        },
+        {
+            "question": "What is your timezone?"
+        },
+        {
+            "question": "Are you based in US?"
+        },
+        {
+            "question": "Do you have experience with Linux? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with Docker? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with devops and automation? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with GraphQl? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with blockchain nodes? If yes, please describe it"
+        },
+        {
+            "question": "Tell us a bit about yourself"
+        },
+        {
+            "question": "Your availability, in hours per day"
+        },
+        {
+            "question": "Do you understand the compensation model?"
+        },
+        {
+            "question": "Tell us a bit more about your experience in IT"
+        },
+        {
+            "question": "Your Discord"
+        }
+    ],
+    "stakingPolicy": {
+        "amount": 500000,
+        "unstakingPeriod": 43201
+    },
+    "rewardPerBlock": 4
+}

+ 67 - 0
working-groups/storage-group/leader/opening/Strorage_WG_Leader.json.json

@@ -0,0 +1,67 @@
+{
+    "applicationDetails": "Storage WG Lead",
+    "title": "Storage WG Lead needed",
+    "shortDescription": "Joystream Video DAO (https://joystream.org) is actively looking for a Storage WG Lead",
+    "description": "[Joystream Video DAO](https://joystream.org) is actively looking for a Storage WG Deputy Lead to join our [Pioneer Governance App](https://dao.joystream.org) community Storage team.\n\n 
+    **Lead should be willing to learn how to manage Lead's responsibilities, including::** \n\n
+    - Maximize the Storage WG scores \n\n 
+    - Evalute the system capacity and react to the needs \n\n 
+    - Recruit new worker and expand on storage capacity as needed \n\n
+    - Evalute the system capacity and react to the needs \n\n 
+    - Ensure that Storage Providers are performing adequately \n\n 
+    - Analyze current storage capacities utilized within the Storage WG and develop steps and processes to improve and expand upon them \n\n  
+    - Ensure tight collaboration across other work groups, including Builders, to drive Storage initiatives \n\n
+    - Mentor and train Storage WG members, seek to continually improve processes DAO-wide
+    - Prepares weekly reports/plans/summary in a format approved by Council and Jsgenesis",
+    
+    **Who are you:** \n\n
+    - Experience in manging team and help the team to reach it's objectives.
+    - Experienced with how to setup and maintain blockchain nodes \n\n
+    - Have access to highly performant and reliable IT infrastructure (dedicated servers) with storage capacity of 2TB and more \n\n 
+    - Skills: Linux, devops, bash, Docker/Docker-compose, nginx or Caddy, GraphQl, (nice to have prometheus/Grafana).
+    "applicationFormQuestions": [{
+            "question": "Are you currently employed?"
+        },
+        {
+            "question": "What is your timezone?"
+        },
+        {
+            "question": "Are you based in US?"
+        },
+        {
+            "question": "Do you have experience with Linux? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with Docker? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with devops and automation? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with GraphQl? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with blockchain nodes? If yes, please describe it"
+        },
+        {
+            "question": "Tell us a bit about yourself"
+        },
+        {
+            "question": "Your availability, in hours per day"
+        },
+        {
+            "question": "Do you understand the compensation model?"
+        },
+        {
+            "question": "Tell us a bit more about your experience in IT"
+        },
+        {
+            "question": "Your Discord"
+        }
+    ],
+    "stakingPolicy": {
+        "amount": 1000000,
+        "unstakingPeriod": 43201
+    },
+    "rewardPerBlock": 10
+}

+ 60 - 0
working-groups/storage-group/leader/opening/Strorage_WG_Worker.json

@@ -0,0 +1,60 @@
+{
+    "applicationDetails": "Storage WG Worker",
+    "title": "Storage WG Worker needed",
+    "shortDescription": "Joystream Video DAO (https://joystream.org) is actively looking for a Storage WG Worker",
+    "description": "[Joystream Video DAO](https://joystream.org) is actively looking for a Storage WG Worker to join our [Pioneer Governance App](https://dao.joystream.org) community Storage team.\n\n
+    **Who are we:**\n\n
+    - Joystream is an open-source blockchain project that will be truly user-governed. \n\n 
+    - [Storage](https://www.notion.so/joystream/Storage-9dc5a16444934dc4bda08b596bc15375) is a small group of node maintainers helping to make Jsgenesis products work smoothly and reliably\n\n
+    **Who are you:** \n\n
+    - Experienced with how to setup and maintain blockchain nodes \n\n
+    - Have access to highly performant and reliable IT infrastructure (dedicated servers) with storage capacity of 2TB and more \n\n 
+    - Comply with the group's standard operating procedures. \n\n
+    - Skills: Linux, devops, bash, Docker/Docker-compose, nginx or Caddy, GraphQl, (nice to have prometheus/Grafana).
+In order to apply for the opening, every applicant should make a test task (bounty) https://www.notion.so/joystream/Bounty-Storage-WG-Entry-Level-74b13f81a32d4efb811f8259d6fbeee0 ",
+    "applicationFormQuestions": [{
+            "question": "Are you currently employed?"
+        },
+        {
+            "question": "What is your timezone?"
+        },
+        {
+            "question": "Are you based in US?"
+        },
+        {
+            "question": "Do you have experience with Linux? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with Docker? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with devops and automation? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with GraphQl? If yes, please describe it"
+        },
+        {
+            "question": "Do you have experience with blockchain nodes? If yes, please describe it"
+        },
+        {
+            "question": "Tell us a bit about youself"
+        },
+        {
+            "question": "Your availability, in hours per day"
+        },
+        {
+            "question": "Do you understand the compensation model?"
+        },
+        {
+            "question": "Tell us a bit more about your experience in IT"
+        },
+        {
+            "question": "Your Discord"
+        }
+    ],
+    "stakingPolicy": {
+        "amount": 100000,
+        "unstakingPeriod": 43201
+    },
+    "rewardPerBlock": 2
+}

+ 11 - 0
working-groups/storage-group/leader/tools/README.md

@@ -0,0 +1,11 @@
+# JoystreamStorageReport
+
+## Requirements
+
+```
+apt install pip -y
+pip install requests
+pip install tabulate
+pip install matplotlib
+pip install numpy
+```
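+
+Then run the scripts from this directory (they take no arguments; the query-node endpoint and operator list are configured at the top of each file):
+
+```
+python3 print_bags.py
+python3 report.py
+```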

+ 97 - 0
working-groups/storage-group/leader/tools/print_bags.py

@@ -0,0 +1,97 @@
+import requests
+import json
+#import csv
+from tabulate import tabulate
+from itertools import groupby
+from operator import itemgetter
+import numpy as np
+
+url = 'https://joystream2.yyagi.cloud/graphql'
+#url = 'https://query.joystream.org/graphql'
+file_name = "{}-12:00-objects.txt"
+file_server = "http://87.236.146.74:8000/"
+operators = [{'id':"0x2bc", 'bucket': 16},{'id':"alexznet", 'bucket': 2},{'id':"Craci_BwareLabs", 'bucket': 10},{'id':"GodsHunter", 'bucket': 6},{'id':"joystreamstats", 'bucket': 1},{'id':"l1dev", 'bucket': 4},{'id':"maxlevush", 'bucket': 3},{'id':"mmx1916", 'bucket': 9},{'id':"razumv", 'bucket': 11},{'id':"yyagi", 'bucket': 17}, {'id':"sieemma", 'bucket': 12} ]
+credential = {'username': '', 'password' :'joystream'}
+query_group = "storageWorkingGroup"
+max_buckets = 5  # number of bucket columns shown per bag in the printed table
+
+#def queryGrapql(query, url= 'https://query.joystream.org/graphql' ):
+def queryGrapql(query, url= 'https://joystream2.yyagi.cloud/graphql' ):
+  headers = {'Accept-Encoding': 'gzip, deflate, br', 'Content-Type': 'application/json',
+           'Accept': 'application/json',  'Connection': 'keep-alive', 'DNT': '1',
+                   'Origin': 'https://query.joystream.org' }
+  response = requests.post(url, headers=headers, json=query)
+  return response.json()['data']
+
+
+def get_bags(start_time='', end_time=''):
+  if start_time and end_time :
+    query = {"query": 'query MyQuery {{ storageBags( limit: 33000, offset: 0, where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{  storageBuckets  {{ id }} id deletedAt  }} }}'.format(start_time, end_time) }
+  else:
+    query = {"query": 'query MyQuery { storageBags( limit: 33000, offset: 0) {  storageBuckets  { id } id deletedAt  }} ' }
+    data = queryGrapql(query)['storageBags']
+    return data
+
+def get_objects(start_time='',end_time=''):
+  if start_time and end_time :
+    query_created = {"query":'query MyQuery {{ storageBags(limit: 33000, offset: 0,where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{ storageBuckets  {{ id }} id }} }}'.format(start_time, end_time) }
+  else :
+    query_created = {"query":'query MyQuery { storageBags(limit: 33000, offset: 0) { storageBuckets  { id } id } }' }
+  objects_created  = queryGrapql(query_created)['storageBags']
+  for obj in objects_created:
+    # bag ids look like "dynamic:channel:<n>"; keep only the channel number
+    obj['id'] = obj['id'].split(":")[2]
+  return objects_created
+
+
+def get_bags_nums(start_time = '', end_time = ''):
+  data_created, data_deleted = {},{}
+  if start_time and end_time :
+    query_created = {"query": 'query MyQuery {{ storageBags( limit: 33000, offset: 0, where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{  storageBuckets  {{ id }} id deletedAt }} }}'.format(start_time, end_time) }
+  else :
+    query_created = {"query": 'query MyQuery { storageBags(limit: 33000, offset:0) {  storageBuckets  { id } id deletedAt} }'}
+  data_created  = queryGrapql(query_created)['storageBags']
+  return data_created 
+
+def sort_bags_nums(data):
+  for record in data:
+    i = 0
+    for item in record['storageBuckets']:
+      record['storageBuckets_'+str(i)] = item['id']
+      i += 1
+    while i < max_buckets :
+      record['storageBuckets_'+str(i)] = 'X'
+      i += 1
+    record.pop('storageBuckets')
+  return data 
+  
+def print_bags():
+  data = get_bags()
+  data = sort_bags_nums(data)
+  print_table(data, master_key = 'id', sort_key = 'id')
+
+def print_table(data, master_key = '', sort_key = ''):
+    if sort_key:
+        data = sorted(data, key = itemgetter(sort_key), reverse=True)
+    headers = [*data[0]]
+    if master_key:
+        headers.append(master_key)
+        headers.remove(master_key)
+        headers = [master_key] + headers
+    table = []
+    for line in data:
+        row = []
+        if master_key:
+            value = line.pop(master_key)
+            row.append(value)
+        for key in [*line]:
+            row.append(line[key])
+        table.append(row)
+    try:
+        result = tabulate(table, headers, tablefmt="github")
+        print(result)
+    except UnicodeEncodeError:
+        result = tabulate(table, headers, tablefmt="grid")
+        print(result)
+
+if __name__ == '__main__':
+  print_bags()

+ 617 - 0
working-groups/storage-group/leader/tools/report.py

@@ -0,0 +1,617 @@
+import requests
+import json
+#import csv
+from tabulate import tabulate
+from itertools import groupby
+from operator import itemgetter
+import numpy as np
+import matplotlib.pyplot as plt
+
+url = 'https://joystream2.yyagi.cloud/graphql'
+#url = 'https://query.joystream.org/graphql'
+file_name = "{}-12:00-objects.txt"
+file_server = "http://87.236.146.74:8000/"
+operators = [{'id':"0x2bc", 'bucket': 0},{'id':"alexznet", 'bucket': 2},{'id':"Craci_BwareLabs", 'bucket': 10},{'id':"GodsHunter", 'bucket': 6},{'id':"joystreamstats", 'bucket': 1},{'id':"l1dev", 'bucket': 4},{'id':"maxlevush", 'bucket': 3},{'id':"mmx1916", 'bucket': 9},{'id':"razumv", 'bucket': 11},{'id':"yyagi", 'bucket': 8}, {'id':"sieemma", 'bucket': 12} ]
+credential = {'username': '', 'password' :'joystream'}
+query_group = "storageWorkingGroup"
+
+#def queryGrapql(query, url= 'https://query.joystream.org/graphql' ):
+def queryGrapql(query, url= 'https://joystream2.yyagi.cloud/graphql' ):
+  headers = {'Accept-Encoding': 'gzip, deflate, br', 'Content-Type': 'application/json',
+           'Accept': 'application/json',  'Connection': 'keep-alive', 'DNT': '1', 
+		   'Origin': 'https://query.joystream.org' }
+  response = requests.post(url, headers=headers, json=query)
+  return response.json()['data']
+
+def get_councils_period(url):
+  query = {"query":'query MyQuery{ electedCouncils { electedAtBlock endedAtBlock endedAtTime electedAtTime } }'}
+  data  = queryGrapql(query, url)['electedCouncils']
+  #data = sorted(data, key = itemgetter('endedAtBlock'), reverse=True)
+  if data[-1]['endedAtTime'] == None:
+    data.pop(-1)
+  data = sorted(data, key = itemgetter('endedAtBlock'))
+  period = len(data)
+  return data[-1], data[-2], data[0], period
+
+def get_backets(url, start_time = '', end_time = '', createdat = False, deletedat = False):
+  if start_time and end_time :
+    if createdat :
+      query = {"query":'query MyQuery {{  storageBuckets ( where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}){{    id    dataObjectsSize    dataObjectsSizeLimit    dataObjectsCount    bags {{      id  createdAt  }}  }}}}'.format(start_time, end_time)}
+    elif deletedat:
+      query = {"query":'query MyQuery {{  storageBuckets ( where: {{deletedAt_gt: "{}" , deletedAt_lt: "{}"}}){{    id    dataObjectsSize    dataObjectsSizeLimit    dataObjectsCount    bags {{      id  createdAt  }}  }}}}'.format(start_time, end_time)}
+  else:
+    query = {"query":"query MyQuery {  storageBuckets {    id    dataObjectsSize    dataObjectsSizeLimit    dataObjectsCount    bags {      id  createdAt  }  }}"}
+  data  = queryGrapql(query, url)['storageBuckets']
+  for record in data:
+    record['bags'] = len(record['bags'])
+    record['Utilization'] = int(record['dataObjectsSize'])/int(record['dataObjectsSizeLimit'])
+    record['dataObjectsSize, GB'] = int(record['dataObjectsSize']) / 1074790400
+  #keys = list(data[0].keys())
+  #file_name= 'backets_info_'+ time.strftime("%Y%m%d%H%M%S")+'.csv'
+  # with open(file_name, 'w') as csvfile:
+  #  writer = csv.DictWriter(csvfile, fieldnames = keys)
+  #  writer.writeheader()
+  #  writer.writerows(data)
+  #return file_name
+  return data
+
+def get_rewards(start_time, end_time):
+  query = '{{ rewardPaidEvents(limit: 33000, offset: 0, where: {{group: {{id_eq: "storageWorkingGroup"}}, createdAt_gt: "{}", createdAt_lt: "{}"}}) {{ paymentType amount workerId }} }}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['rewardPaidEvents']
+  total = 0
+  result = []
+  sorted_data = sorted(data, key = itemgetter('workerId'))
+  for key, values in groupby(sorted_data, key = itemgetter('workerId')):
+    worker_total = 0
+    for value in list(values):
+      worker_total += int(value["amount"])
+    result.append({'workerId':key, 'worker_total':worker_total})
+    total += worker_total
+  return total,result
+
+def get_new_opening(start_time, end_time):
+  query = '{{ openingAddedEvents(where: {{group: {{id_eq: "storageWorkingGroup"}}, createdAt_gt: "{}", createdAt_lt: "{}"}}) {{ opening {{ createdAt id openingcanceledeventopening {{ createdAt }} }} }} }}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['openingAddedEvents']
+  result = []
+  if len(data) == 0:
+    return 0,result
+  for record in data:
+    if len(record['opening']['openingcanceledeventopening']) == 0:
+      result.append({'id': record['opening']['id'], 'createdAt': record['opening']['createdAt']})
+  length = len(result)
+  return length,result
+
+def get_new_hire(start_time, end_time):
+  query = '{{ openingFilledEvents(where: {{group: {{id_eq: "storageWorkingGroup"}}, createdAt_gt: "{}", createdAt_lt: "{}"}}) {{ createdAt  workersHired {{ id membershipId}}}}}}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['openingFilledEvents']
+  result = []
+  if len(data) == 0:
+    return 0,result
+  for record in data:
+    record['workersHired'][0]['createdAt'] = record['createdAt']
+    result.append(record['workersHired'][0])
+  length = len(result)
+  return length, result
+
+def get_slashes(start_time, end_time):
+  query = '{{ stakeSlashedEvents(where: {{group: {{id_eq: "storageWorkingGroup", createdAt_gt: "{}", createdAt_lt: "{}"}}}}) {{ createdAt worker {{ membershipId }} slashedAmount workerId }}}}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['stakeSlashedEvents']
+  length = len(data)
+  if length > 0:
+   for record in data:
+     record["worker"] = record["worker"]["membershipId"]
+  return length,data
+
+def get_termination(start_time, end_time):
+  query = '{{ terminatedWorkerEvents(where: {{group: {{id_eq: "storageWorkingGroup"}}, createdAt_gt: "{}", createdAt_lt: "{}"}}) {{createdAt workerId worker {{membershipId}} }}}}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['terminatedWorkerEvents']
+  length = len(data)
+  if length > 0:
+   for record in data:
+     record["worker"] = record["worker"]["membershipId"]
+  return length,data
+
+def get_bags_nums(start_time = '', end_time = ''):
+  if start_time and end_time :
+    query_created = {"query": 'query MyQuery {{ storageBags( limit: 33000, offset: 0, where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{  id }} }}'.format(start_time, end_time) }
+    query_deleted = {"query": 'query MyQuery {{ storageBags( limit: 33000, offset: 0, where: {{deletedAt_gt: "{}" , deletedAt_lt: "{}"}}) {{  id }} }}'.format(start_time, end_time) }
+  else :
+    query_created = {"query": 'query MyQuery { storageBags(limit: 3000, offset:0) {  id } }'}
+    query_deleted = {"query": 'query MyQuery { storageBags(limit: 3000, offset:0) {  id } }'}
+  data_created  = queryGrapql(query_created)['storageBags']
+  data_deleted  = queryGrapql(query_deleted)['storageBags']
+  num_bags_created = len(data_created)
+  num_bags_deleted = len(data_deleted)
+  return {"bag created": num_bags_created, "bags deleted": num_bags_deleted}
+ 
+def get_bags(start_time='', end_time=''):
+  if start_time and end_time :
+    query = {"query": 'query MyQuery {{ storageBags( limit: 33000, offset: 0, where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{  id createdAt deletedAt }} }}'.format(start_time, end_time) }
+  else:
+    query = {"query": 'query MyQuery { storageBags( limit: 33000, offset: 0) {  id createdAt deletedAt }} ' }
+    data = queryGrapql(query)['storageBags']
+    return len(data), data
+
+def get_objects(start_time='',end_time=''):
+  if start_time and end_time :
+    query_created = {"query":'query MyQuery {{ storageDataObjects(limit: 33000, offset: 0,where: {{createdAt_gt: "{}" , createdAt_lt: "{}"}}) {{ createdAt size id storageBagId }} }}'.format(start_time, end_time) }
+  else :
+    query_created = {"query":'query MyQuery { storageDataObjects(limit: 33000, offset: 0) { createdAt deletedAt size id storageBagId } }' }
+  objects_created  = queryGrapql(query_created)['storageDataObjects']
+  for obj in objects_created: 
+    obj['storageBagId'] = obj['storageBagId'].split(":")[2]
+  return objects_created
+  
+def get_objects_files(file_server, operators, end_date, credential):
+  result= []
+  file = end_date+"-12:00-objects.txt" 
+  for operator in operators:
+    url = file_server+operator['id']+"/"+file 
+    response = requests.get(url, auth=(credential['username'], credential['password']))
+    if response.status_code == 200 and not response.text.startswith('<!DOCTYPE html>'):
+      result.append({'operator':operator['id'], 'file': file, 'response': response.content}) 
+  return result 
+
+def load_objects(lines):
+  objects_file = []	
+  for line in lines:
+    if line.startswith('d') or line.startswith('total') or not line.strip():
+      continue
+    line_split = line.split(",")
+    objects_file.append({'size': line_split[4], 'id': line_split[8].strip('\n')})
+  return objects_file
+    
+def load_objects_from_server(data):
+  objects_file = []	
+  for operator in data:
+    opertor_response = operator['response'].decode("utf-8") 
+    lines = opertor_response.split('\r\n')
+    objects_file.append({'operator': operator['operator'],'objects':load_objects(lines)})
+  return objects_file
+  
+def load_objects_from_file(file_name):
+  objects_file = []	
+  with open(file_name) as f:
+    lines = f.readlines()
+  objects_file = load_objects(lines)
+  return objects_file
+  
+def compare_objects(file_objects, objects):
+    lost = []
+    for obj in objects:
+      found = False
+      for file_obj in file_objects:
+        if obj['id'] == file_obj['id']:
+          found = True
+          break
+      if not found:
+        lost.append(obj)
+    return lost
+
+def get_lost(start_time, end_time):
+  query = '{{ storageDataObjects(limit: 3000, offset: 0, where: {{isAccepted_eq: false, createdAt_gt: "{}", createdAt_lt: "{}"}}) {{ createdAt size id storageBagId }}}}'.format(start_time, end_time)
+  query_dict = {"query": query}
+  data = queryGrapql(query_dict,url)['storageDataObjects']
+  for obj in data:
+    obj['storageBagId'] = obj['storageBagId'].split(":")[2]
+  length = len(data)
+  return length,data
+
+def objects_stats(start_time='',end_time=''):
+  data_created = get_objects(start_time,end_time)
+  num_objects_created = len(data_created)
+  total_size = 0
+  sizes = {'<10 MB': 0,'<100 MB': 0,'<1000 MB': 0,'<10000 MB': 0,'<100000 MB': 0,'<1000000 MB': 0}
+  sizes_range = {'0-10 MB': 0,'10-100 MB': 0,'100-1000 MB': 0,'1000-10000 MB': 0,'10000-100000 MB': 0,'100000-10000000 MB': 0}
+  total_size, sizes, sizes_range = get_objects_ranges(data_created, total_size, sizes, sizes_range)
+  bags_stats = bag_stats(data_created)
+  return num_objects_created, total_size,sizes,sizes_range,bags_stats
+ 
+def get_objects_ranges(data_created, total_size, sizes, sizes_range):
+  for record in data_created:
+    size  = int(record['size'])
+    total_size += size
+    size = size / 1048576
+    if size < 10:
+      sizes['<10 MB'] += 1
+      sizes['<100 MB'] += 1
+      sizes['<1000 MB'] += 1
+      sizes['<10000 MB'] += 1
+      sizes['<100000 MB'] += 1
+      sizes['<1000000 MB'] += 1
+    elif size < 100:
+      sizes['<100 MB'] += 1
+      sizes['<1000 MB'] += 1
+      sizes['<10000 MB'] += 1
+      sizes['<100000 MB'] += 1
+      sizes['<1000000 MB'] += 1
+    elif size < 1000:
+      sizes['<1000 MB'] += 1
+      sizes['<10000 MB'] += 1
+      sizes['<100000 MB'] += 1
+      sizes['<1000000 MB'] += 1
+    elif size < 10000:
+      sizes['<10000 MB'] += 1
+      sizes['<100000 MB'] += 1
+      sizes['<1000000 MB'] += 1
+    elif size < 100000:
+      sizes['<100000 MB'] += 1
+      sizes['<1000000 MB'] += 1
+    else:
+      sizes['<1000000 MB'] += 1
+   
+    if size < 10:
+      sizes_range['0-10 MB'] += 1
+    elif size < 100:
+      sizes_range['10-100 MB'] += 1
+    elif size < 1000:
+      sizes_range['100-1000 MB'] += 1
+    elif size < 10000:
+      sizes_range['1000-10000 MB'] += 1
+    elif size < 100000:
+      sizes_range['10000-100000 MB'] += 1
+    else:
+      sizes_range['100000-10000000 MB'] += 1
+  return  total_size, sizes, sizes_range
+
+def get_grouped_obj_dates(data, action):
+  result = {}
+  data =  sorted(data, key = itemgetter(action))
+  for key, records in groupby(data, key = itemgetter(action)):
+    records = list(records)
+    size = 0
+    num_objects = len(records)
+    for record in records:
+      size += int(record['size'])
+    result[key] = { 'size': size, 'num_objects': num_objects}
+  return result
+
+def get_draw_objects(file1name, file2name):
+  data = get_objects()
+  created_objects = []
+  deleted_objects = []
+  for record in data:
+    record['createdAt'] =  record['createdAt'].split('T')[0]
+    created_objects.append({'createdAt': record['createdAt'], 'size': record['size']})
+    if record['deletedAt']:
+      record['deletedAt'] =  record['deletedAt'].split('T')[0]
+      deleted_objects.append({'deletedAt': record['deletedAt'], 'size': record['size']})
+  num_created_objects = len(created_objects)
+  num_deleted_objects = len(deleted_objects)
+
+  if num_created_objects > 0:
+    created_objects = get_grouped_obj_dates(created_objects, 'createdAt')
+  if num_deleted_objects > 0:
+    deleted_objects = get_grouped_obj_dates(deleted_objects, 'deletedAt')
+    for key, value in deleted_objects.items():
+      created_objects[key]['size'] -= value['size']
+      created_objects[key]['num_objects'] -= value['num_objects']
+  dates = list(created_objects.keys())
+  sizes = [round(int(k['size'])/1074790400, 2) for k in created_objects.values()]
+  for index, size in enumerate(sizes):
+    if index == 0:
+      continue
+    sizes[index] += sizes[index-1]
+  num_objects = [k['num_objects'] for k in created_objects.values()]
+  for index, num_object in enumerate(num_objects):
+    if index == 0:
+      continue
+    num_objects[index] += num_objects[index-1]  
+  
+
+  plot(dates[1:], sizes[1:], 'Size (Sum, GB)', 'Dates', 'Size', 0, 750 , 10, 25,file1name)
+  plot(dates[1:], num_objects[1:], 'Number of Objects', 'Dates', 'Number of Objects', 0, 12000, 10, 500,file2name)
+
+def plot(x, y, title, x_label, y_label, x_start, y_start, x_spacing, y_spacing,filename):
+  fig, ax = plt.subplots()
+  fig.set_size_inches(15, 10)
+  plt.plot(x, y)
+  ax.set_xticks(np.arange(x_start, len(x)+1, x_spacing))
+  ax.set_yticks(np.arange(y_start, max(y), y_spacing))
+  ax.set_title(title)
+  ax.set(xlabel=x_label, ylabel=y_label)
+  plt.xticks(rotation=45)
+  plt.yticks(rotation=45)
+  fig.set_dpi(100)
+  fig.savefig(filename)
+  plt.close()
+
+def get_created_deleted_bags(data):
+  created_bags = []
+  deleted_bags = []
+  for record in data:
+    record['createdAt'] =  record['createdAt'].split('T')[0]
+    created_bags.append({'createdAt': record['createdAt'], 'id': record['id']})
+    if record['deletedAt']:
+      record['deletedAt'] =  record['deletedAt'].split('T')[0]
+      deleted_bags.append({'deletedAt': record['deletedAt'], 'id': record['id']})
+  return created_bags,deleted_bags
+
+def get_draw_bags(filename):
+  num, data = get_bags()
+  created_bags ,deleted_bags = get_created_deleted_bags(data)
+  num_created_bags = len(created_bags)
+  num_deleted_bags = len(deleted_bags)
+  bags = {}
+  if num_created_bags > 0:
+    created_bags = sort_bags(created_bags, 'createdAt')
+    for key, record in created_bags.items():
+        bags[key] = len(record)
+  if num_deleted_bags > 0:
+    deleted_bags = sort_bags(deleted_bags, 'deletedAt')
+    for key, record in deleted_bags.items():
+      bags[key] -= len(record)
+  dates = list(bags.keys())
+  num_bags = [k for k in bags.values()]
+  for index, num_bag in enumerate(num_bags):
+    if index == 0:
+      continue
+    num_bags[index] += num_bags[index-1]
+  plot(dates[1:], num_bags[1:], 'Number of Bags {}'.format(num_created_bags - num_deleted_bags), 'Dates', 'Number of Bags', 0, 250 , 3, 50,filename)
+
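+# Group records into a dict keyed by the given field. Despite the name this
+# groups rather than sorts; sorting is only done so that groupby() works.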
+def sort_bags(data, key):
+  bags = {}
+  sorted_data = sorted(data, key = itemgetter(key))
+  # the loop variable is named group_key so it does not shadow the `key` parameter
+  for group_key, value in groupby(sorted_data, key = itemgetter(key)):
+    #group_key = group_key.split(":")[2]
+    bags[group_key] = list(value)
+  return bags
+ 
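+# Per-bag statistics: object count, total size and average object size in bytes.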
+def bag_stats(data_created): 
+  bags = sort_bags(data_created, 'storageBagId')
+  #print(bags)
+  result= []
+  for key, value in bags.items():
+    bag = {}
+    bag['id'] = key
+    total_size = 0
+    bag['objects_num'] = len(value)
+    for obj in value:
+      total_size += int(obj['size'])
+    bag['total_size bytes'] = total_size
+    bag['average_size bytes'] = int(total_size / bag['objects_num'])
+    result.append(bag)
+  return result
+
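+# Render a list of dicts as a table, print it and return the rendered string.
+# master_key moves that column to the front; sort_key sorts rows descending.
+# Falls back to tabulate's 'grid' format if printing the 'github' output
+# raises UnicodeEncodeError.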
+def print_table(data, master_key = '', sort_key = ''):
+    if sort_key:
+        data = sorted(data, key = itemgetter(sort_key), reverse=True)
+    headers = [*data[0]]
+    if master_key:
+        # move the master_key column to the front of the header list
+        headers.remove(master_key)
+        headers = [master_key] + headers
+    table = []
+    for line in data:
+        row = []
+        if master_key:
+            value = line.pop(master_key)
+            row.append(value)
+        for key in [*line]:
+            row.append(line[key])
+        table.append(row)
+    try:
+        result = tabulate(table, headers, tablefmt="github")
+        print(result)
+        return result
+    except UnicodeEncodeError:
+        result = tabulate(table, headers, tablefmt="grid")
+        print(result)
+        return result
+
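+# Entry point: derive the reporting window from the most recent council term,
+# then assemble a markdown report section by section (openings, hiring,
+# terminations, slashes, rewards, buckets, bags, objects, lost objects).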
+if __name__ == '__main__':
+  last_council,previous_council,first_council, period = get_councils_period(url)
+  report = ''
+  first_time = first_council['electedAtTime']
+  start_time = last_council['electedAtTime']
+  end_time   = last_council['endedAtTime']
+  start_date = start_time.split('T')[0]
+  end_date = end_time.split('T')[0]
+  previous_start_time = previous_council['electedAtTime']
+  previous_end_time   = previous_council['endedAtTime']
+  file_name = 'report-'+end_time 
+  print(start_time)
+  print(end_time)
+  print('Full report for the Term: {} \n\n'.format(period-1))
+  print('Start date: {} \n'.format(start_date))
+  print('End date: {} \n'.format(end_date))
+  report += 'Full report for the Term: {} \n\n'.format(period-1)
+  report += 'Start date: {}  \n\n'.format(start_date)
+  report += 'End date: {} \n\n'.format(end_date)
+  print('Start Time: {}\n'.format(start_time))
+  print('End Time: {}\n'.format(end_time))
+  print('Start Block: {}\n'.format(last_council['electedAtBlock']))
+  print('End Block: {}\n'.format(last_council['endedAtBlock']))
+  report += 'Start Block: {} \n\n'.format(last_council['electedAtBlock'])
+  report += 'End Block: {} \n\n'.format(last_council['endedAtBlock'])
+
+  print('# Opening')
+  num_openings, openings = get_new_opening(start_time, end_time)
+  print('Number of openings: {}'.format(num_openings))
+  report += '# Opening \n'
+  report += 'Number of openings: {} \n'.format(num_openings)
+  if num_openings > 0:
+    tble = print_table(openings)
+    report += tble+'\n'
+
+  print('# Hiring')
+  num_workers, hired_workers = get_new_hire(start_time, end_time)
+  print('Number of hired workers: {}'.format(num_workers))
+  report += '# Hiring\n'
+  report += 'Number of hired workers: {}\n'.format(num_workers)
+  if num_workers > 0:
+    tble = print_table(hired_workers)
+    report += tble+'\n'
+
+  print('# Terminated workers')
+  num_workers, terminated_workers = get_termination(start_time, end_time)
+  print('Number of terminated workers: {}'.format(num_workers))
+  report += '# Terminated workers \n'
+  report += 'Number of terminated workers: {} \n'.format(num_workers)
+  if num_workers > 0:
+    tble = print_table(terminated_workers)
+    report += tble+'\n'
+
+  print('# Slashed workers')
+  num_workers, slashed_workers = get_slashes(start_time, end_time)
+  print('Number of slashed workers: {}'.format(num_workers))
+  report += '# Slashed workers \n'
+  report += 'Number of slashed workers: {} \n'.format(num_workers)
+  if num_workers > 0:
+    tble = print_table(slashed_workers)
+    report += tble+'\n'
+
+  print('# Rewards')
+  report += '# Rewards\n'
+  total_rewards,rewards =  get_rewards(start_time, end_time)
+  print('Total Rewards: {}'.format(total_rewards))
+  report += 'Total Rewards: {}\n'.format(total_rewards)
+  tble = print_table(rewards)
+  report += tble+'\n'
+  
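+  # Snapshot all buckets and dump them to a JSON file so the raw data can be
+  # inspected or re-used outside the rendered report.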
+  print('# BUCKETS Info  ')
+  report += '# BUCKETS Info  \n'
+  buckets = get_backets(url)
+  buckets_file = 'buckets_'+end_time
+  with open(buckets_file, 'w') as file:
+    json.dump(buckets, file)
+
+  tble = print_table(buckets)
+  report += tble+'\n'
+
+  print('## BUCKETS CREATED')
+  report += '## BUCKETS CREATED\n'
+  buckets_created = get_backets(url,start_time,end_time,createdat = True)
+  number_buckets_created = len(buckets_created)
+  print('Bucket Created: {}'.format(number_buckets_created))
+  report += 'Bucket Created: {}\n'.format(number_buckets_created)
+  if number_buckets_created > 0:
+    tble = print_table(buckets_created)
+    report += tble+'\n'
+
+  print('## BUCKETS DELETED')
+  report += '## BUCKETS DELETED\n'
+  buckets_deleted = get_backets(url,start_time,end_time,deletedat = True)
+  number_buckets_deleted = len(buckets_deleted)
+  print('Bucket Deleted: {}\n'.format(number_buckets_deleted))
+  report += 'Bucket Deleted: {}\n'.format(number_buckets_deleted)
+  if number_buckets_deleted > 0:
+    tble = print_table(buckets_deleted)
+    report += tble+'\n'
+
+  print('## Bags')
+  report += '## Bags\n'
+  bags = get_bags_nums(start_time, end_time)
+  print('Bags Created: {} \n'.format(bags['bag created']))
+  print('Bags Deleted: {} \n'.format(bags['bags deleted']))
+  report += 'Bags Created: {} \n\n'.format(bags['bag created'])
+  report += 'Bags Deleted: {} \n\n'.format(bags['bags deleted'])
+ 
+  print('# Objects Info during this Council Period')
+  report += '# Objects Info during this Council Period \n'
+  #print(get_objects(start_time,end_time))
+  objects_num, total_size,sizes,sizes_range,bags_stats = objects_stats(start_time,end_time)
+  print('Total Objects: {}\n'.format(objects_num))
+  report += 'Total Objects: {} \n\n'.format(objects_num)
+  print('Total Objects Size: {}\n'.format(total_size))
+  report += 'Total Objects Size: {} bytes \n\n'.format(total_size)
+  print('## Objects Size Distribution')
+  report += '## Objects Size Distribution\n'
+  tble = print_table([sizes])
+  report += tble+'\n\n'
+  print('\n')
+  tble = print_table([sizes_range])
+  report += tble+'\n'
+
+  print('## Objects Size Distribution Per Bag')
+  tble = print_table(bags_stats)
+  report += '## Objects Size Distribution Per Bag \n'
+  report += tble+'\n'
+
+  print('# Total Objects Info')
+  report += '# Total Objects Info \n'
+  #print(get_objects(start_time,end_time))
+  objects_num, total_size,sizes,sizes_range,bags_stats = objects_stats()
+  print('Total Objects: {}\n'.format(objects_num))
+  report += 'Total Objects: {} \n\n'.format(objects_num)
+  print('Total Objects Size: {}\n'.format(total_size))
+  report += 'Total Objects Size: {} bytes\n\n'.format(total_size)
+  total_num_bags = len(bags_stats)
+  print('Total Number of Bags in use: {}\n'.format(total_num_bags))
+  report += 'Total Number of Bags in use: {}\n\n'.format(total_num_bags)
+  num, data = get_bags()
+  created_bags ,deleted_bags = get_created_deleted_bags(data)
+  num_created_bags = len(created_bags)
+  num_deleted_bags = len(deleted_bags)
+  total_num_bags = num_created_bags - num_deleted_bags
+  print('Grand Total Number of Bags: {}\n'.format(total_num_bags))
+  report += 'Grand Total Number of Bags: {}\n\n'.format(total_num_bags)
+
+  print('## Objects Size Distribution')
+  report += '## Objects Size Distribution \n'
+  tble = print_table([sizes])
+  report += tble+'\n\n'
+  print('\n')
+
+  tble = print_table([sizes_range])
+  report += tble+'\n'
+  print('## Objects Size Distribution Per Bag')
+  report += '## Objects Size Distribution Per Bag \n'
+  tble = print_table(bags_stats, sort_key = 'total_size bytes')
+  report += tble+'\n\n\n'
+
+  image1_file = 'objects_size_{}'.format(end_date)
+  image2_file = 'objects_number_{}'.format(end_date)
+  get_draw_objects(image1_file, image2_file)
+  report += '![objects sizes](./{}.png) \n'.format(image1_file)
+  report += '![objects number](./{}.png)  \n'.format(image2_file)
+  
+  image3_file = 'bags_number_{}'.format(end_date)
+  get_draw_bags(image3_file)
+  report += '![number of bags](./{}.png) \n'.format(image3_file)
+
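+  # The server-compare check below is kept for reference but disabled; only
+  # master_objects is still needed, for the GraphQL lost-object check.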
+  #print('# Lost Objects - Server compare')
+  #report += '# Lost Objects - Server compare \n'
+  master_objects = get_objects(start_time,end_time)
+  #data = get_objects_files(file_server, operators, end_date, credential)
+  #operators = load_objects_from_server(data)
+  #operators_objects = []
+  #for operator in operators:
+  #  operators_objects = operators_objects + operator['objects']
+  #lost = compare_objects(operators_objects, master_objects)
+  total_objects = len(master_objects)
+  #lost_object = len(lost)
+  #print('Total Objects: {}\n'.format(total_objects))
+  #print('Total Lost Objects: {}\n'.format(lost_object))
+  #print('Percentage Lost Objects: %{}\n'.format(100*lost_object/total_objects))
+  #if lost_object > 0:
+  #  tble = print_table(lost, master_key = 'id')
+  #report += 'Total Objects: {} \n\n'.format(total_objects)
+  #report += 'Total Lost Objects: {} \n\n'.format(lost_object)
+  #report += 'Percentage Lost Objects: %{} \n\n'.format(100*lost_object/total_objects)
+  # report += tble+' \n'
+  print('# Lost Objects - GraphQL')
+  report += '# Lost Objects - GraphQL \n'
+  number_lost, lost = get_lost(start_time,end_time)
+  print('Total Objects: {}\n'.format(total_objects))
+  print('Total Lost Objects: {}\n'.format(number_lost))
+  print('Percentage Lost Objects: {}%\n'.format(100*number_lost/total_objects))
+  report += 'Total Objects: {} \n\n'.format(total_objects)
+  report += 'Total Lost Objects: {} \n\n'.format(number_lost)
+  report += 'Percentage Lost Objects: {}% \n\n'.format(100*number_lost/total_objects)
+  if number_lost > 0:
+    # append the table only when there are lost objects to show
+    tble = print_table(lost, master_key = 'id')
+    report += tble+' \n'
+  file_name = 'report_'+end_time+'.md'
+  with open(file_name, 'w') as file:
+    file.write(report)
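+
+# Example usage (a sketch, not part of the script): run with Python 3 after
+# installing the libraries used above, e.g.
+#   pip install numpy matplotlib tabulate
+#   python <this_script>.py
+# The query-node endpoint `url` and helpers such as get_objects()/get_bags()
+# are defined earlier in this file. Output: report_<end_time>.md plus the PNG
+# charts it references.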