
Merge pull request #746 from mnaamani/update-storage-node

Update storage node
shamil-gadelshin 4 years ago
parent
commit
893543b391
59 changed files with 1979 additions and 1666 deletions
  1. +2 -1  package.json
  2. +2 -0  storage-node/.gitignore
  3. +38 -7  storage-node/README.md
  4. +0 -18  storage-node/license_header.txt
  5. +1 -4  storage-node/package.json
  6. +36 -1  storage-node/packages/cli/README.md
  7. +136 -120  storage-node/packages/cli/bin/cli.js
  8. +128 -0  storage-node/packages/cli/bin/dev.js
  9. +3 -2  storage-node/packages/cli/package.json
  10. +36 -42  storage-node/packages/colossus/README.md
  11. +166 -260  storage-node/packages/colossus/bin/cli.js
  12. +2 -4  storage-node/packages/colossus/lib/app.js
  13. +1 -2  storage-node/packages/colossus/lib/discovery.js
  14. +23 -17  storage-node/packages/colossus/lib/sync.js
  15. +6 -6  storage-node/packages/colossus/package.json
  16. +13 -15  storage-node/packages/colossus/paths/asset/v0/{id}.js
  17. +12 -7  storage-node/packages/colossus/paths/discover/v0/{id}.js
  18. +0 -68  storage-node/packages/discovery/IpfsResolver.js
  19. +0 -28  storage-node/packages/discovery/JdsResolver.js
  20. +11 -21  storage-node/packages/discovery/README.md
  21. +0 -48  storage-node/packages/discovery/Resolver.js
  22. +241 -148  storage-node/packages/discovery/discover.js
  23. +13 -7  storage-node/packages/discovery/example.js
  24. +4 -3  storage-node/packages/discovery/package.json
  25. +71 -37  storage-node/packages/discovery/publish.js
  26. +1 -2  storage-node/packages/helios/README.md
  27. +105 -88  storage-node/packages/helios/bin/cli.js
  28. +2 -1  storage-node/packages/helios/package.json
  29. +79 -89  storage-node/packages/runtime-api/assets.js
  30. +1 -1  storage-node/packages/runtime-api/balances.js
  31. +42 -30  storage-node/packages/runtime-api/discovery.js
  32. +117 -116  storage-node/packages/runtime-api/identities.js
  33. +128 -118  storage-node/packages/runtime-api/index.js
  34. +3 -2  storage-node/packages/runtime-api/package.json
  35. +0 -186  storage-node/packages/runtime-api/roles.js
  36. +1 -2  storage-node/packages/runtime-api/test/assets.js
  37. +1 -4  storage-node/packages/runtime-api/test/balances.js
  38. +1 -8  storage-node/packages/runtime-api/test/identities.js
  39. +1 -1  storage-node/packages/runtime-api/test/index.js
  40. +0 -67  storage-node/packages/runtime-api/test/roles.js
  41. +298 -0  storage-node/packages/runtime-api/workers.js
  42. +0 -3  storage-node/packages/storage/README.md
  43. +2 -1  storage-node/packages/storage/package.json
  44. +3 -0  storage-node/packages/storage/storage.js
  45. +48 -55  storage-node/packages/storage/test/storage.js
  46. +0 -0  storage-node/packages/storage/test/template/bar
  47. +0 -0  storage-node/packages/storage/test/template/foo/baz
  48. +0 -1  storage-node/packages/storage/test/template/quux
  49. +19 -0  storage-node/packages/util/externalPromise.js
  50. +2 -1  storage-node/packages/util/package.json
  51. +1 -1  storage-node/packages/util/test/fs/resolve.js
  52. +1 -1  storage-node/packages/util/test/fs/walk.js
  53. +1 -1  storage-node/packages/util/test/lru.js
  54. +1 -1  storage-node/packages/util/test/pagination.js
  55. +1 -1  storage-node/packages/util/test/ranges.js
  56. +10 -6  storage-node/scripts/compose/devchain-and-ipfs-node/docker-compose.yaml
  57. +39 -0  storage-node/scripts/run-dev-instance.sh
  58. +7 -0  storage-node/scripts/stop-dev-instance.sh
  59. +119 -13  yarn.lock

+ 2 - 1
package.json

@@ -1,6 +1,7 @@
 {
 	"private": true,
 	"name": "joystream",
+	"version": "1.0.0",
 	"license": "GPL-3.0-only",
 	"scripts": {
 		"test": "yarn && yarn workspaces run test",
@@ -15,7 +16,7 @@
 		"types",
 		"pioneer",
 		"pioneer/packages/*",
-		"storage-node/",
+		"storage-node",
 		"storage-node/packages/*"
 	],
 	"resolutions": {

+ 2 - 0
storage-node/.gitignore

@@ -25,3 +25,5 @@ node_modules/
 
 # Ignore nvm config file
 .nvmrc
+
+yarn.lock

+ 38 - 7
storage-node/README.md

@@ -4,11 +4,11 @@ This repository contains several Node packages, located under the `packages/`
 subdirectory. See each individual package for details:
 
 * [colossus](./packages/colossus/README.md) - the main colossus app.
-* [storage](./packages/storage/README.md) - abstraction over the storage backend.
-* [runtime-api](./packages/runtime-api/README.md) - convenience wrappers for the runtime API.
-* [crypto](./packages/crypto/README.md) - cryptographic utility functions.
-* [util](./packages/util/README.md) - general utility functions.
+* [storage-node-backend](./packages/storage/README.md) - abstraction over the storage backend.
+* [storage-runtime-api](./packages/runtime-api/README.md) - convenience wrappers for the runtime API.
+* [storage-utils](./packages/util/README.md) - general utility functions.
 * [discovery](./packages/discovery/README.md) - service discovery using IPNS.
+* [storage-cli](./packages/cli/README.md) - CLI for uploading and downloading content from the network
 
 Installation
 ------------
@@ -40,17 +40,48 @@ $ yarn install
 The command will install dependencies, and make a `colossus` executable available:
 
 ```bash
-$ yarn run colossus --help
+$ yarn colossus --help
 ```
 
 *Testing*
 
-Running tests from the repository root will run tests from all packages:
+Run an ipfs node and a joystream-node development chain (in separate terminals):
 
+```sh
+ipfs daemon
 ```
-$ yarn run test
+
+```sh
+joystream-node --dev
+```
+
+```sh
+$ yarn workspace storage-node test
+```
+
+To run a development environment, after starting the ipfs node and development chain:
+
+```sh
+yarn storage-cli dev-init
+```
+
+This will configure the running chain with Alice as the storage lead and with a known role key for
+the storage provider.
+
+Run colossus in development mode:
+
+```sh
+yarn colossus --dev
+```
+
+Start the pioneer UI:
+```sh
+yarn workspace pioneer start
 ```
 
+Browse pioneer at http://localhost:3000/
+You should find that the Alice account is the storage working group lead and a storage provider.
+Create a media channel and upload a file.
 
 ## Detailed Setup and Configuration Guide
For details on how to set up a storage node on the Joystream network, follow this [step by step guide](https://github.com/Joystream/helpdesk/tree/master/roles/storage-providers).
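The development steps above can be run end to end as the following sequence (a sketch; it assumes `ipfs`, `joystream-node`, and the repo's yarn workspaces are already installed locally):

```sh
# Terminal 1: local ipfs node
ipfs daemon

# Terminal 2: joystream-node development chain
joystream-node --dev

# Terminal 3: one-time chain setup (Alice as storage lead and provider),
# then the storage node and the pioneer UI
yarn storage-cli dev-init
yarn colossus --dev
yarn workspace pioneer start   # then browse http://localhost:3000/
```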

+ 0 - 18
storage-node/license_header.txt

@@ -1,18 +0,0 @@
-/*
- * This file is part of the storage node for the Joystream project.
- * Copyright (C) 2019 Joystream Contributors
- *
- * This program is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <https://www.gnu.org/licenses/>.
- */
-

+ 1 - 4
storage-node/package.json

@@ -1,6 +1,6 @@
 {
   "private": true,
-  "name": "@joystream/storage-node",
+  "name": "storage-node",
   "version": "1.0.0",
   "engines": {
     "node": ">=10.15.3",
@@ -30,9 +30,6 @@
     "darwin",
     "linux"
   ],
-  "workspaces": [
-    "packages/*"
-  ],
   "scripts": {
     "test": "wsrun --serial test",
     "lint": "wsrun --serial lint"

+ 36 - 1
storage-node/packages/cli/README.md

@@ -1,5 +1,40 @@
 # A CLI for the Joystream Runtime & Colossus
 
-- CLI access for some functionality from `@joystream/runtime-api`
+- CLI access for some functionality from other packages in the storage-node workspace
 - Colossus/storage node functionality:
   - File uploads
+  - File downloads
+- Development
+  - Setup development environment
+
+Running the storage cli tool:
+
+```sh
+$ yarn storage-cli --help
+```
+
+```sh
+
+  Joystream tool for uploading and downloading files to the network
+
+  Usage:
+    $ storage-cli command [arguments..] [key_file] [passphrase]
+
+  Some commands require a key file as the last option holding the identity for
+  interacting with the runtime API.
+
+  Commands:
+    upload            Upload a file to a Colossus storage node. Requires a
+                      storage node URL, and a local file name to upload. As
+                      an optional third parameter, you can provide a Data
+                      Object Type ID - this defaults to "1" if not provided.
+    download          Retrieve a file. Requires a storage node URL and a content
+                      ID, as well as an output filename.
+    head              Send a HEAD request for a file, and print headers.
+                      Requires a storage node URL and a content ID.
+
+  Dev Commands:       Commands to run on a development chain.
+    dev-init          Setup chain with Alice as lead and storage provider.
+    dev-check         Check the chain is setup with Alice as lead and storage provider.
+
+```
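For example, the commands above can be invoked like this (a sketch; the node URL, file names, content ID, key file, and passphrase are placeholder values):

```sh
# Upload a local file (Data Object Type ID defaults to 1 if omitted)
yarn storage-cli upload http://localhost:3001/ ./video.mp4 1 ./keyfile.json my-passphrase

# Download content by content ID to a local file
yarn storage-cli download http://localhost:3001/ <content-id> ./downloaded.mp4

# Print response headers for a piece of content
yarn storage-cli head http://localhost:3001/ <content-id>
```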

+ 136 - 120
storage-node/packages/cli/bin/cli.js

@@ -17,37 +17,28 @@
  * along with this program.  If not, see <https://www.gnu.org/licenses/>.
  */
 
-'use strict';
+'use strict'
 
-const path = require('path');
-const fs = require('fs');
-const assert = require('assert');
-
-const { RuntimeApi } = require('@joystream/runtime-api');
-
-const meow = require('meow');
-const chalk = require('chalk');
-const _ = require('lodash');
-
-const debug = require('debug')('joystream:cli');
-
-// Project root
-const project_root = path.resolve(__dirname, '..');
-
-// Configuration (default)
-const pkg = require(path.resolve(project_root, 'package.json'));
+const fs = require('fs')
+const assert = require('assert')
+const { RuntimeApi } = require('@joystream/storage-runtime-api')
+const meow = require('meow')
+const chalk = require('chalk')
+const _ = require('lodash')
+const debug = require('debug')('joystream:storage-cli')
+const dev = require('./dev')
 
 // Parse CLI
 const FLAG_DEFINITIONS = {
   // TODO
-};
+}
 
 const cli = meow(`
   Usage:
-    $ joystream key_file command [options]
+    $ storage-cli command [arguments..] [key_file] [passphrase]
 
-  All commands require a key file holding the identity for interacting with the
-  runtime API.
+  Some commands require a key file as the last option holding the identity for
+  interacting with the runtime API.
 
   Commands:
     upload            Upload a file to a Colossus storage node. Requires a
@@ -58,173 +49,198 @@ const cli = meow(`
                       ID, as well as an output filename.
     head              Send a HEAD request for a file, and print headers.
                       Requires a storage node URL and a content ID.
+
+  Dev Commands:       Commands to run on a development chain.
+    dev-init          Setup chain with Alice as lead and storage provider.
+    dev-check         Check the chain is setup with Alice as lead and storage provider.
   `,
-  { flags: FLAG_DEFINITIONS });
+  { flags: FLAG_DEFINITIONS })
 
-function assert_file(name, filename)
-{
-  assert(filename, `Need a ${name} parameter to proceed!`);
-  assert(fs.statSync(filename).isFile(), `Path "${filename}" is not a file, aborting!`);
+function assert_file (name, filename) {
+  assert(filename, `Need a ${name} parameter to proceed!`)
+  assert(fs.statSync(filename).isFile(), `Path "${filename}" is not a file, aborting!`)
+}
+
+function load_identity (api, filename, passphrase) {
+  if (filename) {
+    assert_file('keyfile', filename)
+    api.identities.loadUnlock(filename, passphrase)
+  } else {
+    debug('Loading Alice as identity')
+    api.identities.useKeyPair(dev.aliceKeyPair(api))
+  }
 }
 
 const commands = {
-  'upload': async (runtime_api, url, filename, do_type_id) => {
+  // add Alice well known account as storage provider
+  'dev-init': async (api) => {
+    // dev accounts are automatically loaded, no need to add explicitly to keyring
+    // load_identity(api)
+    let dev = require('./dev')
+    return dev.init(api)
+  },
+  // Checks that the setup done by dev-init command was successful.
+  'dev-check': async (api) => {
+    // dev accounts are automatically loaded, no need to add explicitly to keyring
+    // load_identity(api)
+    let dev = require('./dev')
+    return dev.check(api)
+  },
+  // The upload method is not correctly implemented
+  // needs to get the liaison after creating a data object,
+  // resolve the ipns id to the asset put api url of the storage-node
+  // before uploading..
+  'upload': async (api, url, filename, do_type_id, keyfile, passphrase) => {
+    load_identity(api, keyfile, passphrase)
     // Check parameters
-    assert_file('file', filename);
+    assert_file('file', filename)
 
-    const size = fs.statSync(filename).size;
-    console.log(`File "${filename}" is ` + chalk.green(size) + ' Bytes.');
+    const size = fs.statSync(filename).size
+    debug(`File "${filename}" is ${chalk.green(size)} Bytes.`)
 
     if (!do_type_id) {
-      do_type_id = 1;
+      do_type_id = 1
     }
-    console.log('Data Object Type ID is: ' + chalk.green(do_type_id));
+
+    debug('Data Object Type ID is: ' + chalk.green(do_type_id))
 
     // Generate content ID
     // FIXME this require path is like this because of
     // https://github.com/Joystream/apps/issues/207
-    const { ContentId } = require('@joystream/types/lib/media');
-    var cid = ContentId.generate();
-    cid = cid.encode().toString();
-    console.log('Generated content ID: ' + chalk.green(cid));
+    const { ContentId } = require('@joystream/types/media')
+    var cid = ContentId.generate()
+    cid = cid.encode().toString()
+    debug('Generated content ID: ' + chalk.green(cid))
 
     // Create Data Object
-    const data_object = await runtime_api.assets.createDataObject(
-      runtime_api.identities.key.address, cid, do_type_id, size);
-    console.log('Data object created.');
+    const data_object = await api.assets.createDataObject(
+      api.identities.key.address, cid, do_type_id, size)
+    debug('Data object created.')
 
     // TODO in future, optionally contact liaison here?
-    const request = require('request');
-    url = `${url}asset/v0/${cid}`;
-    console.log('Uploading to URL', chalk.green(url));
+    const request = require('request')
+    url = `${url}asset/v0/${cid}`
+    debug('Uploading to URL', chalk.green(url))
 
-    const f = fs.createReadStream(filename);
+    const f = fs.createReadStream(filename)
     const opts = {
       url: url,
       headers: {
         'content-type': '',
-        'content-length': `${size}`,
+        'content-length': `${size}`
       },
-      json: true,
-    };
+      json: true
+    }
     return new Promise((resolve, reject) => {
       const r = request.put(opts, (error, response, body) => {
         if (error) {
-          reject(error);
-          return;
+          reject(error)
+          return
         }
 
-        if (response.statusCode / 100 != 2) {
-          reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`));
-          return;
+        if (response.statusCode / 100 !== 2) {
+          reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`))
+          return
         }
-        console.log('Upload successful:', body.message);
-        resolve();
-      });
-      f.pipe(r);
-    });
+        debug('Upload successful:', body.message)
+        resolve()
+      })
+      f.pipe(r)
+    })
   },
-
-  'download': async (runtime_api, url, content_id, filename) => {
-    const request = require('request');
-    url = `${url}asset/v0/${content_id}`;
-    console.log('Downloading URL', chalk.green(url), 'to', chalk.green(filename));
-
-    const f = fs.createWriteStream(filename);
+  // needs to be updated to take a content id and resolve it a potential set
+  // of providers that has it, and select one (possibly try more than one provider)
+  // to fetch it from the get api url of a provider..
+  'download': async (api, url, content_id, filename) => {
+    const request = require('request')
+    url = `${url}asset/v0/${content_id}`
+    debug('Downloading URL', chalk.green(url), 'to', chalk.green(filename))
+
+    const f = fs.createWriteStream(filename)
     const opts = {
       url: url,
-      json: true,
-    };
+      json: true
+    }
     return new Promise((resolve, reject) => {
       const r = request.get(opts, (error, response, body) => {
         if (error) {
-          reject(error);
-          return;
+          reject(error)
+          return
         }
 
-        console.log('Downloading', chalk.green(response.headers['content-type']), 'of size', chalk.green(response.headers['content-length']), '...');
+        debug('Downloading', chalk.green(response.headers['content-type']), 'of size', chalk.green(response.headers['content-length']), '...')
 
         f.on('error', (err) => {
-          reject(err);
-        });
+          reject(err)
+        })
 
         f.on('finish', () => {
-          if (response.statusCode / 100 != 2) {
-            reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`));
-            return;
+          if (response.statusCode / 100 !== 2) {
+            reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`))
+            return
           }
-          console.log('Download completed.');
-          resolve();
-        });
-      });
-      r.pipe(f);
-    });
+          debug('Download completed.')
+          resolve()
+        })
+      })
+      r.pipe(f)
+    })
   },
-
-  'head': async (runtime_api, url, content_id) => {
-    const request = require('request');
-    url = `${url}asset/v0/${content_id}`;
-    console.log('Checking URL', chalk.green(url), '...');
+  // similar to 'download' function
+  'head': async (api, url, content_id) => {
+    const request = require('request')
+    url = `${url}asset/v0/${content_id}`
+    debug('Checking URL', chalk.green(url), '...')
 
     const opts = {
       url: url,
-      json: true,
-    };
+      json: true
+    }
     return new Promise((resolve, reject) => {
       const r = request.head(opts, (error, response, body) => {
         if (error) {
-          reject(error);
-          return;
+          reject(error)
+          return
         }
 
-        if (response.statusCode / 100 != 2) {
-          reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`));
-          return;
+        if (response.statusCode / 100 !== 2) {
+          reject(new Error(`${response.statusCode}: ${body.message || 'unknown reason'}`))
+          return
         }
 
         for (var propname in response.headers) {
-          console.log(`  ${chalk.yellow(propname)}: ${response.headers[propname]}`);
+          debug(`  ${chalk.yellow(propname)}: ${response.headers[propname]}`)
         }
 
-        resolve();
-      });
-    });
-  },
-
-};
-
-
-async function main()
-{
-  // Key file is at the first instance.
-  const key_file = cli.input[0];
-  assert_file('key file', key_file);
+        resolve()
+      })
+    })
+  }
+}
 
-  // Create runtime API.
-  const runtime_api = await RuntimeApi.create({ account_file: key_file });
+async function main () {
+  const api = await RuntimeApi.create()
 
   // Simple CLI commands
-  const command = cli.input[1];
+  const command = cli.input[0]
   if (!command) {
-    throw new Error('Need a command to run!');
+    throw new Error('Need a command to run!')
   }
 
   if (commands.hasOwnProperty(command)) {
     // Command recognized
-    const args = _.clone(cli.input).slice(2);
-    await commands[command](runtime_api, ...args);
-  }
-  else {
-    throw new Error(`Command "${command}" not recognized, aborting!`);
+    const args = _.clone(cli.input).slice(1)
+    await commands[command](api, ...args)
+  } else {
+    throw new Error(`Command "${command}" not recognized, aborting!`)
   }
 }
 
 main()
   .then(() => {
-    console.log('Process exiting gracefully.');
-    process.exit(0);
+    process.exit(0)
   })
   .catch((err) => {
-    console.error(chalk.red(err.stack));
-    process.exit(-1);
-  });
+    console.error(chalk.red(err.stack))
+    process.exit(-1)
+  })

+ 128 - 0
storage-node/packages/cli/bin/dev.js

@@ -0,0 +1,128 @@
+/* eslint-disable no-console */
+
+'use strict'
+
+const debug = require('debug')('joystream:storage-cli:dev')
+const assert = require('assert')
+
+// Derivation path appended to well known development seed used on
+// development chains
+const ALICE_URI = '//Alice'
+const ROLE_ACCOUNT_URI = '//Colossus'
+
+function aliceKeyPair (api) {
+  return api.identities.keyring.addFromUri(ALICE_URI, null, 'sr25519')
+}
+
+function roleKeyPair (api) {
+  return api.identities.keyring.addFromUri(ROLE_ACCOUNT_URI, null, 'sr25519')
+}
+
+function developmentPort () {
+  return 3001
+}
+
+const check = async (api) => {
+  const roleAccountId = roleKeyPair(api).address
+  const providerId = await api.workers.findProviderIdByRoleAccount(roleAccountId)
+
+  if (providerId === null) {
+    throw new Error('Dev storage provider not found on chain!')
+  }
+
+  console.log(`
+  Chain is setup with Dev storage provider:
+    providerId = ${providerId}
+    roleAccountId = ${roleAccountId}
+    roleKey = ${ROLE_ACCOUNT_URI}
+  `)
+
+  return providerId
+}
+
+// Setup Alice account on a development chain as
+// a member, storage lead, and a storage provider using a deterministic
+// development key for the role account
+const init = async (api) => {
+  try {
+    await check(api)
+    return
+  } catch (err) {
+    // We didn't find a storage provider with expected role account
+  }
+
+  const alice = aliceKeyPair(api).address
+  const roleAccount = roleKeyPair(api).address
+
+  debug(`Ensuring Alice is sudo`)
+
+  // make sure alice is sudo - indirectly checking this is a dev chain
+  const sudo = await api.identities.getSudoAccount()
+
+  if (!sudo.eq(alice)) {
+    throw new Error('Setup requires Alice to be sudo. Are you sure you are running a devchain?')
+  }
+
+  console.log('Running setup')
+
+  // set localhost colossus as discovery provider
+  // assuming the pioneer dev server is running on port 3000, we should run
+  // the storage dev server on a different port than the default for colossus,
+  // which is also 3000
+  debug('Setting Local development node as bootstrap endpoint')
+  await api.discovery.setBootstrapEndpoints(alice, [`http://localhost:${developmentPort()}/`])
+
+  debug('Transferring tokens to storage role account')
+  // Give role account some tokens to work with
+  api.balances.transfer(alice, roleAccount, 100000)
+
+  debug('Ensuring Alice is a member..')
+  let aliceMemberId = await api.identities.firstMemberIdOf(alice)
+
+  if (aliceMemberId === undefined) {
+    debug('Registering Alice as member..')
+    aliceMemberId = await api.identities.registerMember(alice, {
+      handle: 'alice'
+    })
+  } else {
+    debug('Alice is already a member')
+  }
+
+  // Make alice the storage lead
+  debug('Making Alice the storage Lead')
+  const leadOpeningId = await api.workers.dev_addStorageLeadOpening()
+  const leadApplicationId = await api.workers.dev_applyOnOpening(leadOpeningId, aliceMemberId, alice, alice)
+  api.workers.dev_beginLeadOpeningReview(leadOpeningId)
+  await api.workers.dev_fillLeadOpening(leadOpeningId, leadApplicationId)
+
+  const leadAccount = await api.workers.getLeadRoleAccount()
+  if (!leadAccount.eq(alice)) {
+    throw new Error('Setting alice as lead failed')
+  }
+
+  // Create a storage opening, apply, start review, and fill opening
+  debug(`Making ${ROLE_ACCOUNT_URI} account a storage provider`)
+
+  const openingId = await api.workers.dev_addStorageOpening()
+  debug(`created new storage opening: ${openingId}`)
+
+  const applicationId = await api.workers.dev_applyOnOpening(openingId, aliceMemberId, alice, roleAccount)
+  debug(`applied with application id: ${applicationId}`)
+
+  api.workers.dev_beginStorageOpeningReview(openingId)
+
+  debug(`filling storage opening`)
+  const providerId = await api.workers.dev_fillStorageOpening(openingId, applicationId)
+
+  debug(`Assigned storage provider id: ${providerId}`)
+
+  return check(api)
+}
+
+module.exports = {
+  init,
+  check,
+  aliceKeyPair,
+  roleKeyPair,
+  developmentPort
+}

+ 3 - 2
storage-node/packages/cli/package.json

@@ -1,5 +1,6 @@
 {
   "name": "@joystream/storage-cli",
+  "private": true,
   "version": "0.1.0",
   "description": "Joystream tool for uploading and downloading files to the network",
   "author": "Joystream",
@@ -30,7 +31,7 @@
     "lint": "eslint 'paths/**/*.js' 'lib/**/*.js'"
   },
   "bin": {
-    "joystream": "bin/cli.js"
+    "storage-cli": "bin/cli.js"
   },
   "devDependencies": {
     "chai": "^4.2.0",
@@ -39,7 +40,7 @@
     "temp": "^0.9.0"
   },
   "dependencies": {
-    "@joystream/runtime-api": "^0.1.0",
+    "@joystream/storage-runtime-api": "^0.1.0",
     "chalk": "^2.4.2",
     "lodash": "^4.17.11",
     "meow": "^5.0.0",

+ 36 - 42
storage-node/packages/colossus/README.md

@@ -3,59 +3,53 @@
 Development
 -----------
 
-Run a development server:
+Run a development server (an ipfs node and development chain should be running on the local machine)
 
 ```bash
-$ yarn run dev --config myconfig.json
+$ yarn colossus --dev
 ```
 
-Command-Line
-------------
+This expects the chain to be configured with certain development accounts.
+The setup can be done by running the dev-init command of the storage-cli:
 
-Running a storage server is (almost) as easy as running the bundled `colossus`
-executable:
-
-```bash
-$ colossus --storage=/path/to/storage/directory
+```sh
+yarn storage-cli dev-init
 ```
 
-Run with `--help` to see a list of available CLI options.
-
-You need to stake as a storage provider to run a storage node.
-
-Configuration
--------------
-
-Most common configuration options are available as command-line options
-for the CLI.
 
-However, some advanced configuration options are only possible to set
-via the configuration file.
-
-* `filter` is a hash of upload filtering options.
-  * `max_size` sets the maximum permissible file upload size. If unset,
-    this defaults to 100 MiB.
-  * `mime` is a hash of...
-    * `accept` is an Array of mime types that are acceptable for uploads,
-      such as `text/plain`, etc. Mime types can also be specified for
-      wildcard matching, such as `video/*`.
-    * `reject` is an Array of mime types that are unacceptable for uploads.
-
-Upload Filtering
-----------------
+Command-Line
+------------
 
-The upload filtering logic first tests whether any of the `accept` mime types
-are matched. If none are matched, the upload is rejected. If any is matched,
-then the upload is still rejected if any of the `reject` mime types are
-matched.
+```sh
+$ yarn colossus --help
+```
 
-This allows inclusive and exclusive filtering.
+```
+  Colossus - Joystream Storage Node
+
+  Usage:
+    $ colossus [command] [arguments]
+
+  Commands:
+    server        Runs a production server instance. (discovery and storage services)
+                  This is the default command if not specified.
+    discovery     Run the discovery service only.
+
+  Arguments (required for server. Ignored if running server with --dev option):
+    --provider-id ID, -i ID     StorageProviderId assigned to you in working group.
+    --key-file FILE             JSON key export file to use as the storage provider (role account).
+    --public-url=URL, -u URL    API Public URL to announce.
+
+  Arguments (optional):
+    --dev                   Runs server with developer settings.
+    --passphrase            Optional passphrase to use to decrypt the key-file.
+    --port=PORT, -p PORT    Port number to listen on, defaults to 3000.
+    --ws-provider WS_URL    Joystream-node websocket provider, defaults to ws://localhost:9944
+```
 
-* `{ accept: ['text/plain', 'text/html'] }` accepts *only* the two given mime types.
-* `{ accept: ['text/*'], reject: ['text/plain'] }` accepts any `text/*` that is not
-  `text/plain`.
+To run a storage server in production you will need to enroll on the network first to
+obtain your provider-id and role account.
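Once enrolled, a production invocation might look like the following (a sketch; the provider id, key file, passphrase, and public URL are placeholder values):

```sh
yarn colossus server \
  --provider-id 2 \
  --key-file ./role-account.json \
  --passphrase my-passphrase \
  --public-url https://storage.example.com/
```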
 
-More advanced filtering is currently not available.
 
 API Packages
 ------------
@@ -78,7 +72,7 @@ For reusability across API versions, it's best to keep files in the `paths`
 subfolder very thin, and instead inject implementations via the `dependencies`
 configuration value of `express-openapi`.
 
-These implementations line to the `./lib` subfolder. Adjust `server.js` as
+These implementations live in the `./lib` subfolder. Adjust `app.js` as
 needed to make them available to API packages.
 
 Streaming Notes

+ 166 - 260
storage-node/packages/colossus/bin/cli.js

@@ -1,299 +1,226 @@
 #!/usr/bin/env node
-'use strict';
+/* eslint-disable */
+
+'use strict'
 
 // Node requires
-const path = require('path');
+const path = require('path')
 
 // npm requires
-const meow = require('meow');
-const configstore = require('configstore');
-const chalk = require('chalk');
-const figlet = require('figlet');
-const _ = require('lodash');
+const meow = require('meow')
+const chalk = require('chalk')
+const figlet = require('figlet')
+const _ = require('lodash')
 
-const debug = require('debug')('joystream:cli');
+const debug = require('debug')('joystream:colossus')
 
 // Project root
-const PROJECT_ROOT = path.resolve(__dirname, '..');
+const PROJECT_ROOT = path.resolve(__dirname, '..')
 
-// Configuration (default)
-const pkg = require(path.resolve(PROJECT_ROOT, 'package.json'));
-const default_config = new configstore(pkg.name);
+// Number of milliseconds to wait between synchronization runs.
+const SYNC_PERIOD_MS = 300000 // 5min
 
 // Parse CLI
 const FLAG_DEFINITIONS = {
   port: {
-    type: 'integer',
+    type: 'number',
     alias: 'p',
-    _default: 3000,
-  },
-  'syncPeriod': {
-    type: 'integer',
-    _default: 120000,
+    default: 3000
   },
   keyFile: {
     type: 'string',
+    isRequired: (flags, input) => {
+      return !flags.dev
+    }
   },
-  config: {
-    type: 'string',
-    alias: 'c',
-  },
-  'publicUrl': {
+  publicUrl: {
     type: 'string',
-    alias: 'u'
+    alias: 'u',
+    isRequired: (flags, input) => {
+      return !flags.dev
+    }
   },
-  'passphrase': {
+  passphrase: {
     type: 'string'
   },
-  'wsProvider': {
+  wsProvider: {
     type: 'string',
-    _default: 'ws://localhost:9944'
+    default: 'ws://localhost:9944'
+  },
+  providerId: {
+    type: 'number',
+    alias: 'i',
+    isRequired: (flags, input) => {
+      return !flags.dev
+    }
   }
-};
+}
 
 const cli = meow(`
   Usage:
-    $ colossus [command] [options]
+    $ colossus [command] [arguments]
 
   Commands:
-    server [default]  Run a server instance with the given configuration.
-    signup            Sign up as a storage provider. Requires that you provide
-                      a JSON account file of an account that is a member, and has
-                      sufficient balance for staking as a storage provider.
-                      Writes a new account file that should be used to run the
-                      storage node.
-    down              Signal to network that all services are down. Running
-                      the server will signal that services as online again.
-    discovery         Run the discovery service only.
-
-  Options:
-    --config=PATH, -c PATH  Configuration file path. Defaults to
-                            "${default_config.path}".
+    server        Runs a production server instance. (discovery and storage services)
+                  This is the default command if not specified.
+    discovery     Run the discovery service only.
+
+  Arguments (required for server; ignored when running server with --dev):
+    --provider-id ID, -i ID     StorageProviderId assigned to you in working group.
+    --key-file FILE             JSON key export file to use as the storage provider (role account).
+    --public-url=URL, -u URL    API Public URL to announce.
+
+  Arguments (optional):
+    --dev                   Runs server with developer settings.
+    --passphrase            Optional passphrase to use to decrypt the key-file.
     --port=PORT, -p PORT    Port number to listen on, defaults to 3000.
-    --sync-period           Number of milliseconds to wait between synchronization
-                            runs. Defaults to 30,000 (30s).
-    --key-file              JSON key export file to use as the storage provider.
-    --passphrase            Optional passphrase to use to decrypt the key-file (if its encrypted).
-    --public-url            API Public URL to announce. No URL will be announced if not specified.
-    --ws-provider           Joystream Node websocket provider url, eg: "ws://127.0.0.1:9944"
+    --ws-provider WS_URL    Joystream-node websocket provider, defaults to ws://localhost:9944
   `,
-  { flags: FLAG_DEFINITIONS });
-
-// Create configuration
-function create_config(pkgname, flags)
-{
-  // Create defaults from flag definitions
-  const defaults = {};
-  for (var key in FLAG_DEFINITIONS) {
-    const defs = FLAG_DEFINITIONS[key];
-    if (defs._default) {
-      defaults[key] = defs._default;
-    }
-  }
-
-  // Provide flags as defaults. Anything stored in the config overrides.
-  var config = new configstore(pkgname, defaults, { configPath: flags.config });
-
-  // But we want the flags to also override what's stored in the config, so
-  // set them all.
-  for (var key in flags) {
-    // Skip aliases and self-referential config flag
-    if (key.length == 1 || key === 'config') continue;
-    // Skip sensitive flags
-    if (key == 'passphrase') continue;
-    // Skip unset flags
-    if (!flags[key]) continue;
-    // Otherwise set.
-    config.set(key, flags[key]);
-  }
-
-  debug('Configuration at', config.path, config.all);
-  return config;
-}
+  { flags: FLAG_DEFINITIONS })
 
 // All-important banner!
-function banner()
-{
-  console.log(chalk.blue(figlet.textSync('joystream', 'Speed')));
+function banner () {
+  console.log(chalk.blue(figlet.textSync('joystream', 'Speed')))
 }
 
 function start_express_app(app, port) {
-  const http = require('http');
-  const server = http.createServer(app);
+  const http = require('http')
+  const server = http.createServer(app)
 
   return new Promise((resolve, reject) => {
-    server.on('error', reject);
+    server.on('error', reject)
     server.on('close', (...args) => {
-      console.log('Server closed, shutting down...');
-      resolve(...args);
-    });
+      console.log('Server closed, shutting down...')
+      resolve(...args)
+    })
     server.on('listening', () => {
-      console.log('API server started.', server.address());
-    });
-    server.listen(port, '::');
-    console.log('Starting API server...');
-  });
+      console.log('API server started.', server.address())
+    })
+    server.listen(port, '::')
+    console.log('Starting API server...')
+  })
 }
+
 // Start app
-function start_all_services(store, api, config)
-{
-  const app = require('../lib/app')(PROJECT_ROOT, store, api, config);
-  const port = config.get('port');
-  return start_express_app(app, port);
+function start_all_services ({ store, api, port }) {
+  const app = require('../lib/app')(PROJECT_ROOT, store, api) // reduce flags to only needed values
+  return start_express_app(app, port)
 }
 
-// Start discovery service app
-function start_discovery_service(api, config)
-{
-  const app = require('../lib/discovery')(PROJECT_ROOT, api, config);
-  const port = config.get('port');
-  return start_express_app(app, port);
+// Start discovery service app only
+function start_discovery_service ({ api, port }) {
+  const app = require('../lib/discovery')(PROJECT_ROOT, api) // reduce flags to only needed values
+  return start_express_app(app, port)
 }
 
 // Get an initialized storage instance
-function get_storage(runtime_api, config)
-{
+function get_storage (runtime_api) {
   // TODO at some point, we can figure out what backend-specific connection
   // options make sense. For now, just don't use any configuration.
-  const { Storage } = require('@joystream/storage');
+  const { Storage } = require('@joystream/storage-node-backend')
 
   const options = {
     resolve_content_id: async (content_id) => {
       // Resolve via API
-      const obj = await runtime_api.assets.getDataObject(content_id);
+      const obj = await runtime_api.assets.getDataObject(content_id)
       if (!obj || obj.isNone) {
-        return;
+        return
       }
+      // if obj.liaison_judgement !== Accepted .. throw ?
+      return obj.unwrap().ipfs_content_id.toString()
+    }
+  }
 
-      return obj.unwrap().ipfs_content_id.toString();
-    },
-  };
-
-  return Storage.create(options);
+  return Storage.create(options)
 }
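The `resolve_content_id` hook handed to `Storage.create` above lets the backend map a chain content id to an IPFS hash on demand, returning `undefined` when no data object exists. A sketch of that lookup with a stubbed chain state (the `Map`, ids, and hash are made-up stand-ins, not real runtime data):

```javascript
// Stubbed runtime lookup mirroring the resolve_content_id hook: return
// the IPFS content id for a known on-chain object, undefined otherwise.
const fakeChainObjects = new Map([
  ['0x01', { ipfs_content_id: 'QmHashOne' }]
])

async function resolveContentId (contentId) {
  const obj = fakeChainObjects.get(contentId)
  if (!obj) {
    return // unresolvable: no such data object on chain
  }
  return obj.ipfs_content_id
}
```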
 
-async function run_signup(account_file, provider_url)
-{
-  if (!account_file) {
-    console.log('Cannot proceed without keyfile');
-    return
-  }
+async function init_api_production ({ wsProvider, providerId, keyFile, passphrase }) {
+  // Load key information
+  const { RuntimeApi } = require('@joystream/storage-runtime-api')
 
-  const { RuntimeApi } = require('@joystream/runtime-api');
-  const api = await RuntimeApi.create({account_file, canPromptForPassphrase: true, provider_url});
+  if (!keyFile) {
+    throw new Error('Must specify a --key-file argument for running a storage node.')
+  }
 
-  if (!api.identities.key) {
-    console.log('Cannot proceed without a member account');
-    return
+  if (providerId === undefined) {
+    throw new Error('Must specify a --provider-id argument for running a storage node')
   }
 
-  // Check there is an opening
-  let availableSlots = await api.roles.availableSlotsForRole(api.roles.ROLE_STORAGE);
+  const api = await RuntimeApi.create({
+    account_file: keyFile,
+    passphrase,
+    provider_url: wsProvider,
+    storageProviderId: providerId
+  })
 
-  if (availableSlots == 0) {
-    console.log(`
-      There are no open storage provider slots available at this time.
-      Please try again later.
-    `);
-    return;
-  } else {
-    console.log(`There are still ${availableSlots} slots available, proceeding`);
+  if (!api.identities.key) {
+    throw new Error('Failed to unlock storage provider account')
   }
 
-  const member_address = api.identities.key.address;
-
-  // Check if account works
-  const min = await api.roles.requiredBalanceForRoleStaking(api.roles.ROLE_STORAGE);
-  console.log(`Account needs to be a member and have a minimum balance of ${min.toString()}`);
-  const check = await api.roles.checkAccountForStaking(member_address);
-  if (check) {
-    console.log('Account is working for staking, proceeding.');
+  if (!await api.workers.isRoleAccountOfStorageProvider(api.storageProviderId, api.identities.key.address)) {
+    throw new Error('storage provider role account and storageProviderId are not associated with a worker')
   }
 
-  // Create a role key
-  const role_key = await api.identities.createRoleKey(member_address);
-  const role_address = role_key.address;
-  console.log('Generated', role_address, '- this is going to be exported to a JSON file.\n',
-    ' You can provide an empty passphrase to make starting the server easier,\n',
-    ' but you must keep the file very safe, then.');
-  const filename = await api.identities.writeKeyPairExport(role_address);
-  console.log('Identity stored in', filename);
-
-  // Ok, transfer for staking.
-  await api.roles.transferForStaking(member_address, role_address, api.roles.ROLE_STORAGE);
-  console.log('Funds transferred.');
-
-  // Now apply for the role
-  await api.roles.applyForRole(role_address, api.roles.ROLE_STORAGE, member_address);
-  console.log('Role application sent.\nNow visit Roles > My Requests in the app.');
+  return api
 }
 
-async function wait_for_role(config)
-{
+async function init_api_development () {
   // Load key information
-  const { RuntimeApi } = require('@joystream/runtime-api');
-  const keyFile = config.get('keyFile');
-  if (!keyFile) {
-    throw new Error("Must specify a key file for running a storage node! Sign up for the role; see `colussus --help' for details.");
-  }
-  const wsProvider = config.get('wsProvider');
+  const { RuntimeApi } = require('@joystream/storage-runtime-api')
+
+  const wsProvider = 'ws://localhost:9944'
 
   const api = await RuntimeApi.create({
-    account_file: keyFile,
-    passphrase: cli.flags.passphrase,
-    provider_url: wsProvider,
-  });
+    provider_url: wsProvider
+  })
 
-  if (!api.identities.key) {
-    throw new Error('Failed to unlock storage provider account');
-  }
+  const dev = require('../../cli/bin/dev')
+
+  api.identities.useKeyPair(dev.roleKeyPair(api))
 
-  // Wait for the account role to be finalized
-  console.log('Waiting for the account to be staked as a storage provider role...');
-  const result = await api.roles.waitForRole(api.identities.key.address, api.roles.ROLE_STORAGE);
-  return [result, api];
+  api.storageProviderId = await dev.check(api)
+
+  return api
 }
 
-function get_service_information(config) {
+function get_service_information (publicUrl) {
   // For now assume we run all services on the same endpoint
   return({
     asset: {
       version: 1, // spec version
-      endpoint: config.get('publicUrl')
+      endpoint: publicUrl
     },
     discover: {
       version: 1, // spec version
-      endpoint: config.get('publicUrl')
+      endpoint: publicUrl
     }
   })
 }
 
-async function announce_public_url(api, config) {
+async function announce_public_url (api, publicUrl) {
   // re-announce in future
   const reannounce = function (timeoutMs) {
-    setTimeout(announce_public_url, timeoutMs, api, config);
+    setTimeout(announce_public_url, timeoutMs, api, publicUrl)
   }
 
   debug('announcing public url')
-  const { publish } = require('@joystream/discovery')
-
-  const accountId = api.identities.key.address
+  const { publish } = require('@joystream/service-discovery')
 
   try {
-    const serviceInformation = get_service_information(config)
+    const serviceInformation = get_service_information(publicUrl)
 
-    let keyId = await publish.publish(serviceInformation);
+    let keyId = await publish.publish(serviceInformation)
 
-    const expiresInBlocks = 600; // ~ 1 hour (6s block interval)
-    await api.discovery.setAccountInfo(accountId, keyId, expiresInBlocks);
+    await api.discovery.setAccountInfo(keyId)
 
     debug('publishing complete, scheduling next update')
 
 // >> sometimes after tx is finalized.. we are not reaching here!
 
-    // Reannounce before expiery
-    reannounce(50 * 60 * 1000); // in 50 minutes
-
+    // Reannounce before expiry. Here we are concerned primarily
+    // with keeping the account information refreshed and 'available' in
+    // the IPFS network. Our record on chain is valid for 24hr
+    reannounce(50 * 60 * 1000) // in 50 minutes
   } catch (err) {
     debug(`announcing public url failed: ${err.stack}`)
 
@@ -303,95 +230,74 @@ async function announce_public_url(api, config) {
   }
 }
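`announce_public_url` above is a self-rescheduling timer: each successful run uses `setTimeout` to arrange the next announcement before the on-chain record expires, and a failure can schedule a faster retry. The bare pattern, with a `runs` bound added so this sketch terminates (the real function loops for the lifetime of the process):

```javascript
// Self-rescheduling timer, as used by announce_public_url: each run
// schedules the next one; `runs` bounds the loop so the sketch terminates.
function announce (publishFn, intervalMs, runs) {
  if (runs <= 0) return
  try {
    publishFn()
  } catch (err) {
    // a real implementation would reschedule sooner here to retry
  }
  setTimeout(announce, intervalMs, publishFn, intervalMs, runs - 1)
}

let published = 0
announce(() => { published++ }, 10, 3) // publishes three times, then stops
```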
 
-function go_offline(api) {
-  return api.discovery.unsetAccountInfo(api.identities.key.address)
+function go_offline (api) {
+  return api.discovery.unsetAccountInfo()
 }
 
 // Simple CLI commands
-var command = cli.input[0];
+var command = cli.input[0]
 if (!command) {
-  command = 'server';
+  command = 'server'
+}
+
+async function start_colossus ({ api, publicUrl, port, flags }) {
+  // TODO: check valid url, and valid port number
+  const store = get_storage(api)
+  banner()
+  const { start_syncing } = require('../lib/sync')
+  start_syncing(api, { syncPeriod: SYNC_PERIOD_MS }, store)
+  announce_public_url(api, publicUrl)
+  return start_all_services({ store, api, port, flags }) // don't pass all flags, only required values
 }
 
 const commands = {
   'server': async () => {
-    const cfg = create_config(pkg.name, cli.flags);
-
-    // Load key information
-    const values = await wait_for_role(cfg);
-    const result = values[0]
-    const api = values[1];
-    if (!result) {
-      throw new Error(`Not staked as storage role.`);
-    }
-    console.log('Staked, proceeding.');
-
-    // Make sure a public URL is configured
-    if (!cfg.get('publicUrl')) {
-      throw new Error('publicUrl not configured')
+    let publicUrl, port, api
+
+    if (cli.flags.dev) {
+      const dev = require('../../cli/bin/dev')
+      api = await init_api_development()
+      port = dev.developmentPort()
+      publicUrl = `http://localhost:${port}/`
+    } else {
+      api = await init_api_production(cli.flags)
+      publicUrl = cli.flags.publicUrl
+      port = cli.flags.port
     }
 
-    // Continue with server setup
-    const store = get_storage(api, cfg);
-    banner();
-
-    const { start_syncing } = require('../lib/sync');
-    start_syncing(api, cfg, store);
-
-    announce_public_url(api, cfg);
-    await start_all_services(store, api, cfg);
-  },
-  'signup': async (account_file) => {
-    const cfg = create_config(pkg.name, cli.flags);
-    await run_signup(account_file, cfg.get('wsProvider'));
-  },
-  'down': async () => {
-    const cfg = create_config(pkg.name, cli.flags);
-
-    const values = await wait_for_role(cfg);
-    const result = values[0]
-    const api = values[1];
-    if (!result) {
-      throw new Error(`Not staked as storage role.`);
-    }
-
-    await go_offline(api)
+    return start_colossus({ api, publicUrl, port })
   },
   'discovery': async () => {
-    debug("Starting Joystream Discovery Service")
-    const { RuntimeApi } = require('@joystream/runtime-api')
-    const cfg = create_config(pkg.name, cli.flags)
-    const wsProvider = cfg.get('wsProvider');
-    const api = await RuntimeApi.create({ provider_url: wsProvider });
-    await start_discovery_service(api, cfg)
+    debug('Starting Joystream Discovery Service')
+    const { RuntimeApi } = require('@joystream/storage-runtime-api')
+    const wsProvider = cli.flags.wsProvider
+    const api = await RuntimeApi.create({ provider_url: wsProvider })
+    const port = cli.flags.port
+    await start_discovery_service({ api, port })
   }
-};
-
+}
 
-async function main()
-{
+async function main () {
   // Simple CLI commands
-  var command = cli.input[0];
+  var command = cli.input[0]
   if (!command) {
-    command = 'server';
+    command = 'server'
   }
 
   if (commands.hasOwnProperty(command)) {
     // Command recognized
-    const args = _.clone(cli.input).slice(1);
-    await commands[command](...args);
-  }
-  else {
-    throw new Error(`Command "${command}" not recognized, aborting!`);
+    const args = _.clone(cli.input).slice(1)
+    await commands[command](...args)
+  } else {
+    throw new Error(`Command '${command}' not recognized, aborting!`)
   }
 }
 
 main()
   .then(() => {
-    console.log('Process exiting gracefully.');
-    process.exit(0);
+    process.exit(0)
   })
   .catch((err) => {
-    console.error(chalk.red(err.stack));
-    process.exit(-1);
-  });
+    console.error(chalk.red(err.stack))
+    process.exit(-1)
+  })

+ 2 - 4
storage-node/packages/colossus/lib/app.js

@@ -32,11 +32,10 @@ const yaml = require('js-yaml');
 // Project requires
 const validateResponses = require('./middleware/validate_responses');
 const fileUploads = require('./middleware/file_uploads');
-const pagination = require('@joystream/util/pagination');
-const storage = require('@joystream/storage');
+const pagination = require('@joystream/storage-utils/pagination');
 
 // Configure app
-function create_app(project_root, storage, runtime, config)
+function create_app(project_root, storage, runtime)
 {
   const app = express();
   app.use(cors());
@@ -60,7 +59,6 @@ function create_app(project_root, storage, runtime, config)
       'multipart/form-data': fileUploads
     },
     dependencies: {
-      config: config,
       storage: storage,
       runtime: runtime,
     },

+ 1 - 2
storage-node/packages/colossus/lib/discovery.js

@@ -33,7 +33,7 @@ const path = require('path');
 const validateResponses = require('./middleware/validate_responses');
 
 // Configure app
-function create_app(project_root, runtime, config)
+function create_app(project_root, runtime)
 {
   const app = express();
   app.use(cors());
@@ -56,7 +56,6 @@ function create_app(project_root, runtime, config)
     },
     docsPath: '/swagger.json',
     dependencies: {
-      config: config,
       runtime: runtime,
     },
   });

+ 23 - 17
storage-node/packages/colossus/lib/sync.js

@@ -20,20 +20,22 @@
 
 const debug = require('debug')('joystream:sync');
 
-async function sync_callback(api, config, storage)
-{
-  debug('Starting sync run...');
-
+async function sync_callback(api, storage) {
   // The first step is to gather all data objects from chain.
   // TODO: in future, limit to a configured tranche
   // FIXME this isn't actually on chain yet, so we'll fake it.
   const knownContentIds = await api.assets.getKnownContentIds() || [];
 
-  const role_addr = api.identities.key.address;
+  const role_addr = api.identities.key.address
+  const providerId = api.storageProviderId
 
   // Iterate over all sync objects, and ensure they're synced.
   const allChecks = knownContentIds.map(async (content_id) => {
-    let { relationship, relationshipId } = await api.assets.getStorageRelationshipAndId(role_addr, content_id);
+    let { relationship, relationshipId } = await api.assets.getStorageRelationshipAndId(providerId, content_id);
+
+    // get the data object
+    // make sure the data object was Accepted by the liaison,
+    // don't just blindly attempt to fetch them
 
     let fileLocal;
     try {
@@ -51,8 +53,11 @@ async function sync_callback(api, config, storage)
       try {
         await storage.synchronize(content_id);
       } catch (err) {
-        debug(err.message)
+        // duplicate logging
+        // debug(err.message)
+        return
       }
+      // why are we returning, if we synced the file
       return;
     }
 
@@ -60,8 +65,8 @@ async function sync_callback(api, config, storage)
       // create relationship
       debug(`Creating new storage relationship for ${content_id.encode()}`);
       try {
-        relationshipId = await api.assets.createAndReturnStorageRelationship(role_addr, content_id);
-        await api.assets.toggleStorageRelationshipReady(role_addr, relationshipId, true);
+        relationshipId = await api.assets.createAndReturnStorageRelationship(role_addr, providerId, content_id);
+        await api.assets.toggleStorageRelationshipReady(role_addr, providerId, relationshipId, true);
       } catch (err) {
         debug(`Error creating new storage relationship ${content_id.encode()}: ${err.stack}`);
         return;
@@ -70,7 +75,7 @@ async function sync_callback(api, config, storage)
       debug(`Updating storage relationship to ready for ${content_id.encode()}`);
       // update to ready. (Why would there be a relationship set to ready: false?)
       try {
-        await api.assets.toggleStorageRelationshipReady(role_addr, relationshipId, true);
+        await api.assets.toggleStorageRelationshipReady(role_addr, providerId, relationshipId, true);
       } catch(err) {
         debug(`Error setting relationship ready ${content_id.encode()}: ${err.stack}`);
       }
@@ -81,26 +86,27 @@ async function sync_callback(api, config, storage)
   });
 
 
-  await Promise.all(allChecks);
-  debug('sync run complete');
+  return Promise.all(allChecks);
 }
 
 
-async function sync_periodic(api, config, storage)
+async function sync_periodic(api, flags, storage)
 {
   try {
-    await sync_callback(api, config, storage);
+    debug('Starting sync run...')
+    await sync_callback(api, storage)
+    debug('sync run complete')
   } catch (err) {
     debug(`Error in sync_periodic ${err.stack}`);
   }
   // always try again
-  setTimeout(sync_periodic, config.get('syncPeriod'), api, config, storage);
+  setTimeout(sync_periodic, flags.syncPeriod, api, flags, storage);
 }
 
 
-function start_syncing(api, config, storage)
+function start_syncing(api, flags, storage)
 {
-  sync_periodic(api, config, storage);
+  sync_periodic(api, flags, storage);
 }
 
 module.exports = {

+ 6 - 6
storage-node/packages/colossus/package.json

@@ -1,6 +1,7 @@
 {
   "name": "@joystream/colossus",
-  "version": "0.1.0",
+  "private": true,
+  "version": "0.2.0",
   "description": "Colossus - Joystream Storage Node",
   "author": "Joystream",
   "homepage": "https://github.com/Joystream/joystream",
@@ -49,18 +50,17 @@
     "temp": "^0.9.0"
   },
   "dependencies": {
-    "@joystream/runtime-api": "^0.1.0",
-    "@joystream/storage": "^0.1.0",
-    "@joystream/util": "^0.1.0",
+    "@joystream/storage-runtime-api": "^0.1.0",
+    "@joystream/storage-node-backend": "^0.1.0",
+    "@joystream/storage-utils": "^0.1.0",
     "body-parser": "^1.19.0",
     "chalk": "^2.4.2",
-    "configstore": "^4.0.0",
     "cors": "^2.8.5",
     "express-openapi": "^4.6.1",
     "figlet": "^1.2.1",
     "js-yaml": "^3.13.1",
     "lodash": "^4.17.11",
-    "meow": "^5.0.0",
+    "meow": "^7.0.1",
     "multer": "^1.4.1",
     "si-prefix": "^0.2.0"
   }

+ 13 - 15
storage-node/packages/colossus/paths/asset/v0/{id}.js

@@ -20,13 +20,10 @@
 
 const path = require('path');
 
-const file_type = require('file-type');
-const mime_types = require('mime-types');
+const debug = require('debug')('joystream:colossus:api:asset');
 
-const debug = require('debug')('joystream:api:asset');
-
-const util_ranges = require('@joystream/util/ranges');
-const filter = require('@joystream/storage/filter');
+const util_ranges = require('@joystream/storage-utils/ranges');
+const filter = require('@joystream/storage-node-backend/filter');
 
 function error_handler(response, err, code)
 {
@@ -35,7 +32,7 @@ function error_handler(response, err, code)
 }
 
 
-module.exports = function(config, storage, runtime)
+module.exports = function(storage, runtime)
 {
   var doc = {
     // parameters for all operations in this path
@@ -83,15 +80,16 @@ module.exports = function(config, storage, runtime)
     // Put for uploads
     put: async function(req, res, _next)
     {
-      const id = req.params.id;
+      const id = req.params.id; // content id
 
       // First check if we're the liaison for the name, otherwise we can bail
       // out already.
       const role_addr = runtime.identities.key.address;
+      const providerId = runtime.storageProviderId;
       let dataObject;
       try {
         debug('calling checkLiaisonForDataObject')
-        dataObject = await runtime.assets.checkLiaisonForDataObject(role_addr, id);
+        dataObject = await runtime.assets.checkLiaisonForDataObject(providerId, id);
         debug('called checkLiaisonForDataObject')
       } catch (err) {
         error_handler(res, err, 403);
@@ -121,14 +119,14 @@ module.exports = function(config, storage, runtime)
             debug('Detected file info:', info);
 
             // Filter
-            const filter_result = filter(config, req.headers, info.mime_type);
+            const filter_result = filter({}, req.headers, info.mime_type);
             if (200 != filter_result.code) {
               debug('Rejecting content', filter_result.message);
               stream.end();
               res.status(filter_result.code).send({ message: filter_result.message });
 
               // Reject the content
-              await runtime.assets.rejectContent(role_addr, id);
+              await runtime.assets.rejectContent(role_addr, providerId, id);
               return;
             }
             debug('Content accepted.');
@@ -155,20 +153,20 @@ module.exports = function(config, storage, runtime)
           try {
             if (hash !== dataObject.ipfs_content_id.toString()) {
               debug('Rejecting content. IPFS hash does not match value in objectId');
-              await runtime.assets.rejectContent(role_addr, id);
+              await runtime.assets.rejectContent(role_addr, providerId, id);
               res.status(400).send({ message: "Uploaded content doesn't match IPFS hash" });
               return;
             }
 
             debug('accepting Content')
-            await runtime.assets.acceptContent(role_addr, id);
+            await runtime.assets.acceptContent(role_addr, providerId, id);
 
             debug('creating storage relationship for newly uploaded content')
             // Create storage relationship and flip it to ready.
-            const dosr_id = await runtime.assets.createAndReturnStorageRelationship(role_addr, id);
+            const dosr_id = await runtime.assets.createAndReturnStorageRelationship(role_addr, providerId, id);
 
             debug('toggling storage relationship for newly uploaded content')
-            await runtime.assets.toggleStorageRelationshipReady(role_addr, dosr_id, true);
+            await runtime.assets.toggleStorageRelationshipReady(role_addr, providerId, dosr_id, true);
 
             debug('Sending OK response.');
             res.status(200).send({ message: 'Asset uploaded.' });

+ 12 - 7
storage-node/packages/colossus/paths/discover/v0/{id}.js

@@ -1,10 +1,10 @@
-const { discover } = require('@joystream/discovery')
-const debug = require('debug')('joystream:api:discovery');
+const { discover } = require('@joystream/service-discovery')
+const debug = require('debug')('joystream:colossus:api:discovery');
 
 const MAX_CACHE_AGE = 30 * 60 * 1000;
 const USE_CACHE = true;
 
-module.exports = function(config, runtime)
+module.exports = function(runtime)
 {
   var doc = {
     // parameters for all operations in this path
@@ -15,7 +15,7 @@ module.exports = function(config, runtime)
         required: true,
        description: 'Actor accountId',
         schema: {
-          type: 'string',
+          type: 'string', // integer ?
         },
       },
     ],
@@ -23,7 +23,13 @@ module.exports = function(config, runtime)
     // Resolve Service Information
     get: async function(req, res)
     {
-        const id = req.params.id;
+        // parseInt never throws on bad input; check for NaN instead
+        const parsedId = parseInt(req.params.id, 10);
+        if (Number.isNaN(parsedId)) {
+          return res.status(400).end();
+        }
+
+        const id = parsedId;
         let cacheMaxAge = req.query.max_age;
 
         if (cacheMaxAge) {
@@ -47,10 +53,9 @@ module.exports = function(config, runtime)
           } else {
             res.status(200).send(info);
           }
-
         } catch (err) {
           debug(`${err}`);
-          res.status(400).end()
+          res.status(404).end()
         }
     }
   };

+ 0 - 68
storage-node/packages/discovery/IpfsResolver.js

@@ -1,68 +0,0 @@
-const IpfsClient = require('ipfs-http-client')
-const axios = require('axios')
-const { Resolver } = require('./Resolver')
-
-class IpfsResolver extends Resolver {
-    constructor({
-        host = 'localhost',
-        port,
-        mode = 'rpc', // rpc or gateway
-        protocol = 'http', // http or https
-        ipfs,
-        runtime
-    }) {
-        super({runtime})
-
-        if (ipfs) {
-            // use an existing ipfs client instance
-            this.ipfs = ipfs
-        } else if (mode == 'rpc') {
-            port = port || '5001'
-            this.ipfs = IpfsClient(host, port, { protocol })
-        } else if (mode === 'gateway') {
-            port = port || '8080'
-            this.gateway = this.constructUrl(protocol, host, port)
-        } else {
-            throw new Error('Invalid IPFS Resolver options')
-        }
-    }
-
-    async _resolveOverRpc(identity) {
-        const ipnsPath = `/ipns/${identity}/`
-
-        const ipfsName = await this.ipfs.name.resolve(ipnsPath, {
-            recursive: false, // there should only be one indirection to service info file
-            nocache: false,
-        })
-
-        const data = await this.ipfs.get(ipfsName)
-
-        // there should only be one file published under the resolved path
-        const content = data[0].content
-
-        return JSON.parse(content)
-    }
-
-    async _resolveOverGateway(identity) {
-        const url = `${this.gateway}/ipns/${identity}`
-
-        // expected JSON object response
-        const response = await axios.get(url)
-
-        return response.data
-    }
-
-    resolve(accountId) {
-        const identity = this.resolveIdentity(accountId)
-
-        if (this.ipfs) {
-            return this._resolveOverRpc(identity)
-        } else {
-            return this._resolveOverGateway(identity)
-        }
-    }
-}
-
-module.exports = {
-    IpfsResolver
-}

+ 0 - 28
storage-node/packages/discovery/JdsResolver.js

@@ -1,28 +0,0 @@
-const axios = require('axios')
-const { Resolver } = require('./Resolver')
-
-class JdsResolver extends Resolver {
-    constructor({
-        protocol = 'http', // http or https
-        host = 'localhost',
-        port,
-        runtime
-    }) {
-        super({runtime})
-
-        this.baseUrl = this.constructUrl(protocol, host, port)
-    }
-
-    async resolve(accountId) {
-        const url = `${this.baseUrl}/discover/v0/${accountId}`
-
-        // expected JSON object response
-        const response = await axios.get(url)
-
-        return response.data
-    }
-}
-
-module.exports = {
-    JdsResolver
-}

+ 11 - 21
storage-node/packages/discovery/README.md

@@ -1,29 +1,23 @@
 # Discovery
 
-The `@joystream/discovery` package provides an API for role services to publish
+The `@joystream/service-discovery` package provides an API for role services to publish
 discovery information about themselves, and for consumers to resolve this
 information.
 
 In the Joystream network, services are provided by having members stake for a
-role. The role is identified by a unique actor key. Resolving service information
-associated with the actor key is the main purpose of this module.
+role. The role is identified by a worker id. Resolving service information
+associated with the worker id is the main purpose of this module.
 
 This implementation is based on [IPNS](https://docs.ipfs.io/guides/concepts/ipns/)
 as well as runtime information.
 
 ## Discovery Workflow
 
-The discovery workflow provides an actor public key to the `discover()` function, which
+The discovery workflow provides a worker id to the `discover()` function, which
 will eventually return structured data.
 
-Clients can verify that the structured data has been signed by the identifying
-actor. This is normally done automatically, unless a `verify: false` option is
-passed to `discover()`. Then, a separate `verify()` call can be used for
-verification.
-
-Under the hood, `discover()` uses any known participating node in the discovery
-network. If no other nodes are known, the bootstrap nodes from the runtime are
-used.
+Under the hood, `discover()` uses the bootstrap nodes from the runtime in a
+browser environment, or the local IPFS node otherwise.
 
 There is a distinction in the discovery workflow:
 
@@ -31,8 +25,8 @@ There is a distinction in the discovery workflow:
   is performed to discover nodes.
 2. If run in a node.js process, instead:
   - A trusted (local) IPFS node must be configured.
-  - The chain is queried to resolve an actor key to an IPNS peer ID.
-  - The trusted IPFS node is used to resolve the IPNS peer ID to an IPFS
+  - The chain is queried to resolve a worker id to an IPNS id.
+  - The trusted IPFS node is used to resolve the IPNS id to an IPFS
     file.
   - The IPFS file is fetched; this contains the structured data.
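The node.js resolution path above is two lookups in sequence: the chain maps a worker id to an IPNS id, and the trusted IPFS node maps that IPNS id to the published service-info file. A minimal sketch with both lookups stubbed as maps (`discoverWorker` and all ids are illustrative, not the package's real API):

```javascript
// Two-step resolution sketch: the chain maps a worker id to an IPNS id,
// and a (stubbed) IPFS node maps the IPNS id to the published JSON.
const chainAccountInfo = new Map([[1, 'QmIpnsIdOfProvider']])
const ipfsRecords = new Map([
  ['QmIpnsIdOfProvider', { asset: { version: 1, endpoint: 'http://example.com/' } }]
])

async function discoverWorker (workerId) {
  const ipnsId = chainAccountInfo.get(workerId)
  if (!ipnsId) {
    // don't waste time trying to resolve if no identity was found
    throw new Error('no identity to resolve')
  }
  return ipfsRecords.get(ipnsId) // the published structured data
}
```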
 
@@ -45,11 +39,10 @@ The publishing workflow is a little more involved, and requires more interaction
 with the runtime and the trusted IPFS node.
 
 1. A service information file is created.
-1. The file is signed with the actor key (see below).
-1. The file is published on IPFS.
+1. The file is published on IPFS, using the IPNS self key of the local node.
 1. The IPNS name of the trusted IPFS node is updated to refer to the published
    file.
-1. The runtime mapping from the actor ID to the IPNS name is updated.
+1. The runtime mapping from the worker ID to the IPNS name is updated.
 
 ## Published Information
 
@@ -57,10 +50,7 @@ Any JSON data can theoretically be published with this system; however, the
 following structure is currently imposed:
 
 - The JSON must be an Object at the top-level, not an Array.
-- Each key must correspond to a service spec (below).
-
-The data is signed using the [@joystream/json-signing](../json-signing/README.md)
-package.
+- Each key must correspond to a [service spec](../../docs/json-signing/README.md).
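For illustration, the `asset` service entry used elsewhere in this package (see the discovery example) has roughly this shape; the endpoint value here is a placeholder:

```javascript
// Hypothetical service information object; the top level must be an
// Object (not an Array) and each key names a service spec.
const serviceInfo = {
  asset: {
    version: 1,                               // spec version
    endpoint: 'http://endpoint.example:3000'  // placeholder URL
  }
}

// What gets published is the JSON-serialized form, wrapped in an
// envelope that could later carry a signature (signing is not yet
// implemented).
const published = { serialized: JSON.stringify(serviceInfo) }
```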
 
 ## Service Info Specifications
 

+ 0 - 48
storage-node/packages/discovery/Resolver.js

@@ -1,48 +0,0 @@
-class Resolver {
-    constructor ({
-        runtime
-    }) {
-        this.runtime = runtime
-    }
-
-    constructUrl (protocol, host, port) {
-        port = port ? `:${port}` : ''
-        return `${protocol}:://${host}${port}`
-    }
-
-    async resolveServiceInformation(accountId) {
-        let isActor = await this.runtime.identities.isActor(accountId)
-
-        if (!isActor) {
-            throw new Error('Cannot discover non actor account service info')
-        }
-
-        const identity = await this.resolveIdentity(accountId)
-
-        if (identity == null) {
-            // dont waste time trying to resolve if no identity was found
-            throw new Error('no identity to resolve');
-        }
-
-        return this.resolve(accountId)
-    }
-
-    // lookup ipns identity from chain corresponding to accountId
-    // return null if no identity found or record is expired
-    async resolveIdentity(accountId) {
-        const info = await this.runtime.discovery.getAccountInfo(accountId)
-        return info ? info.identity.toString() : null
-    }
-}
-
-Resolver.Error = {};
-Resolver.Error.UnrecognizedProtocol = class UnrecognizedProtocol extends Error {
-    constructor(message) {
-        super(message);
-        this.name = 'UnrecognizedProtocol';
-    }
-}
-
-module.exports = {
-    Resolver
-}

+ 241 - 148
storage-node/packages/discovery/discover.js

@@ -1,182 +1,275 @@
 const axios = require('axios')
-const debug = require('debug')('discovery::discover')
-const stripEndingSlash = require('@joystream/util/stripEndingSlash')
+const debug = require('debug')('joystream:discovery:discover')
+const stripEndingSlash = require('@joystream/storage-utils/stripEndingSlash')
 
 const ipfs = require('ipfs-http-client')('localhost', '5001', { protocol: 'http' })
-
-function inBrowser() {
-    return typeof window !== 'undefined'
+const BN = require('bn.js')
+const { newExternallyControlledPromise } = require('@joystream/storage-utils/externalPromise')
+
+/**
+ * Determines if code is running in a browser by testing for the global window object
+ */
+function inBrowser () {
+  return typeof window !== 'undefined'
 }
 
-var activeDiscoveries = {};
-var accountInfoCache = {};
-const CACHE_TTL = 60 * 60 * 1000;
-
-async function getIpnsIdentity (actorAccountId, runtimeApi) {
-    // lookup ipns identity from chain corresponding to actorAccountId
-    const info = await runtimeApi.discovery.getAccountInfo(actorAccountId)
-
-    if (info == null) {
-        // no identity found on chain for account
-        return null
-    } else {
-        return info.identity.toString()
-    }
+/**
+ * Map storage-provider id to a Promise of a discovery result. The purpose
+ * is to avoid concurrent active discoveries for the same provider.
+ */
+var activeDiscoveries = {}
+
+/**
+ * Cache of past discovery lookup results, keyed by storage provider id.
+ * Each entry holds the resolved value and the time it was last updated.
+ */
+var accountInfoCache = {}
+
+/**
+ * After what period of time a cached record is considered stale, and would
+ * trigger a re-discovery, but only if a query is made for the same provider.
+ */
+const CACHE_TTL = 60 * 60 * 1000
+
+/**
+ * Queries the ipns id (service key) of the storage provider from the blockchain.
+ * If the storage provider is not registered it will return null.
+ * @param {number | BN | u64} storageProviderId - the provider id to lookup
+ * @param { RuntimeApi } runtimeApi - api instance to query the chain
+ * @returns { Promise<string | null> } - ipns multiformat address
+ */
+async function getIpnsIdentity (storageProviderId, runtimeApi) {
+  storageProviderId = new BN(storageProviderId)
+  // lookup ipns identity from chain corresponding to storageProviderId
+  const info = await runtimeApi.discovery.getAccountInfo(storageProviderId)
+
+  if (info == null) {
+    // no identity found on chain for account
+    return null
+  } else {
+    return info.identity.toString()
+  }
 }
 
-async function discover_over_ipfs_http_gateway(actorAccountId, runtimeApi, gateway) {
-    let isActor = await runtimeApi.identities.isActor(actorAccountId)
+/**
+ * Resolves provider id to its service information.
+ * Will use an IPFS HTTP gateway. If the caller doesn't provide a url, the default
+ * gateway on the local ipfs node will be used.
+ * If the storage provider is not registered it will throw an error
+ * @param {number | BN | u64} storageProviderId - the provider id to lookup
+ * @param {RuntimeApi} runtimeApi - api instance to query the chain
+ * @param {string} gateway - optional ipfs http gateway url to perform ipfs queries
+ * @returns { Promise<object> } - the published service information
+ */
+async function discover_over_ipfs_http_gateway (
+  storageProviderId, runtimeApi, gateway = 'http://localhost:8080') {
 
-    if (!isActor) {
-        throw new Error('Cannot discover non actor account service info')
-    }
+  storageProviderId = new BN(storageProviderId)
+  let isProvider = await runtimeApi.workers.isStorageProvider(storageProviderId)
 
-    const identity = await getIpnsIdentity(actorAccountId, runtimeApi)
+  if (!isProvider) {
+    throw new Error('Cannot discover non storage providers')
+  }
 
-    gateway = gateway || 'http://localhost:8080'
+  const identity = await getIpnsIdentity(storageProviderId, runtimeApi)
 
-    const url = `${gateway}/ipns/${identity}`
+  if (identity == null) {
+    // dont waste time trying to resolve if no identity was found
+    throw new Error('no identity to resolve')
+  }
 
-    const response = await axios.get(url)
+  gateway = stripEndingSlash(gateway)
 
-    return response.data
-}
+  const url = `${gateway}/ipns/${identity}`
 
-async function discover_over_joystream_discovery_service(actorAccountId, runtimeApi, discoverApiEndpoint) {
-    let isActor = await runtimeApi.identities.isActor(actorAccountId)
+  const response = await axios.get(url)
 
-    if (!isActor) {
-        throw new Error('Cannot discover non actor account service info')
-    }
-
-    const identity = await getIpnsIdentity(actorAccountId, runtimeApi)
-
-    if (identity == null) {
-        // dont waste time trying to resolve if no identity was found
-        throw new Error('no identity to resolve');
-    }
-
-    if (!discoverApiEndpoint) {
-        // Use bootstrap nodes
-        let discoveryBootstrapNodes = await runtimeApi.discovery.getBootstrapEndpoints()
+  return response.data
+}
 
-        if (discoveryBootstrapNodes.length) {
-            discoverApiEndpoint = stripEndingSlash(discoveryBootstrapNodes[0].toString())
-        } else {
-            throw new Error('No known discovery bootstrap nodes found on network');
-        }
+/**
+ * Resolves id of provider to its service information.
+ * Will use the provided colossus discovery api endpoint. If no api endpoint
+ * is provided it attempts to use the configured endpoints from the chain.
+ * If the storage provider is not registered it will throw an error
+ * @param {number | BN | u64 } storageProviderId - provider id to lookup
+ * @param {RuntimeApi} runtimeApi - api instance to query the chain
+ * @param {string} discoverApiEndpoint - url for a colossus discovery api endpoint
+ * @returns { Promise<object> } - the published service information
+ */
+async function discover_over_joystream_discovery_service (storageProviderId, runtimeApi, discoverApiEndpoint) {
+  storageProviderId = new BN(storageProviderId)
+  let isProvider = await runtimeApi.workers.isStorageProvider(storageProviderId)
+
+  if (!isProvider) {
+    throw new Error('Cannot discover non storage providers')
+  }
+
+  const identity = await getIpnsIdentity(storageProviderId, runtimeApi)
+
+  // dont waste time trying to resolve if no identity was found
+  if (identity == null) {
+    throw new Error('no identity to resolve')
+  }
+
+  if (!discoverApiEndpoint) {
+    // Use bootstrap nodes
+    let discoveryBootstrapNodes = await runtimeApi.discovery.getBootstrapEndpoints()
+
+    if (discoveryBootstrapNodes.length) {
+      discoverApiEndpoint = stripEndingSlash(discoveryBootstrapNodes[0].toString())
+    } else {
+      throw new Error('No known discovery bootstrap nodes found on network')
     }
+  }
 
-    const url = `${discoverApiEndpoint}/discover/v0/${actorAccountId}`
+  const url = `${discoverApiEndpoint}/discover/v0/${storageProviderId.toNumber()}`
 
-    // should have parsed if data was json?
-    const response = await axios.get(url)
+  // should have parsed if data was json?
+  const response = await axios.get(url)
 
-    return response.data
+  return response.data
 }
 
-async function discover_over_local_ipfs_node(actorAccountId, runtimeApi) {
-    let isActor = await runtimeApi.identities.isActor(actorAccountId)
+/**
+ * Resolves id of provider to its service information.
+ * Will use the local IPFS node over RPC interface.
+ * If the storage provider is not registered it will throw an error.
+ * @param {number | BN | u64 } storageProviderId - provider id to lookup
+ * @param {RuntimeApi} runtimeApi - api instance to query the chain
+ * @returns { Promise<object> } - the published service information
+ */
+async function discover_over_local_ipfs_node (storageProviderId, runtimeApi) {
+  storageProviderId = new BN(storageProviderId)
+  let isProvider = await runtimeApi.workers.isStorageProvider(storageProviderId)
+
+  if (!isProvider) {
+    throw new Error('Cannot discover non storage providers')
+  }
+
+  const identity = await getIpnsIdentity(storageProviderId, runtimeApi)
+
+  if (identity == null) {
+    // dont waste time trying to resolve if no identity was found
+    throw new Error('no identity to resolve')
+  }
+
+  const ipns_address = `/ipns/${identity}/`
+
+  debug('resolved ipns to ipfs object')
+  // Can this call hang forever!? can/should we set a timeout?
+  let ipfs_name = await ipfs.name.resolve(ipns_address, {
+    // don't recurse, there should only be one indirection to the service info file
+    recursive: false,
+    nocache: false
+  })
+
+  debug('getting ipfs object', ipfs_name)
+  let data = await ipfs.get(ipfs_name) // this can sometimes hang forever!?! can we set a timeout?
+
+  // there should only be one file published under the resolved path
+  let content = data[0].content
+
+  return JSON.parse(content)
+}
 
-    if (!isActor) {
-        throw new Error('Cannot discover non actor account service info')
+/**
+ * Cached discovery of storage provider service information. If useCachedValue is
+ * set to true, will always return the cached result if found. New discovery will be triggered
+ * if the record is found to be stale. If a stale record (CACHE_TTL old) is not desired, pass a non-zero
+ * value for maxCacheAge, which will force a new discovery and return the new resolved value.
+ * This method in turn calls _discovery which handles concurrent discoveries and selects the appropriate
+ * protocol to perform the query.
+ * If the storage provider is not registered it will resolve to null
+ * @param {number | BN | u64} storageProviderId - provider to discover
+ * @param {RuntimeApi} runtimeApi - api instance to query the chain
+ * @param {bool} useCachedValue - optionally use cached queries
+ * @param {number} maxCacheAge - maximum age of a cached query that triggers automatic re-discovery
+ * @returns { Promise<object | null> } - the published service information
+ */
+async function discover (storageProviderId, runtimeApi, useCachedValue = false, maxCacheAge = 0) {
+  storageProviderId = new BN(storageProviderId)
+  const id = storageProviderId.toNumber()
+  const cached = accountInfoCache[id]
+
+  if (cached && useCachedValue) {
+    if (maxCacheAge > 0) {
+      // get latest value
+      if (Date.now() > (cached.updated + maxCacheAge)) {
+        return _discover(storageProviderId, runtimeApi)
+      }
     }
-
-    const identity = await getIpnsIdentity(actorAccountId, runtimeApi)
-
-    const ipns_address = `/ipns/${identity}/`
-
-    debug('resolved ipns to ipfs object')
-    let ipfs_name = await ipfs.name.resolve(ipns_address, {
-        recursive: false, // there should only be one indirection to service info file
-        nocache: false,
-    }) // this can hang forever!? can we set a timeout?
-
-    debug('getting ipfs object', ipfs_name)
-    let data = await ipfs.get(ipfs_name) // this can sometimes hang forever!?! can we set a timeout?
-
-    // there should only be one file published under the resolved path
-    let content = data[0].content
-
-    // verify information and if 'discovery' service found
-    // add it to our list of bootstrap nodes
-
-    // TODO cache result or flag
-    return JSON.parse(content)
+    // refresh cache if stale; new value returned on next cached query
+    if (Date.now() > (cached.updated + CACHE_TTL)) {
+      _discover(storageProviderId, runtimeApi)
+    }
+    // return best known value
+    return cached.value
+  } else {
+    return _discover(storageProviderId, runtimeApi)
+  }
 }
 
-async function discover (actorAccountId, runtimeApi, useCachedValue = false, maxCacheAge = 0) {
-    const id = actorAccountId.toString();
-    const cached = accountInfoCache[id];
-
-    if (cached && useCachedValue) {
-        if (maxCacheAge > 0) {
-            // get latest value
-            if (Date.now() > (cached.updated + maxCacheAge)) {
-                return _discover(actorAccountId, runtimeApi);
-            }
-        }
-        // refresh if cache is stale, new value returned on next cached query
-        if (Date.now() > (cached.updated + CACHE_TTL)) {
-            _discover(actorAccountId, runtimeApi);
-        }
-        // return best known value
-        return cached.value;
+/**
+ * Internal method that handles concurrent discoveries and caching of results. Will
+ * select the appropriate discovery protocol based on whether we are in a browser environment or not.
+ * If not in a browser it expects a local ipfs node to be running.
+ * @param {number | BN | u64} storageProviderId
+ * @param {RuntimeApi} runtimeApi - api instance for querying the chain
+ * @returns { Promise<object | null> } - the published service information
+ */
+async function _discover (storageProviderId, runtimeApi) {
+  storageProviderId = new BN(storageProviderId)
+  const id = storageProviderId.toNumber()
+
+  const discoveryResult = activeDiscoveries[id]
+  if (discoveryResult) {
+    debug('discovery in progress waiting for result for', id)
+    return discoveryResult
+  }
+
+  debug('starting new discovery for', id)
+  const deferredDiscovery = newExternallyControlledPromise()
+  activeDiscoveries[id] = deferredDiscovery.promise
+
+  let result
+  try {
+    if (inBrowser()) {
+      result = await discover_over_joystream_discovery_service(storageProviderId, runtimeApi)
     } else {
-        return _discover(actorAccountId, runtimeApi);
+      result = await discover_over_local_ipfs_node(storageProviderId, runtimeApi)
     }
-}
-
-function createExternallyControlledPromise() {
-    let resolve, reject;
-    const promise = new Promise((_resolve, _reject) => {
-        resolve = _resolve;
-        reject = _reject;
-    });
-    return ({ resolve, reject, promise });
-}
 
-async function _discover(actorAccountId, runtimeApi) {
-    const id = actorAccountId.toString();
-
-    const discoveryResult = activeDiscoveries[id];
-    if (discoveryResult) {
-        debug('discovery in progress waiting for result for',id);
-        return discoveryResult
+    debug(result)
+    result = JSON.stringify(result)
+    accountInfoCache[id] = {
+      value: result,
+      updated: Date.now()
     }
 
-    debug('starting new discovery for', id);
-    const deferredDiscovery = createExternallyControlledPromise();
-    activeDiscoveries[id] = deferredDiscovery.promise;
-
-    let result;
-    try {
-        if (inBrowser()) {
-            result = await discover_over_joystream_discovery_service(actorAccountId, runtimeApi)
-        } else {
-            result = await discover_over_local_ipfs_node(actorAccountId, runtimeApi)
-        }
-        debug(result)
-        result = JSON.stringify(result)
-        accountInfoCache[id] = {
-            value: result,
-            updated: Date.now()
-        };
-
-        deferredDiscovery.resolve(result);
-        delete activeDiscoveries[id];
-        return result;
-    } catch (err) {
-        debug(err.message);
-        deferredDiscovery.reject(err);
-        delete activeDiscoveries[id];
-        throw err;
-    }
+    deferredDiscovery.resolve(result)
+    delete activeDiscoveries[id]
+    return result
+  } catch (err) {
+    // we catch the error so we can update all callers
+    // and throw again to inform the first caller.
+    debug(err.message)
+    delete activeDiscoveries[id]
+    // deferredDiscovery.reject(err)
+    deferredDiscovery.resolve(null) // resolve to null until we figure out the issue below
+    // throw err // <-- throwing but this isn't being
+    // caught correctly in express server! Is it because there is an uncaught promise somewhere
+    // in the prior .reject() call ?
+    // I've only seen this behaviour when error is from ipfs-client
+    // ... is this unique to errors thrown from ipfs-client?
+    // Problem is its crashing the node so just return null for now
+    return null
+  }
 }
 
 module.exports = {
-    discover,
-    discover_over_joystream_discovery_service,
-    discover_over_ipfs_http_gateway,
-    discover_over_local_ipfs_node,
-}
+  discover,
+  discover_over_joystream_discovery_service,
+  discover_over_ipfs_http_gateway,
+  discover_over_local_ipfs_node
+}
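
A minimal sketch of the concurrency pattern `_discover()` relies on: a map of in-flight lookups keyed by provider id, plus an externally controlled (deferred) promise, so that concurrent callers share a single discovery. The `lookup` helper name is illustrative; it mirrors the logic in `_discover()`, including resolving waiters to `null` on failure:

```javascript
// Map of id -> Promise for a lookup already in flight.
const active = {}

// Same shape as newExternallyControlledPromise from storage-utils.
function newExternallyControlledPromise () {
  let resolve, reject
  const promise = new Promise((res, rej) => { resolve = res; reject = rej })
  return { resolve, reject, promise }
}

async function lookup (id, doLookup) {
  // A lookup is already in progress for this id: share its result.
  if (active[id]) return active[id]

  const deferred = newExternallyControlledPromise()
  active[id] = deferred.promise

  try {
    const result = await doLookup(id)
    deferred.resolve(result)
    delete active[id]
    return result
  } catch (err) {
    // Mirror discover.js: resolve waiters to null rather than rejecting.
    delete active[id]
    deferred.resolve(null)
    return null
  }
}
```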

+ 13 - 7
storage-node/packages/discovery/example.js

@@ -1,14 +1,18 @@
-const { RuntimeApi } = require('@joystream/runtime-api')
+const { RuntimeApi } = require('@joystream/storage-runtime-api')
 
 const { discover, publish } = require('./')
 
 async function main() {
+    // The assigned storage-provider id
+    const provider_id = 0
+
     const runtimeApi = await RuntimeApi.create({
-        account_file: "/Users/mokhtar/Downloads/5Gn9n7SDJ7VgHqHQWYzkSA4vX6DCmS5TFWdHxikTXp9b4L32.json"
+        // Path to the role account key file of the provider
+        account_file: "/path/to/role_account_key_file.json",
+        storageProviderId: provider_id
     })
 
-    let published = await publish.publish(
-        "5Gn9n7SDJ7VgHqHQWYzkSA4vX6DCmS5TFWdHxikTXp9b4L32",
+    let ipns_id = await publish.publish(
         {
             asset: {
                 version: 1,
@@ -18,11 +22,13 @@ async function main() {
         runtimeApi
     )
 
-    console.log(published)
+    console.log(ipns_id)
+
+    // register ipns_id on chain
+    await runtimeApi.setAccountInfo(ipns_id)
 
-    // let serviceInfo = await discover('5Gn9n7SDJ7VgHqHQWYzkSA4vX6DCmS5TFWdHxikTXp9b4L32', { runtimeApi })
     let serviceInfo = await discover.discover(
-        '5Gn9n7SDJ7VgHqHQWYzkSA4vX6DCmS5TFWdHxikTXp9b4L32',
+        provider_id,
         runtimeApi
     )
 

+ 4 - 3
storage-node/packages/discovery/package.json

@@ -1,5 +1,6 @@
 {
-  "name": "@joystream/discovery",
+  "name": "@joystream/service-discovery",
+  "private": true,
   "version": "0.1.0",
   "description": "Service Discovery - Joystream Storage Node",
   "author": "Joystream",
@@ -43,8 +44,8 @@
     "temp": "^0.9.0"
   },
   "dependencies": {
-    "@joystream/runtime-api": "^0.1.0",
-    "@joystream/util": "^0.1.0",
+    "@joystream/storage-runtime-api": "^0.1.0",
+    "@joystream/storage-utils": "^0.1.0",
     "async-lock": "^1.2.0",
     "axios": "^0.18.0",
     "chalk": "^2.4.2",

+ 71 - 37
storage-node/packages/discovery/publish.js

@@ -1,53 +1,87 @@
 const ipfsClient = require('ipfs-http-client')
 const ipfs = ipfsClient('localhost', '5001', { protocol: 'http' })
 
-const debug = require('debug')('discovery::publish')
+const debug = require('debug')('joystream:discovery:publish')
 
-const PUBLISH_KEY = 'self'; // 'services';
+/**
+ * The name of the key used for publishing. We use the same key used by the ipfs node
+ * for its network identity, to make it possible to identify the ipfs node of the storage
+ * provider and use `ipfs ping` to check on the uptime of a particular node.
+ */
+const PUBLISH_KEY = 'self'
 
-function bufferFrom(data) {
-    return Buffer.from(JSON.stringify(data), 'utf-8')
+/**
+ * Applies JSON serialization on the data object and converts the utf-8
+ * string to a Buffer.
+ * @param {object} data - json object
+ * @returns {Buffer}
+ */
+function bufferFrom (data) {
+  return Buffer.from(JSON.stringify(data), 'utf-8')
 }
 
-function encodeServiceInfo(info) {
-    return bufferFrom({
-        serialized: JSON.stringify(info),
-        // signature: ''
-    })
+/**
+ * Encodes the service info into a standard format (see /storage-node/docs/json-signing.md)
+ * so that a signature can be added over the json data. Signing is not currently implemented.
+ * @param {object} info - json object
+ * @returns {Buffer}
+ */
+function encodeServiceInfo (info) {
+  return bufferFrom({
+    serialized: JSON.stringify(info)
+  })
 }
 
+/**
+ * Publishes the service information, encoded using the standard defined in encodeServiceInfo()
+ * to ipfs, using the local ipfs node's PUBLISH_KEY, and returns the key id
+ * used to publish, which is what we refer to as the ipns id.
+ * @param {object} service_info - the service information to publish
+ * @returns {string} - the ipns id
+ */
 async function publish (service_info) {
-    const keys = await ipfs.key.list()
-    let services_key = keys.find((key) => key.name === PUBLISH_KEY)
-
-    // generate a new services key if not found
-    if (PUBLISH_KEY !== 'self' && !services_key) {
-        debug('generating ipns services key')
-        services_key = await ipfs.key.gen(PUBLISH_KEY, {
-          type: 'rsa',
-          size: 2048
-        });
-    }
-
-    if (!services_key) {
-        throw new Error('No IPFS publishing key available!')
-    }
-
-    debug('adding service info file to node')
-    const files = await ipfs.add(encodeServiceInfo(service_info))
-
-    debug('publishing...')
-    const published = await ipfs.name.publish(files[0].hash, {
-        key: PUBLISH_KEY,
-        resolve: false,
-        // lifetime: // string - Time duration of the record. Default: 24h
-        // ttl:      // string - Time duration this record should be cached
+  const keys = await ipfs.key.list()
+  let services_key = keys.find((key) => key.name === PUBLISH_KEY)
+
+  // An ipfs node will always have the self key.
+  // If the publish key is specified as anything else and it doesn't exist
+  // we create it.
+  if (PUBLISH_KEY !== 'self' && !services_key) {
+    debug('generating ipns services key')
+    services_key = await ipfs.key.gen(PUBLISH_KEY, {
+      type: 'rsa',
+      size: 2048
     })
+  }
+
+  if (!services_key) {
+    throw new Error('No IPFS publishing key available!')
+  }
+
+  debug('adding service info file to node')
+  const files = await ipfs.add(encodeServiceInfo(service_info))
+
+  debug('publishing...')
+  const published = await ipfs.name.publish(files[0].hash, {
+    key: PUBLISH_KEY,
+    resolve: false
+    // lifetime: // string - Time duration of the record. Default: 24h
+    // ttl:      // string - Time duration this record should be cached
+  })
+
+  // The name and ipfs hash of the published service information file, eg.
+  // {
+  //   name: 'QmUNQCkaU1TRnc1WGixqEP3Q3fazM8guSdFRsdnSJTN36A',
+  //   value: '/ipfs/QmcSjtVMfDSSNYCxNAb9PxNpEigCw7h1UZ77gip3ghfbnA'
+  // }
+  // .. The name is equivalent to the key id that was used.
+  debug(published)
 
-    debug(published)
-    return services_key.id;
+  // Return the key id under which the content was published. Which is used
+  // to lookup the actual ipfs content id of the published service information
+  return services_key.id
 }
 
 module.exports = {
-    publish
+  publish
 }
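
The encoding step is pure and easy to exercise in isolation. This reproduces the two helpers from `publish.js` above and shows the round trip back out of the Buffer that would be handed to `ipfs.add()`:

```javascript
// Reproduction of the encoding helpers from publish.js: the service
// info is JSON-serialized, wrapped in an envelope that could later
// carry a signature, and converted to a utf-8 Buffer for ipfs.add().
function bufferFrom (data) {
  return Buffer.from(JSON.stringify(data), 'utf-8')
}

function encodeServiceInfo (info) {
  return bufferFrom({
    serialized: JSON.stringify(info)
  })
}

const encoded = encodeServiceInfo({ asset: { version: 1 } })
```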

+ 1 - 2
storage-node/packages/helios/README.md

@@ -6,7 +6,6 @@ A basic tool to scan the joystream storage network to get a birds eye view of th
 ## Scanning
 
 ```
-yarn
-yarn run helios
+yarn helios
 ```
 

+ 105 - 88
storage-node/packages/helios/bin/cli.js

@@ -1,125 +1,127 @@
 #!/usr/bin/env node
 
-const { RuntimeApi } = require('@joystream/runtime-api');
+const { RuntimeApi } = require('@joystream/storage-runtime-api')
 const { encodeAddress } = require('@polkadot/keyring')
-const { discover } = require('@joystream/discovery');
-const axios = require('axios');
-const stripEndingSlash = require('@joystream/util/stripEndingSlash');
+const { discover } = require('@joystream/service-discovery')
+const axios = require('axios')
+const stripEndingSlash = require('@joystream/storage-utils/stripEndingSlash')
 
-(async function main () {
-
-  const runtime = await RuntimeApi.create();
-  const api  = runtime.api;
+async function main () {
+  const runtime = await RuntimeApi.create()
+  const { api } = runtime
 
   // get current blockheight
-  const currentHeader = await api.rpc.chain.getHeader();
-  const currentHeight = currentHeader.number.toBn();
+  const currentHeader = await api.rpc.chain.getHeader()
+  const currentHeight = currentHeader.number.toBn()
 
   // get all providers
-  const storageProviders = await api.query.actors.accountIdsByRole(0);
+  const { ids: storageProviders } = await runtime.workers.getAllProviders()
+  console.log(`Found ${storageProviders.length} staked providers`)
 
-  const storageProviderAccountInfos = await Promise.all(storageProviders.map(async (account) => {
+  const storageProviderAccountInfos = await Promise.all(storageProviders.map(async (providerId) => {
     return ({
-      account,
-      info: await runtime.discovery.getAccountInfo(account),
-      joined: (await api.query.actors.actorByAccountId(account)).unwrap().joined_at
-    });
-  }));
+      providerId,
+      info: await runtime.discovery.getAccountInfo(providerId)
+    })
+  }))
 
-  const liveProviders = storageProviderAccountInfos.filter(({account, info}) => {
+  // providers that have updated their account info and published ipfs id
+  // considered live if the record hasn't expired yet
+  const liveProviders = storageProviderAccountInfos.filter(({info}) => {
     return info && info.expires_at.gte(currentHeight)
-  });
+  })
 
-  const downProviders = storageProviderAccountInfos.filter(({account, info}) => {
+  const downProviders = storageProviderAccountInfos.filter(({info}) => {
     return info == null
-  });
+  })
 
-  const expiredTtlProviders = storageProviderAccountInfos.filter(({account, info}) => {
+  const expiredTtlProviders = storageProviderAccountInfos.filter(({info}) => {
     return info && currentHeight.gte(info.expires_at)
-  });
+  })
 
-  let providersStatuses = mapInfoToStatus(liveProviders, currentHeight);
-  console.log('\n== Live Providers\n', providersStatuses);
+  let providersStatuses = mapInfoToStatus(liveProviders, currentHeight)
+  console.log('\n== Live Providers\n', providersStatuses)
 
   let expiredProviderStatuses = mapInfoToStatus(expiredTtlProviders, currentHeight)
-  console.log('\n== Expired Providers\n', expiredProviderStatuses);
+  console.log('\n== Expired Providers\n', expiredProviderStatuses)
 
-  // check when actor account was created consider grace period before removing
   console.log('\n== Down Providers!\n', downProviders.map(provider => {
     return ({
-      account: provider.account.toString(),
-      age: currentHeight.sub(provider.joined).toNumber()
+      providerId: provider.providerId
     })
-  }));
+  }))
 
   // Resolve IPNS identities of providers
   console.log('\nResolving live provider API Endpoints...')
-  //providersStatuses = providersStatuses.concat(expiredProviderStatuses);
-  let endpoints = await Promise.all(providersStatuses.map(async (status) => {
+  let endpoints = await Promise.all(providersStatuses.map(async ({providerId}) => {
     try {
-      let serviceInfo = await discover.discover_over_joystream_discovery_service(status.address, runtime);
-      let info = JSON.parse(serviceInfo.serialized);
-      console.log(`${status.address} -> ${info.asset.endpoint}`);
-      return { address: status.address, endpoint: info.asset.endpoint};
+      let serviceInfo = await discover.discover_over_joystream_discovery_service(providerId, runtime)
+
+      if (serviceInfo == null) {
+        console.log(`provider ${providerId} has not published service information`)
+        return { providerId, endpoint: null }
+      }
+
+      let info = JSON.parse(serviceInfo.serialized)
+      console.log(`${providerId} -> ${info.asset.endpoint}`)
+      return { providerId, endpoint: info.asset.endpoint }
     } catch (err) {
-      console.log('resolve failed', status.address, err.message);
-      return { address: status.address, endpoint: null};
+      console.log('resolve failed for id', providerId, err.message)
+      return { providerId, endpoint: null }
     }
-  }));
+  }))
 
-  console.log('\nChecking API Endpoint is online')
+  console.log('\nChecking API Endpoints are online')
   await Promise.all(endpoints.map(async (provider) => {
     if (!provider.endpoint) {
-      console.log('skipping', provider.address);
+      console.log('skipping', provider.providerId)
       return
     }
-    const swaggerUrl = `${stripEndingSlash(provider.endpoint)}/swagger.json`;
-    let error;
+    const swaggerUrl = `${stripEndingSlash(provider.endpoint)}/swagger.json`
+    let error
     try {
       await axios.get(swaggerUrl)
-    } catch (err) {error = err}
-    console.log(`${provider.endpoint} - ${error ? error.message : 'OK'}`);
-  }));
+      // maybe print out api version information to detect which version of colossus is running?
+      // or add another api endpoint for diagnostics information
+    } catch (err) { error = err }
+    console.log(`${provider.endpoint} - ${error ? error.message : 'OK'}`)
+  }))
 
-  // after resolving for each resolved provider, HTTP HEAD with axios all known content ids
-  // report available/known
   let knownContentIds = await runtime.assets.getKnownContentIds()
+  console.log(`\nData Directory has ${knownContentIds.length} assets`)
 
-  console.log(`\nContent Directory has ${knownContentIds.length} assets`);
-
+  // Check which providers are reporting a ready relationship for each asset
   await Promise.all(knownContentIds.map(async (contentId) => {
-    let [relationships, judgement] = await assetRelationshipState(api, contentId, storageProviders);
-    console.log(`${encodeAddress(contentId)} replication ${relationships}/${storageProviders.length} - ${judgement}`);
-  }));
-
-  console.log('\nChecking available assets on providers...');
-
-  endpoints.map(async ({address, endpoint}) => {
-    if (!endpoint) { return }
-    let { found, content } = await countContentAvailability(knownContentIds, endpoint);
-    console.log(`${address}: has ${found} assets`);
-    return content
-  });
-
+    let [relationshipsCount, judgement] = await assetRelationshipState(api, contentId, storageProviders)
+    console.log(`${encodeAddress(contentId)} replication ${relationshipsCount}/${storageProviders.length} - ${judgement}`)
+  }))
 
   // interesting disconnect doesn't work unless an explicit provider was created
   // for underlying api instance
-  runtime.api.disconnect();
-})();
+  // We no longer need a connection to the chain
+  api.disconnect()
+
+  console.log(`\nChecking available assets on providers (this can take some time)...`)
+  endpoints.forEach(async ({ providerId, endpoint }) => {
+    if (!endpoint) { return }
+    const total = knownContentIds.length
+    let { found } = await countContentAvailability(knownContentIds, endpoint)
+    console.log(`provider ${providerId}: has ${found} out of ${total}`)
+  })
+}
 
-function mapInfoToStatus(providers, currentHeight) {
-  return providers.map(({account, info, joined}) => {
+function mapInfoToStatus (providers, currentHeight) {
+  return providers.map(({providerId, info}) => {
     if (info) {
       return {
-        address: account.toString(),
-        age: currentHeight.sub(joined).toNumber(),
+        providerId,
         identity: info.identity.toString(),
         expiresIn: info.expires_at.sub(currentHeight).toNumber(),
-        expired: currentHeight.gte(info.expires_at),
+        expired: currentHeight.gte(info.expires_at)
       }
     } else {
       return {
-        address: account.toString(),
+        providerId,
         identity: null,
         status: 'down'
       }
@@ -127,40 +129,55 @@ function mapInfoToStatus(providers, currentHeight) {
   })
 }
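The mapping above can be illustrated with a plain-JS sketch: numbers stand in for the chain's BN block heights, and a simplified `info` object (with an assumed `expiresAt` field) stands in for the on-chain AccountInfo codec.

```javascript
// Illustrative restatement of mapInfoToStatus with plain values instead of
// chain codec types (BN heights become numbers, expires_at becomes expiresAt).
function mapInfoToStatus (providers, currentHeight) {
  return providers.map(({ providerId, info }) => {
    if (info) {
      return {
        providerId,
        identity: info.identity,
        expiresIn: info.expiresAt - currentHeight,
        expired: currentHeight >= info.expiresAt
      }
    }
    // no published info on chain: the provider is reported as down
    return { providerId, identity: null, status: 'down' }
  })
}

const statuses = mapInfoToStatus([
  { providerId: 1, info: { identity: 'Qm...abc', expiresAt: 120 } },
  { providerId: 2, info: null }
], 100)
console.log(statuses[0].expiresIn) // → 20
console.log(statuses[1].status) // → 'down'
```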
 
-async function countContentAvailability(contentIds, source) {
+// HTTP HEAD each known content id on each provider with axios
+async function countContentAvailability (contentIds, source) {
   let content = {}
-  let found = 0;
-  for(let i = 0; i < contentIds.length; i++) {
-    const assetUrl = makeAssetUrl(contentIds[i], source);
+  let found = 0
+  let missing = 0
+  for (let i = 0; i < contentIds.length; i++) {
+    const assetUrl = makeAssetUrl(contentIds[i], source)
     try {
       let info = await axios.head(assetUrl)
       content[encodeAddress(contentIds[i])] = {
         type: info.headers['content-type'],
         bytes: info.headers['content-length']
       }
+      // TODO: cross check against dataobject size
       found++
-    } catch(err) { console.log(`${assetUrl} ${err.message}`); continue; }
+    } catch (err) {
+      missing++
+    }
   }
-  console.log(content);
-  return { found, content };
+
+  return { found, missing, content }
 }
 
-function makeAssetUrl(contentId, source) {
-  source = stripEndingSlash(source);
+function makeAssetUrl (contentId, source) {
+  source = stripEndingSlash(source)
   return `${source}/asset/v0/${encodeAddress(contentId)}`
 }
 
-async function assetRelationshipState(api, contentId, providers) {
-  let dataObject = await api.query.dataDirectory.dataObjectByContentId(contentId);
+async function assetRelationshipState (api, contentId, providers) {
+  let dataObject = await api.query.dataDirectory.dataObjectByContentId(contentId)
 
-  // how many relationships out of active providers?
-  let relationshipIds = await api.query.dataObjectStorageRegistry.relationshipsByContentId(contentId);
+  let relationshipIds = await api.query.dataObjectStorageRegistry.relationshipsByContentId(contentId)
 
+  // count relationships that are associated with active providers and in the ready state
   let activeRelationships = await Promise.all(relationshipIds.map(async (id) => {
-    let relationship = await api.query.dataObjectStorageRegistry.relationships(id);
+    let relationship = await api.query.dataObjectStorageRegistry.relationships(id)
     relationship = relationship.unwrap()
+    // only interested in ready relationships
+    if (!relationship.ready) {
+      return undefined
+    }
+    // Does the relationship belong to an active provider?
     return providers.find((provider) => relationship.storage_provider.eq(provider))
-  }));
+  }))
+
+  return ([
+    activeRelationships.filter(active => active).length,
+    dataObject.unwrap().liaison_judgement
+  ])
+}
 
-  return [activeRelationships.filter(active => active).length, dataObject.unwrap().liaison_judgement]
-}
+main()
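The asset URL construction used by these checks can be sketched in isolation. The route shape mirrors `makeAssetUrl` in the diff; the plain-string content id stands in for the base58-encoded address produced by `encodeAddress`.

```javascript
// Minimal sketch of how helios builds asset URLs for HEAD availability checks.
function stripEndingSlash (url) {
  return url.endsWith('/') ? url.slice(0, -1) : url
}

function makeAssetUrl (contentId, source) {
  // normalize the endpoint, then append the colossus asset route
  return `${stripEndingSlash(source)}/asset/v0/${contentId}`
}

console.log(makeAssetUrl('5Gw3s7q', 'http://localhost:3000/'))
// → http://localhost:3000/asset/v0/5Gw3s7q
```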

+ 2 - 1
storage-node/packages/helios/package.json

@@ -1,5 +1,6 @@
 {
   "name": "@joystream/helios",
+  "private": true,
   "version": "0.1.0",
   "bin": {
     "helios": "bin/cli.js"
@@ -9,7 +10,7 @@
   },
   "license": "MIT",
   "dependencies": {
-    "@joystream/runtime-api": "^0.1.0",
+    "@joystream/storage-runtime-api": "^0.1.0",
     "@types/bn.js": "^4.11.5",
     "axios": "^0.19.0",
     "bn.js": "^4.11.8"

+ 79 - 89
storage-node/packages/runtime-api/assets.js

@@ -1,14 +1,9 @@
-'use strict';
+'use strict'
 
-const debug = require('debug')('joystream:runtime:assets');
+const debug = require('debug')('joystream:runtime:assets')
+const { decodeAddress } = require('@polkadot/keyring')
 
-const { Null } = require('@polkadot/types/primitive');
-
-const { _ } = require('lodash');
-
-const { decodeAddress, encodeAddress } = require('@polkadot/keyring');
-
-function parseContentId(contentId) {
+function parseContentId (contentId) {
   try {
     return decodeAddress(contentId)
   } catch (err) {
@@ -19,158 +14,153 @@ function parseContentId(contentId) {
 /*
  * Add asset related functionality to the substrate API.
  */
-class AssetsApi
-{
-  static async create(base)
-  {
-    const ret = new AssetsApi();
-    ret.base = base;
-    await ret.init();
-    return ret;
+class AssetsApi {
+  static async create (base) {
+    const ret = new AssetsApi()
+    ret.base = base
+    await ret.init()
+    return ret
   }
 
-  async init(account_file)
-  {
-    debug('Init');
+  async init () {
+    debug('Init')
   }
 
   /*
-   * Create a data object.
+   * Create and return a data object.
    */
-  async createDataObject(accountId, contentId, doTypeId, size)
-  {
+  async createDataObject (accountId, memberId, contentId, doTypeId, size, ipfsCid) {
     contentId = parseContentId(contentId)
-    const tx = this.base.api.tx.dataDirectory.addContent(contentId, doTypeId, size);
-    await this.base.signAndSend(accountId, tx);
+    const tx = this.base.api.tx.dataDirectory.addContent(memberId, contentId, doTypeId, size, ipfsCid)
+    await this.base.signAndSend(accountId, tx)
 
     // If the data object constructed properly, we should now be able to return
     // the data object from the state.
-    return await this.getDataObject(contentId);
+    return this.getDataObject(contentId)
   }
 
   /*
-   * Return the Data Object for a CID
+   * Return the Data Object for a contentId
    */
-  async getDataObject(contentId)
-  {
+  async getDataObject (contentId) {
     contentId = parseContentId(contentId)
-    const obj = await this.base.api.query.dataDirectory.dataObjectByContentId(contentId);
-    return obj;
+    return this.base.api.query.dataDirectory.dataObjectByContentId(contentId)
   }
 
   /*
-   * Verify the liaison state for a DO:
-   * - Check the content ID has a DO
-   * - Check the account is the liaison
-   * - Check the liaison state is pending
+   * Verify the liaison state for a DataObject:
+   * - Check the content ID has a DataObject
+   * - Check the storageProviderId is the liaison
+   * - Check the liaison state is Pending
    *
    * Each failure errors out, success returns the data object.
    */
-  async checkLiaisonForDataObject(accountId, contentId)
-  {
+  async checkLiaisonForDataObject (storageProviderId, contentId) {
     contentId = parseContentId(contentId)
 
-    let obj = await this.getDataObject(contentId);
+    let obj = await this.getDataObject(contentId)
 
     if (obj.isNone) {
-      throw new Error(`No DataObject created for content ID: ${contentId}`);
+      throw new Error(`No DataObject created for content ID: ${contentId}`)
     }
 
-    const encoded = encodeAddress(obj.raw.liaison);
-    if (encoded != accountId) {
-      throw new Error(`This storage node is not liaison for the content ID: ${contentId}`);
+    obj = obj.unwrap()
+
+    if (!obj.liaison.eq(storageProviderId)) {
+      throw new Error(`This storage node is not liaison for the content ID: ${contentId}`)
     }
 
-    if (obj.raw.liaison_judgement.type != 'Pending') {
-      throw new Error(`Expected Pending judgement, but found: ${obj.raw.liaison_judgement.type}`);
+    if (obj.liaison_judgement.type !== 'Pending') {
+      throw new Error(`Expected Pending judgement, but found: ${obj.liaison_judgement.type}`)
     }
 
-    return obj.unwrap();
+    return obj
   }
 
   /*
-   * Changes a data object liaison judgement.
+   * Sets the data object liaison judgement to Accepted
    */
-  async acceptContent(accountId, contentId)
-  {
+  async acceptContent (providerAccountId, storageProviderId, contentId) {
     contentId = parseContentId(contentId)
-    const tx = this.base.api.tx.dataDirectory.acceptContent(contentId);
-    return await this.base.signAndSend(accountId, tx);
+    const tx = this.base.api.tx.dataDirectory.acceptContent(storageProviderId, contentId)
+    return this.base.signAndSend(providerAccountId, tx)
   }
 
   /*
-   * Changes a data object liaison judgement.
+   * Sets the data object liaison judgement to Rejected
    */
-  async rejectContent(accountId, contentId)
-  {
+  async rejectContent (providerAccountId, storageProviderId, contentId) {
     contentId = parseContentId(contentId)
-    const tx = this.base.api.tx.dataDirectory.rejectContent(contentId);
-    return await this.base.signAndSend(accountId, tx);
+    const tx = this.base.api.tx.dataDirectory.rejectContent(storageProviderId, contentId)
+    return this.base.signAndSend(providerAccountId, tx)
   }
 
   /*
-   * Create storage relationship
+   * Creates storage relationship for a data object and provider
    */
-  async createStorageRelationship(accountId, contentId, callback)
-  {
+  async createStorageRelationship (providerAccountId, storageProviderId, contentId, callback) {
     contentId = parseContentId(contentId)
-    const tx = this.base.api.tx.dataObjectStorageRegistry.addRelationship(contentId);
+    const tx = this.base.api.tx.dataObjectStorageRegistry.addRelationship(storageProviderId, contentId)
 
-    const subscribed = [['dataObjectStorageRegistry', 'DataObjectStorageRelationshipAdded']];
-    return await this.base.signAndSend(accountId, tx, 3, subscribed, callback);
+    const subscribed = [['dataObjectStorageRegistry', 'DataObjectStorageRelationshipAdded']]
+    return this.base.signAndSend(providerAccountId, tx, 3, subscribed, callback)
   }
 
   /*
-   * Get storage relationship for contentId
+   * Gets storage relationship for contentId for the given provider
    */
-  async getStorageRelationshipAndId(accountId, contentId) {
+  async getStorageRelationshipAndId (storageProviderId, contentId) {
     contentId = parseContentId(contentId)
-    let rids = await this.base.api.query.dataObjectStorageRegistry.relationshipsByContentId(contentId);
-
-    while(rids.length) {
-      const relationshipId = rids.shift();
-      let relationship = await this.base.api.query.dataObjectStorageRegistry.relationships(relationshipId);
-      relationship = relationship.unwrap();
-      if (relationship.storage_provider.eq(decodeAddress(accountId))) {
-        return ({ relationship, relationshipId });
+    let rids = await this.base.api.query.dataObjectStorageRegistry.relationshipsByContentId(contentId)
+
+    while (rids.length) {
+      const relationshipId = rids.shift()
+      let relationship = await this.base.api.query.dataObjectStorageRegistry.relationships(relationshipId)
+      relationship = relationship.unwrap()
+      if (relationship.storage_provider.eq(storageProviderId)) {
+        return ({ relationship, relationshipId })
       }
     }
 
-    return {};
+    return {}
   }
 
-  async createAndReturnStorageRelationship(accountId, contentId)
-  {
+  /*
+   * Creates storage relationship for a data object and provider and returns the relationship id
+   */
+  async createAndReturnStorageRelationship (providerAccountId, storageProviderId, contentId) {
     contentId = parseContentId(contentId)
     return new Promise(async (resolve, reject) => {
       try {
-        await this.createStorageRelationship(accountId, contentId, (events) => {
+        await this.createStorageRelationship(providerAccountId, storageProviderId, contentId, (events) => {
           events.forEach((event) => {
-            resolve(event[1].DataObjectStorageRelationshipId);
-          });
-        });
+            resolve(event[1].DataObjectStorageRelationshipId)
+          })
+        })
       } catch (err) {
-        reject(err);
+        reject(err)
       }
-    });
+    })
   }
 
   /*
-   * Toggle ready state for DOSR.
+   * Set the ready state for a data object storage relationship to the new value
    */
-  async toggleStorageRelationshipReady(accountId, dosrId, ready)
-  {
+  async toggleStorageRelationshipReady (providerAccountId, storageProviderId, dosrId, ready) {
     var tx = ready
-      ? this.base.api.tx.dataObjectStorageRegistry.setRelationshipReady(dosrId)
-      : this.base.api.tx.dataObjectStorageRegistry.unsetRelationshipReady(dosrId);
-    return await this.base.signAndSend(accountId, tx);
+      ? this.base.api.tx.dataObjectStorageRegistry.setRelationshipReady(storageProviderId, dosrId)
+      : this.base.api.tx.dataObjectStorageRegistry.unsetRelationshipReady(storageProviderId, dosrId)
+    return this.base.signAndSend(providerAccountId, tx)
   }
 
-  async getKnownContentIds() {
-    return this.base.api.query.dataDirectory.knownContentIds();
+  /*
+   * Returns an array of known content ids
+   */
+  async getKnownContentIds () {
+    return this.base.api.query.dataDirectory.knownContentIds()
   }
 }
 
 module.exports = {
-  AssetsApi: AssetsApi,
+  AssetsApi
 }
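The liaison verification above can be restated without the chain types: the real method unwraps an `Option<DataObject>` codec, while this sketch uses a nullable plain object with assumed `liaison` and `liaisonJudgement` fields standing in for the on-chain shape.

```javascript
// Hedged plain-JS sketch of checkLiaisonForDataObject's three checks:
// object exists, this provider is the liaison, and judgement is still Pending.
function checkLiaison (dataObject, storageProviderId) {
  if (!dataObject) {
    throw new Error('No DataObject created for this content ID')
  }
  if (dataObject.liaison !== storageProviderId) {
    throw new Error('This storage node is not liaison for the content ID')
  }
  if (dataObject.liaisonJudgement !== 'Pending') {
    throw new Error(`Expected Pending judgement, but found: ${dataObject.liaisonJudgement}`)
  }
  return dataObject
}

const obj = { liaison: 7, liaisonJudgement: 'Pending', size: 1024 }
console.log(checkLiaison(obj, 7) === obj) // → true
```

Each failure path throws, mirroring the diff: callers only receive the data object when all three conditions hold.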

+ 1 - 1
storage-node/packages/runtime-api/balances.js

@@ -20,7 +20,7 @@
 
 const debug = require('debug')('joystream:runtime:balances');
 
-const { IdentitiesApi } = require('@joystream/runtime-api/identities');
+const { IdentitiesApi } = require('@joystream/storage-runtime-api/identities');
 
 /*
  * Bundle API calls related to account balances.

+ 42 - 30
storage-node/packages/runtime-api/discovery.js

@@ -1,64 +1,76 @@
-'use strict';
+'use strict'
 
-const debug = require('debug')('joystream:runtime:discovery');
+const debug = require('debug')('joystream:runtime:discovery')
 
 /*
  * Add discovery related functionality to the substrate API.
  */
-class DiscoveryApi
-{
-  static async create(base)
-  {
-    const ret = new DiscoveryApi();
-    ret.base = base;
-    await ret.init();
-    return ret;
+class DiscoveryApi {
+  static async create (base) {
+    const ret = new DiscoveryApi()
+    ret.base = base
+    await ret.init()
+    return ret
   }
 
-  async init(account_file)
-  {
-    debug('Init');
+  async init () {
+    debug('Init')
   }
 
   /*
    * Get Bootstrap endpoints
    */
-  async getBootstrapEndpoints() {
+  async getBootstrapEndpoints () {
     return this.base.api.query.discovery.bootstrapEndpoints()
   }
 
   /*
-   * Get AccountInfo of an accountId
+   * Set Bootstrap endpoints, requires the sudo account to be provided and unlocked
    */
-  async getAccountInfo(accountId) {
-    const decoded = this.base.identities.keyring.decodeAddress(accountId, true)
-    const info = await this.base.api.query.discovery.accountInfoByAccountId(decoded)
+  async setBootstrapEndpoints (sudoAccount, endpoints) {
+    const tx = this.base.api.tx.discovery.setBootstrapEndpoints(endpoints)
+    // make sudo call
+    return this.base.signAndSend(
+      sudoAccount,
+      this.base.api.tx.sudo.sudo(tx)
+    )
+  }
+
+  /*
+   * Get AccountInfo of a storage provider
+   */
+  async getAccountInfo (storageProviderId) {
+    const info = await this.base.api.query.discovery.accountInfoByStorageProviderId(storageProviderId)
     // Not an Option so we use default value check to know if info was found
     return info.expires_at.eq(0) ? null : info
   }
 
   /*
-   * Set AccountInfo of an accountId
+   * Set AccountInfo of our storage provider
    */
-  async setAccountInfo(accountId, ipnsId, ttl) {
-    const isActor = await this.base.identities.isActor(accountId)
-    if (isActor) {
-      const tx = this.base.api.tx.discovery.setIpnsId(ipnsId, ttl)
-      return this.base.signAndSend(accountId, tx)
+  async setAccountInfo (ipnsId) {
+    const roleAccountId = this.base.identities.key.address
+    const storageProviderId = this.base.storageProviderId
+    const isProvider = await this.base.workers.isStorageProvider(storageProviderId)
+    if (isProvider) {
+      const tx = this.base.api.tx.discovery.setIpnsId(storageProviderId, ipnsId)
+      return this.base.signAndSend(roleAccountId, tx)
     } else {
-      throw new Error('Cannot set AccountInfo for non actor account')
+      throw new Error('Cannot set AccountInfo, id is not a storage provider')
     }
   }
 
   /*
-   * Clear AccountInfo of an accountId
+   * Clear AccountInfo of our storage provider
    */
-  async unsetAccountInfo(accountId) {
-    var tx = this.base.api.tx.discovery.unsetIpnsId()
-    return this.base.signAndSend(accountId, tx)
+  async unsetAccountInfo () {
+    const roleAccountId = this.base.identities.key.address
+    const storageProviderId = this.base.storageProviderId
+    var tx = this.base.api.tx.discovery.unsetIpnsId(storageProviderId)
+    return this.base.signAndSend(roleAccountId, tx)
   }
 }
 
 module.exports = {
-  DiscoveryApi: DiscoveryApi,
+  DiscoveryApi
 }
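The default-value check in `getAccountInfo` is worth isolating: because the storage map is not an `Option`, a zero `expires_at` is how "nothing published" is detected. A sketch with a plain number standing in for the BlockNumber codec:

```javascript
// Sketch of getAccountInfo's "default value means absent" convention.
function decodeAccountInfo (info) {
  // a zero expiry is the map's default value, i.e. no info was ever set
  return info.expiresAt === 0 ? null : info
}

console.log(decodeAccountInfo({ identity: 'Qm...', expiresAt: 0 }))
// → null (treated as no published info)
const live = decodeAccountInfo({ identity: 'Qm...', expiresAt: 500 })
// non-zero expiry: the info object is returned unchanged
```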

+ 117 - 116
storage-node/packages/runtime-api/identities.js

@@ -8,7 +8,7 @@
  * (at your option) any later version.
  *
  * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * but WITHOUT ANY WARRANTY without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
  *
@@ -16,220 +16,221 @@
  * along with this program.  If not, see <https://www.gnu.org/licenses/>.
  */
 
-'use strict';
+'use strict'
 
-const path = require('path');
-const fs = require('fs');
-// const readline = require('readline');
+const path = require('path')
+const fs = require('fs')
+// const readline = require('readline')
 
-const debug = require('debug')('joystream:runtime:identities');
-
-const { Keyring } = require('@polkadot/keyring');
-// const { Null } = require('@polkadot/types/primitive');
-const util_crypto = require('@polkadot/util-crypto');
-
-// const { _ } = require('lodash');
+const debug = require('debug')('joystream:runtime:identities')
+const { Keyring } = require('@polkadot/keyring')
+const util_crypto = require('@polkadot/util-crypto')
 
 /*
  * Add identity management to the substrate API.
  *
  * This loosely groups: accounts, key management, and membership.
  */
-class IdentitiesApi
-{
-  static async create(base, {account_file, passphrase, canPromptForPassphrase})
-  {
-    const ret = new IdentitiesApi();
-    ret.base = base;
-    await ret.init(account_file, passphrase, canPromptForPassphrase);
-    return ret;
+class IdentitiesApi {
+  static async create (base, {account_file, passphrase, canPromptForPassphrase}) {
+    const ret = new IdentitiesApi()
+    ret.base = base
+    await ret.init(account_file, passphrase, canPromptForPassphrase)
+    return ret
   }
 
-  async init(account_file, passphrase, canPromptForPassphrase)
-  {
-    debug('Init');
+  async init (account_file, passphrase, canPromptForPassphrase) {
+    debug('Init')
 
     // Create keyring
-    this.keyring = new Keyring();
+    this.keyring = new Keyring()
 
-    this.canPromptForPassphrase = canPromptForPassphrase || false;
+    this.canPromptForPassphrase = canPromptForPassphrase || false
 
     // Load account file, if possible.
     try {
-      this.key = await this.loadUnlock(account_file, passphrase);
+      this.key = await this.loadUnlock(account_file, passphrase)
     } catch (err) {
-      debug('Error loading account file:', err.message);
+      debug('Error loading account file:', err.message)
     }
   }
 
   /*
    * Load a key file and unlock it if necessary.
    */
-  async loadUnlock(account_file, passphrase)
-  {
-    const fullname = path.resolve(account_file);
-    debug('Initializing key from', fullname);
-    const key = this.keyring.addFromJson(require(fullname));
-    await this.tryUnlock(key, passphrase);
-    debug('Successfully initialized with address', key.address);
-    return key;
+  async loadUnlock (account_file, passphrase) {
+    const fullname = path.resolve(account_file)
+    debug('Initializing key from', fullname)
+    const key = this.keyring.addFromJson(require(fullname))
+    await this.tryUnlock(key, passphrase)
+    debug('Successfully initialized with address', key.address)
+    return key
   }
 
   /*
    * Try to unlock a key if it isn't already unlocked.
    * passphrase should be supplied as argument.
    */
-  async tryUnlock(key, passphrase)
-  {
+  async tryUnlock (key, passphrase) {
     if (!key.isLocked) {
       debug('Key is not locked, not attempting to unlock')
-      return;
+      return
     }
 
     // First try with an empty passphrase - for convenience
     try {
-      key.decodePkcs8('');
+      key.decodePkcs8('')
 
       if (passphrase) {
-        debug('Key was not encrypted, supplied passphrase was ignored');
+        debug('Key was not encrypted, supplied passphrase was ignored')
       }
 
-      return;
+      return
     } catch (err) {
       // pass
     }
 
     // Then with supplied passphrase
     try {
-      debug('Decrypting with supplied passphrase');
-      key.decodePkcs8(passphrase);
-      return;
+      debug('Decrypting with supplied passphrase')
+      key.decodePkcs8(passphrase)
+      return
     } catch (err) {
       // pass
     }
 
     // If that didn't work, ask for a passphrase if appropriate
     if (this.canPromptForPassphrase) {
-      passphrase = await this.askForPassphrase(key.address);
-      key.decodePkcs8(passphrase);
+      passphrase = await this.askForPassphrase(key.address)
+      key.decodePkcs8(passphrase)
       return
     }
 
-    throw new Error('invalid passphrase supplied');
+    throw new Error('invalid passphrase supplied')
   }
 
   /*
    * Ask for a passphrase
    */
-  askForPassphrase(address)
-  {
+  askForPassphrase (address) {
     // Query for passphrase
-    const prompt = require('password-prompt');
-    return prompt(`Enter passphrase for ${address}: `, { required: false });
+    const prompt = require('password-prompt')
+    return prompt(`Enter passphrase for ${address}: `, { required: false })
   }
 
   /*
-   * Return true if the account is a member
+   * Return true if the account is a root account of a member
    */
-  async isMember(accountId)
-  {
-    const memberIds = await this.memberIdsOf(accountId); // return array of member ids
+  async isMember (accountId) {
+    const memberIds = await this.memberIdsOf(accountId) // return array of member ids
     return memberIds.length > 0 // true if at least one member id exists for the account
   }
 
   /*
-   * Return true if the account is an actor/role account
+   * Return all the member IDs of an account by the root account id
    */
-  async isActor(accountId)
-  {
-    const decoded = this.keyring.decodeAddress(accountId);
-    const actor = await this.base.api.query.actors.actorByAccountId(decoded)
-    return actor.isSome
+  async memberIdsOf (accountId) {
+    const decoded = this.keyring.decodeAddress(accountId)
+    return this.base.api.query.members.memberIdsByRootAccountId(decoded)
   }
 
   /*
-   * Return the member IDs of an account
+   * Return the first member ID of an account, or undefined if not a member root account.
    */
-  async memberIdsOf(accountId)
-  {
-    const decoded = this.keyring.decodeAddress(accountId);
-    return await this.base.api.query.members.memberIdsByRootAccountId(decoded);
+  async firstMemberIdOf (accountId) {
+    const decoded = this.keyring.decodeAddress(accountId)
+    let ids = await this.base.api.query.members.memberIdsByRootAccountId(decoded)
+    return ids[0]
   }
 
   /*
-   * Return the first member ID of an account, or undefined if not a member.
+   * Export a key pair to JSON. Will ask for a passphrase.
    */
-  async firstMemberIdOf(accountId)
-  {
-    const decoded = this.keyring.decodeAddress(accountId);
-    let ids = await this.base.api.query.members.memberIdsByRootAccountId(decoded);
-    return ids[0]
+  async exportKeyPair (accountId) {
+    const passphrase = await this.askForPassphrase(accountId)
+
+    // Produce JSON output
+    return this.keyring.toJson(accountId, passphrase)
   }
 
   /*
-   * Create a new key for the given role *name*. If no name is given,
-   * default to 'storage'.
+   * Export a key pair and write it to a JSON file with the account ID as the
+   * name.
    */
-  async createRoleKey(accountId, role)
-  {
-    role = role || 'storage';
-
-    // Generate new key pair
-    const keyPair = util_crypto.naclKeypairFromRandom();
-
-    // Encode to an address.
-    const addr = this.keyring.encodeAddress(keyPair.publicKey);
-    debug('Generated new key pair with address', addr);
+  async writeKeyPairExport (accountId, prefix) {
+    // Generate JSON
+    const data = await this.exportKeyPair(accountId)
 
-    // Add to key wring. We set the meta to identify the account as
-    // a role key.
-    const meta = {
-      name: `${role} role account for ${accountId}`,
-    };
+    // Write JSON
+    var filename = `${data.address}.json`
 
-    const createPair = require('@polkadot/keyring/pair').default;
-    const pair = createPair('ed25519', keyPair, meta);
+    if (prefix) {
+      const path = require('path')
+      filename = path.resolve(prefix, filename)
+    }
 
-    this.keyring.addPair(pair);
+    fs.writeFileSync(filename, JSON.stringify(data), {
+      encoding: 'utf8',
+      mode: 0o600
+    })
 
-    return pair;
+    return filename
   }
 
   /*
-   * Export a key pair to JSON. Will ask for a passphrase.
+   * Register account id with userInfo as a new member
+   * using default policy 0, returns new member id
    */
-  async exportKeyPair(accountId)
-  {
-    const passphrase = await this.askForPassphrase(accountId);
+  async registerMember (accountId, userInfo) {
+    const tx = this.base.api.tx.members.buyMembership(0, userInfo)
+
+    return this.base.signAndSendThenGetEventResult(accountId, tx, {
+      eventModule: 'members',
+      eventName: 'MemberRegistered',
+      eventProperty: 'MemberId'
+    })
+  }
 
-    // Produce JSON output
-    return this.keyring.toJson(accountId, passphrase);
+  /*
+   * Injects a keypair and sets it as the default identity
+   */
+  useKeyPair (keyPair) {
+    this.key = this.keyring.addPair(keyPair)
   }
 
   /*
-   * Export a key pair and write it to a JSON file with the account ID as the
-   * name.
+   * Create a new role key. If no name is given,
+   * default to 'storage-provider'.
    */
-  async writeKeyPairExport(accountId, prefix)
-  {
-    // Generate JSON
-    const data = await this.exportKeyPair(accountId);
+  async createNewRoleKey (name) {
+    name = name || 'storage-provider'
 
-    // Write JSON
-    var filename = `${data.address}.json`;
-    if (prefix) {
-      const path = require('path');
-      filename = path.resolve(prefix, filename);
+    // Generate new key pair
+    const keyPair = util_crypto.naclKeypairFromRandom()
+
+    // Encode to an address.
+    const addr = this.keyring.encodeAddress(keyPair.publicKey)
+    debug('Generated new key pair with address', addr)
+
+    // Add to key ring. We set the meta to identify the account as
+    // a role key.
+    const meta = {
+      name: `${name} role account`
     }
-    fs.writeFileSync(filename, JSON.stringify(data), {
-      encoding: 'utf8',
-      mode: 0o600,
-    });
 
-    return filename;
+    const createPair = require('@polkadot/keyring/pair').default
+    const pair = createPair('ed25519', keyPair, meta)
+
+    this.keyring.addPair(pair)
+
+    return pair
+  }
+
+  getSudoAccount() {
+    return this.base.api.query.sudo.key()
   }
 }
 
 module.exports = {
-  IdentitiesApi: IdentitiesApi,
+  IdentitiesApi
 }
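The unlock fallback order in `tryUnlock` (try the empty passphrase first for convenience, then the supplied one) can be sketched with a stub key object exposing the same `isLocked`/`decodePkcs8` surface as a polkadot keyring pair; the stub is purely illustrative.

```javascript
// Plain-JS sketch of tryUnlock's fallback order, without the prompt step.
function tryUnlock (key, passphrase) {
  if (!key.isLocked) return // already unlocked, nothing to do
  for (const attempt of ['', passphrase]) {
    try {
      key.decodePkcs8(attempt)
      return
    } catch (err) {
      // wrong passphrase: fall through to the next attempt
    }
  }
  throw new Error('invalid passphrase supplied')
}

// Stub key that only accepts the passphrase 'secret'
const key = {
  isLocked: true,
  decodePkcs8 (pass) {
    if (pass !== 'secret') throw new Error('bad passphrase')
    this.isLocked = false
  }
}
tryUnlock(key, 'secret')
console.log(key.isLocked) // → false
```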

+ 128 - 118
storage-node/packages/runtime-api/index.js

@@ -8,7 +8,7 @@
  * (at your option) any later version.
  *
  * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * but WITHOUT ANY WARRANTY without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
  *
@@ -16,70 +16,70 @@
  * along with this program.  If not, see <https://www.gnu.org/licenses/>.
  */
 
-'use strict';
+'use strict'
 
-const debug = require('debug')('joystream:runtime:base');
+const debug = require('debug')('joystream:runtime:base')
 
-const { registerJoystreamTypes } = require('@joystream/types');
-const { ApiPromise, WsProvider } = require('@polkadot/api');
+const { registerJoystreamTypes } = require('@joystream/types')
+const { ApiPromise, WsProvider } = require('@polkadot/api')
 
-const { IdentitiesApi } = require('@joystream/runtime-api/identities');
-const { BalancesApi } = require('@joystream/runtime-api/balances');
-const { RolesApi } = require('@joystream/runtime-api/roles');
-const { AssetsApi } = require('@joystream/runtime-api/assets');
-const { DiscoveryApi } = require('@joystream/runtime-api/discovery');
-const AsyncLock = require('async-lock');
+const { IdentitiesApi } = require('@joystream/storage-runtime-api/identities')
+const { BalancesApi } = require('@joystream/storage-runtime-api/balances')
+const { WorkersApi } = require('@joystream/storage-runtime-api/workers')
+const { AssetsApi } = require('@joystream/storage-runtime-api/assets')
+const { DiscoveryApi } = require('@joystream/storage-runtime-api/discovery')
+const AsyncLock = require('async-lock')
+const { newExternallyControlledPromise } = require('@joystream/storage-utils/externalPromise')
 
 /*
  * Initialize runtime (substrate) API and keyring.
  */
-class RuntimeApi
-{
-  static async create(options)
-  {
-    const runtime_api = new RuntimeApi();
-    await runtime_api.init(options || {});
-    return runtime_api;
+class RuntimeApi {
+  static async create (options) {
+    const runtime_api = new RuntimeApi()
+    await runtime_api.init(options || {})
+    return runtime_api
   }
 
-  async init(options)
-  {
-    debug('Init');
+  async init (options) {
+    debug('Init')
 
-    options = options || {};
+    options = options || {}
 
     // Register joystream types
-    registerJoystreamTypes();
+    registerJoystreamTypes()
 
-    const provider = new WsProvider(options.provider_url || 'ws://localhost:9944');
+    const provider = new WsProvider(options.provider_url || 'ws://localhost:9944')
 
     // Create the API instance
-    this.api = await ApiPromise.create({ provider });
+    this.api = await ApiPromise.create({ provider })
 
-    this.asyncLock = new AsyncLock();
+    this.asyncLock = new AsyncLock()
 
     // Keep track locally of account nonces.
-    this.nonces = {};
+    this.nonces = {}
+
+    // The storage provider id to use
+    this.storageProviderId = parseInt(options.storageProviderId) // u64 instead ?
 
     // Ok, create individual APIs
     this.identities = await IdentitiesApi.create(this, {
       account_file: options.account_file,
       passphrase: options.passphrase,
       canPromptForPassphrase: options.canPromptForPassphrase
-    });
-    this.balances = await BalancesApi.create(this);
-    this.roles = await RolesApi.create(this);
-    this.assets = await AssetsApi.create(this);
-    this.discovery = await DiscoveryApi.create(this);
+    })
+    this.balances = await BalancesApi.create(this)
+    this.workers = await WorkersApi.create(this)
+    this.assets = await AssetsApi.create(this)
+    this.discovery = await DiscoveryApi.create(this)
   }
 
-  disconnect()
-  {
-    this.api.disconnect();
+  disconnect () {
+    this.api.disconnect()
   }
 
-  executeWithAccountLock(account_id, func) {
-    return this.asyncLock.acquire(`${account_id}`, func);
+  executeWithAccountLock (account_id, func) {
+    return this.asyncLock.acquire(`${account_id}`, func)
   }
 
   /*
@@ -89,47 +89,45 @@ class RuntimeApi
    * The result of the Promise is an array containing first the full event
    * name, and then the event fields as an object.
    */
-  async waitForEvent(module, name)
-  {
-    return this.waitForEvents([[module, name]]);
+  async waitForEvent (module, name) {
+    return this.waitForEvents([[module, name]])
   }
 
-  _matchingEvents(subscribed, events)
-  {
-    debug(`Number of events: ${events.length}; subscribed to ${subscribed}`);
+  _matchingEvents(subscribed, events) {
+    debug(`Number of events: ${events.length}; subscribed to ${subscribed}`)
 
     const filtered = events.filter((record) => {
-      const { event, phase } = record;
+      const { event, phase } = record
 
       // Show what we are busy with
-      debug(`\t${event.section}:${event.method}:: (phase=${phase.toString()})`);
-      debug(`\t\t${event.meta.documentation.toString()}`);
+      debug(`\t${event.section}:${event.method}:: (phase=${phase.toString()})`)
+      debug(`\t\t${event.meta.documentation.toString()}`)
 
       // Skip events we're not interested in.
       const matching = subscribed.filter((value) => {
-        return event.section == value[0] && event.method == value[1];
-      });
-      return matching.length > 0;
-    });
-    debug(`Filtered: ${filtered.length}`);
+        return event.section === value[0] && event.method === value[1]
+      })
+      return matching.length > 0
+    })
+    debug(`Filtered: ${filtered.length}`)
 
     const mapped = filtered.map((record) => {
-      const { event } = record;
-      const types = event.typeDef;
+      const { event } = record
+      const types = event.typeDef
 
       // Loop through each of the parameters, displaying the type and data
-      const payload = {};
+      const payload = {}
       event.data.forEach((data, index) => {
-        debug(`\t\t\t${types[index].type}: ${data.toString()}`);
-        payload[types[index].type] = data;
-      });
+        debug(`\t\t\t${types[index].type}: ${data.toString()}`)
+        payload[types[index].type] = data
+      })
 
-      const full_name = `${event.section}.${event.method}`;
-      return [full_name, payload];
-    });
-    debug('Mapped', mapped);
+      const full_name = `${event.section}.${event.method}`
+      return [full_name, payload]
+    })
+    debug('Mapped', mapped)
 
-    return mapped;
+    return mapped
   }
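
The filtering and mapping done by `_matchingEvents` can be sketched in isolation. This is a standalone illustration, not the library code: plain objects stand in for polkadot-js event records, which expose the same `event.section` / `event.method` / `event.data` shape used above.

```javascript
// Standalone sketch of the event filtering in _matchingEvents above.
// Plain objects stand in for polkadot-js EventRecord instances.
function matchingEvents (subscribed, events) {
  return events
    .filter(({ event }) =>
      subscribed.some(([section, method]) =>
        event.section === section && event.method === method))
    .map(({ event }) => [`${event.section}.${event.method}`, event.data])
}

const events = [
  { event: { section: 'system', method: 'ExtrinsicSuccess', data: {} } },
  { event: { section: 'balances', method: 'Transfer', data: { amount: 10 } } }
]

// Only the subscribed (balances, Transfer) event survives the filter.
console.log(matchingEvents([['balances', 'Transfer']], events))
```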
 
   /*
@@ -139,16 +137,15 @@ class RuntimeApi
    *
    * Returns the first matched event *only*.
    */
-  async waitForEvents(subscribed)
-  {
+  async waitForEvents (subscribed) {
     return new Promise((resolve, reject) => {
       this.api.query.system.events((events) => {
-        const matches = this._matchingEvents(subscribed, events);
+        const matches = this._matchingEvents(subscribed, events)
         if (matches && matches.length) {
-          resolve(matches);
+          resolve(matches)
         }
-      });
-    });
+      })
+    })
   }
 
   /*
@@ -159,68 +156,68 @@ class RuntimeApi
    * If the subscribed events are given, and a callback as well, then the
    * callback is invoked with matching events.
    */
-  async signAndSend(accountId, tx, attempts, subscribed, callback)
-  {
-    // Prepare key
-    const from_key = this.identities.keyring.getPair(accountId);
+  async signAndSend (accountId, tx, attempts, subscribed, callback) {
+    accountId = this.identities.keyring.encodeAddress(accountId)
 
+    // Key must be unlocked
+    const from_key = this.identities.keyring.getPair(accountId)
     if (from_key.isLocked) {
-      throw new Error('Must unlock key before using it to sign!');
+      throw new Error('Must unlock key before using it to sign!')
     }
 
-    const finalizedPromise = newExternallyControlledPromise();
+    const finalizedPromise = newExternallyControlledPromise()
 
-    let unsubscribe = await this.executeWithAccountLock(accountId,  async () => {
+    let unsubscribe = await this.executeWithAccountLock(accountId, async () => {
       // Try to get the next nonce to use
-      let nonce = this.nonces[accountId];
+      let nonce = this.nonces[accountId]
 
       let incrementNonce = () => {
         // only increment once
-        incrementNonce = () => {}; // turn it into a no-op
-        nonce = nonce.addn(1);
-        this.nonces[accountId] = nonce;
+        incrementNonce = () => {} // turn it into a no-op
+        nonce = nonce.addn(1)
+        this.nonces[accountId] = nonce
       }
 
       // If the nonce isn't available, get it from chain.
       if (!nonce) {
         // current nonce
-        nonce = await this.api.query.system.accountNonce(accountId);
-        debug(`Got nonce for ${accountId} from chain: ${nonce}`);
+        nonce = await this.api.query.system.accountNonce(accountId)
+        debug(`Got nonce for ${accountId} from chain: ${nonce}`)
       }
 
       return new Promise((resolve, reject) => {
-        debug('Signing and sending tx');
+        debug('Signing and sending tx')
         // send(statusUpdates) returns a function for unsubscribing from status updates
         let unsubscribe = tx.sign(from_key, { nonce })
           .send(({events = [], status}) => {
-            debug(`TX status: ${status.type}`);
+            debug(`TX status: ${status.type}`)
 
             // Whatever events we get, process them if there's someone interested.
             // It is critical that this event handling doesn't prevent
             try {
               if (subscribed && callback) {
-                const matched = this._matchingEvents(subscribed, events);
-                debug('Matching events:', matched);
+                const matched = this._matchingEvents(subscribed, events)
+                debug('Matching events:', matched)
                 if (matched.length) {
-                  callback(matched);
+                  callback(matched)
                 }
               }
-            } catch(err) {
+            } catch (err) {
               debug(`Error handling events ${err.stack}`)
             }
 
             // We want to release lock as early as possible, sometimes Ready status
             // doesn't occur, so we do it on Broadcast instead
             if (status.isReady) {
-              debug('TX Ready.');
-              incrementNonce();
-              resolve(unsubscribe); //releases lock
+              debug('TX Ready.')
+              incrementNonce()
+              resolve(unsubscribe) // releases lock
             } else if (status.isBroadcast) {
-              debug('TX Broadcast.');
-              incrementNonce();
-              resolve(unsubscribe); //releases lock
+              debug('TX Broadcast.')
+              incrementNonce()
+              resolve(unsubscribe) // releases lock
             } else if (status.isFinalized) {
-              debug('TX Finalized.');
+              debug('TX Finalized.')
               finalizedPromise.resolve(status)
             } else if (status.isFuture) {
               // comes before ready.
@@ -228,10 +225,10 @@ class RuntimeApi
               // nonce was set in the future. Treating it as an error for now.
               debug('TX Future!')
               // nonce is likely out of sync, delete it so we reload it from chain on next attempt
-              delete this.nonces[accountId];
-              const err = new Error('transaction nonce set in future');
-              finalizedPromise.reject(err);
-              reject(err);
+              delete this.nonces[accountId]
+              const err = new Error('transaction nonce set in future')
+              finalizedPromise.reject(err)
+              reject(err)
             }
 
             /* why don't we see these status updates on local devchain (single node)
@@ -247,45 +244,58 @@ class RuntimeApi
             // Remember this can also happen if in the past we sent a tx with a future nonce, and the current nonce
             // now matches it.
             if (err) {
-              const errstr = err.toString();
+              const errstr = err.toString()
               // not the best way to check error code.
               // https://github.com/polkadot-js/api/blob/master/packages/rpc-provider/src/coder/index.ts#L52
               if (errstr.indexOf('Error: 1014:') < 0 && // low priority
                   errstr.indexOf('Error: 1010:') < 0) // bad transaction
               {
                 // Error but not nonce related. (bad arguments maybe)
-                debug('TX error', err);
+                debug('TX error', err)
               } else {
                 // nonce is likely out of sync, delete it so we reload it from chain on next attempt
-                delete this.nonces[accountId];
+                delete this.nonces[accountId]
               }
             }
 
-            finalizedPromise.reject(err);
+            finalizedPromise.reject(err)
             // releases lock
-            reject(err);
-          });
-      });
+            reject(err)
+          })
+      })
     })
 
    // when does it make sense to manually unsubscribe?
     // at this point unsubscribe.then and unsubscribe.catch have been deleted
-    // unsubscribe(); // don't unsubscribe if we want to wait for additional status
+    // unsubscribe() // don't unsubscribe if we want to wait for additional status
     // updates to know when the tx has been finalized
-    return finalizedPromise.promise;
+    return finalizedPromise.promise
   }
+
+  /*
+   * Sign and send a transaction expect event from
+   * module and return eventProperty from the event.
+   */
+  async signAndSendThenGetEventResult (senderAccountId, tx, { eventModule, eventName, eventProperty }) {
+    // subscribe to the expected event from the module
+    const subscribed = [[eventModule, eventName]]
+    return new Promise(async (resolve, reject) => {
+      try {
+        await this.signAndSend(senderAccountId, tx, 1, subscribed, (events) => {
+          events.forEach((event) => {
+            // FIXME: we may not necessarily want the first event
+            // if multiple events are emitted
+            resolve(event[1][eventProperty])
+          })
+        })
+      } catch (err) {
+        reject(err)
+      }
+    })
+  }
+
 }
 
 module.exports = {
-  RuntimeApi: RuntimeApi,
+  RuntimeApi
 }
-
-function newExternallyControlledPromise () {
-  // externally controller promise
-  let resolve, reject;
-  const promise = new Promise((res, rej) => {
-    resolve = res;
-    reject = rej;
-  });
-  return ({resolve, reject, promise});
-}
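
The local helper deleted above is now imported from `@joystream/storage-utils/externalPromise`. Its assumed shape (mirroring the removed implementation, so this is a sketch rather than the package's definitive export) is a promise whose `resolve`/`reject` are exposed so code outside the executor, such as a tx status callback, can settle it:

```javascript
// Assumed shape of the helper imported from
// '@joystream/storage-utils/externalPromise', mirroring the local
// implementation deleted above.
function newExternallyControlledPromise () {
  let resolve, reject
  const promise = new Promise((res, rej) => {
    resolve = res
    reject = rej
  })
  return { resolve, reject, promise }
}

// Usage: settle the promise from a later callback, much as signAndSend
// resolves its finalizedPromise when the tx status reaches Finalized.
const finalized = newExternallyControlledPromise()
finalized.promise.then((status) => console.log('tx status:', status))
finalized.resolve('Finalized')
```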

+ 3 - 2
storage-node/packages/runtime-api/package.json

@@ -1,5 +1,6 @@
 {
-  "name": "@joystream/runtime-api",
+  "name": "@joystream/storage-runtime-api",
+  "private": true,
   "version": "0.1.0",
   "description": "Runtime API abstraction for Joystream Storage Node",
   "author": "Joystream",
@@ -44,7 +45,7 @@
     "temp": "^0.9.0"
   },
   "dependencies": {
-    "@joystream/types": "^0.10.0",
+    "@joystream/types": "^0.11.0",
     "@polkadot/api": "^0.96.1",
     "async-lock": "^1.2.0",
     "lodash": "^4.17.11",

+ 0 - 186
storage-node/packages/runtime-api/roles.js

@@ -1,186 +0,0 @@
-/*
- * This file is part of the storage node for the Joystream project.
- * Copyright (C) 2019 Joystream Contributors
- *
- * This program is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <https://www.gnu.org/licenses/>.
- */
-
-'use strict';
-
-const debug = require('debug')('joystream:runtime:roles');
-
-const { Null, u64 } = require('@polkadot/types');
-
-const { _ } = require('lodash');
-
-/*
- * Add role related functionality to the substrate API.
- */
-class RolesApi
-{
-  static async create(base)
-  {
-    const ret = new RolesApi();
-    ret.base = base;
-    await ret.init();
-    return ret;
-  }
-
-  async init()
-  {
-    debug('Init');
-
-    // Constants
-    this.ROLE_STORAGE = 'StorageProvider'; // new u64(0x00);
-  }
-
-  /*
-   * Raises errors if the given account ID is not valid for staking as the given
-   * role. The role should be one of the ROLE_* constants above.
-   */
-  async checkAccountForStaking(accountId, role)
-  {
-    role = role || this.ROLE_STORAGE;
-
-    if (!await this.base.identities.isMember(accountId)) {
-      const msg = `Account with id "${accountId}" is not a member!`;
-      debug(msg);
-      throw new Error(msg);
-    }
-
-    if (!await this.hasBalanceForRoleStaking(accountId, role)) {
-      const msg = `Account with id "${accountId}" does not have sufficient free balance for role staking!`;
-      debug(msg);
-      throw new Error(msg);
-    }
-
-    debug(`Account with id "${accountId}" is a member with sufficient free balance, able to proceed.`);
-    return true;
-  }
-
-  /*
-   * Returns the required balance for staking for a role.
-   */
-  async requiredBalanceForRoleStaking(role)
-  {
-    const params = await this.base.api.query.actors.parameters(role);
-    if (params.isNone) {
-      throw new Error(`Role ${role} is not defined!`);
-    }
-    const result = params.raw.min_stake
-      .add(params.raw.entry_request_fee)
-      .add(await this.base.balances.baseTransactionFee());
-    return result;
-  }
-
-  /*
-   * Returns true/false if the given account has the balance required for
-   * staking for the given role.
-   */
-  async hasBalanceForRoleStaking(accountId, role)
-  {
-    const required = await this.requiredBalanceForRoleStaking(role);
-    return await this.base.balances.hasMinimumBalanceOf(accountId, required);
-  }
-
-  /*
-   * Transfer enough funds to allow the recipient to stake for the given role.
-   */
-  async transferForStaking(from, to, role)
-  {
-    const required = await this.requiredBalanceForRoleStaking(role);
-    return await this.base.balances.transfer(from, to, required);
-  }
-
-  /*
-   * Return current accounts holding a role.
-   */
-  async accountIdsByRole(role)
-  {
-    const ids = await this.base.api.query.actors.accountIdsByRole(role);
-    return ids.map(id => id.toString());
-  }
-
-  /*
-   * Returns the number of slots available for a role
-   */
-  async availableSlotsForRole(role)
-  {
-    let params = await this.base.api.query.actors.parameters(role);
-    if (params.isNone) {
-      throw new Error(`Role ${role} is not defined!`);
-    }
-    params = params.unwrap();
-    const slots = params.max_actors;
-    const active = await this.accountIdsByRole(role);
-    return (slots.subn(active.length)).toNumber();
-  }
-
-  /*
-   * Send a role application.
-   * - The role account must not be a member, but have sufficient funds for
-   *   staking.
-   * - The member account must be a member.
-   *
-   * After sending this application, the member account will have role request
-   * in the 'My Requests' tab of the app.
-   */
-  async applyForRole(roleAccountId, role, memberAccountId)
-  {
-    const memberId = await this.base.identities.firstMemberIdOf(memberAccountId);
-    if (memberId == undefined) {
-      throw new Error('Account is not a member!');
-    }
-
-    const tx = this.base.api.tx.actors.roleEntryRequest(role, memberId);
-    return await this.base.signAndSend(roleAccountId, tx);
-  }
-
-  /*
-   * Check whether the given role is occupying the given role.
-   */
-  async checkForRole(roleAccountId, role)
-  {
-    const actor = await this.base.api.query.actors.actorByAccountId(roleAccountId);
-    return !_.isEqual(actor.raw, new Null());
-  }
-
-  /*
-   * Same as checkForRole(), but if the account is not currently occupying the
-   * role, wait for the appropriate `actors.Staked` event to be emitted.
-   */
-  async waitForRole(roleAccountId, role)
-  {
-    if (await this.checkForRole(roleAccountId, role)) {
-      return true;
-    }
-
-    return new Promise((resolve, reject) => {
-      this.base.waitForEvent('actors', 'Staked').then((values) => {
-        const name = values[0][0];
-        const payload = values[0][1];
-
-        if (payload.AccountId == roleAccountId) {
-          resolve(true);
-        } else {
-          // reject() ?
-        }
-      });
-    });
-  }
-}
-
-module.exports = {
-  RolesApi: RolesApi,
-}

+ 1 - 2
storage-node/packages/runtime-api/test/assets.js

@@ -22,7 +22,7 @@ const mocha = require('mocha');
 const expect = require('chai').expect;
 const sinon = require('sinon');
 
-const { RuntimeApi } = require('@joystream/runtime-api');
+const { RuntimeApi } = require('@joystream/storage-runtime-api');
 
 describe('Assets', () => {
   var api;
@@ -47,6 +47,5 @@ describe('Assets', () => {
   it('can accept content');
   it('can reject content');
   it('can create a storage relationship for content');
-  it('can create a storage relationship for content and return it');
  it('can toggle a storage relationship to ready state');
 });

+ 1 - 4
storage-node/packages/runtime-api/test/balances.js

@@ -22,7 +22,7 @@ const mocha = require('mocha');
 const expect = require('chai').expect;
 const sinon = require('sinon');
 
-const { RuntimeApi } = require('@joystream/runtime-api');
+const { RuntimeApi } = require('@joystream/storage-runtime-api');
 
 describe('Balances', () => {
   var api;
@@ -49,7 +49,4 @@ describe('Balances', () => {
     // >= 0 comparison works
     expect(fee.cmpn(0)).to.be.at.least(0);
   });
-
-  // TODO implemtable only with accounts with balance
-  it('can transfer funds');
 });

+ 1 - 8
storage-node/packages/runtime-api/test/identities.js

@@ -23,7 +23,7 @@ const expect = require('chai').expect;
 const sinon = require('sinon');
 const temp = require('temp').track();
 
-const { RuntimeApi } = require('@joystream/runtime-api');
+const { RuntimeApi } = require('@joystream/storage-runtime-api');
 
 describe('Identities', () => {
   var api;
@@ -31,13 +31,6 @@ describe('Identities', () => {
     api = await RuntimeApi.create({ canPromptForPassphrase: true });
   });
 
-  it('creates role keys', async () => {
-    const key = await api.identities.createRoleKey('foo', 'bar');
-    expect(key).to.have.property('type', 'ed25519');
-    expect(key.meta.name).to.include('foo');
-    expect(key.meta.name).to.include('bar');
-  });
-
   it('imports keys', async () => {
     // Unlocked keys can be imported without asking for a passphrase
     await api.identities.loadUnlock('test/data/edwards_unlocked.json');

+ 1 - 1
storage-node/packages/runtime-api/test/index.js

@@ -21,7 +21,7 @@
 const mocha = require('mocha');
 const expect = require('chai').expect;
 
-const { RuntimeApi } = require('@joystream/runtime-api');
+const { RuntimeApi } = require('@joystream/storage-runtime-api');
 
 describe('RuntimeApi', () => {
   it('can be created', async () => {

+ 0 - 67
storage-node/packages/runtime-api/test/roles.js

@@ -1,67 +0,0 @@
-/*
- * This file is part of the storage node for the Joystream project.
- * Copyright (C) 2019 Joystream Contributors
- *
- * This program is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <https://www.gnu.org/licenses/>.
- */
-
-'use strict';
-
-const mocha = require('mocha');
-const expect = require('chai').expect;
-const sinon = require('sinon');
-
-const { RuntimeApi } = require('@joystream/runtime-api');
-
-describe('Roles', () => {
-  var api;
-  var key;
-  before(async () => {
-    api = await RuntimeApi.create();
-    key = await api.identities.loadUnlock('test/data/edwards_unlocked.json');
-  });
-
-  it('returns the required balance for role staking', async () => {
-    const amount = await api.roles.requiredBalanceForRoleStaking(api.roles.ROLE_STORAGE);
-
-    // Effectively checks that the role is at least defined.
-    expect(amount.cmpn(0)).to.be.above(0);
-  });
-
-  it('returns whether an account has funds for role staking', async () => {
-    expect(await api.roles.hasBalanceForRoleStaking(key.address, api.roles.ROLE_STORAGE)).to.be.false;
-  });
-
-  it('returns accounts for a role', async () => {
-    const accounts = await api.roles.accountIdsByRole(api.roles.ROLE_STORAGE);
-    // The chain may have accounts configured, so go for the bare minimum in
-    // expectations.
-    expect(accounts).to.have.lengthOf.above(-1);
-  });
-
-  it('can check whether an account fulfils requirements for role staking', async () => {
-    expect(async _ => {
-      await api.roles.checkAccountForRoleStaking(key.address, api.roles.ROLE_STORAGE);
-    }).to.throw;
-  });
-
-  it('can check for an account to have a role', async () => {
-    expect(await api.roles.checkForRole(key.address, api.roles.ROLE_STORAGE)).to.be.false;
-  });
-
-  // TODO requires complex setup, and may change in the near future.
-  it('transfers funds for staking');
-  it('can apply for a role');
-  it('can wait for an account to have a role');
-});

+ 298 - 0
storage-node/packages/runtime-api/workers.js

@@ -0,0 +1,298 @@
+/*
+ * This file is part of the storage node for the Joystream project.
+ * Copyright (C) 2019 Joystream Contributors
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <https://www.gnu.org/licenses/>.
+ */
+
+'use strict'
+
+const debug = require('debug')('joystream:runtime:roles')
+const BN = require('bn.js')
+const { Worker } = require('@joystream/types/working-group')
+
+/*
+ * Add worker related functionality to the substrate API.
+ */
+class WorkersApi {
+  static async create (base) {
+    const ret = new WorkersApi()
+    ret.base = base
+    await ret.init()
+    return ret
+  }
+
+
+  // eslint-disable-next-line class-methods-use-this, require-await
+  async init () {
+    debug('Init')
+  }
+
+  /*
+   * Check whether the given account and id represent an enrolled storage provider
+   */
+  async isRoleAccountOfStorageProvider (storageProviderId, roleAccountId) {
+    const id = new BN(storageProviderId)
+    const roleAccount = this.base.identities.keyring.decodeAddress(roleAccountId)
+    const providerAccount = await this.storageProviderRoleAccount(id)
+    return providerAccount && providerAccount.eq(roleAccount)
+  }
+
+  /*
+   * Returns true if the provider id is enrolled
+   */
+  async isStorageProvider (storageProviderId) {
+    const worker = await this.storageWorkerByProviderId(storageProviderId)
+    return worker !== null
+  }
+
+  /*
+   * Returns a provider's role account or null if provider doesn't exist
+   */
+  async storageProviderRoleAccount (storageProviderId) {
+    const worker = await this.storageWorkerByProviderId(storageProviderId)
+    return worker ? worker.role_account_id : null
+  }
+
+  /*
+   * Returns a Worker instance or null if provider does not exist
+   */
+  async storageWorkerByProviderId (storageProviderId) {
+    const id = new BN(storageProviderId)
+    const { providers } = await this.getAllProviders()
+    return providers[id.toNumber()] || null
+  }
+
+  /*
+   * Returns the first provider id found with a matching role account, or null if none is found
+   */
+  async findProviderIdByRoleAccount (roleAccount) {
+    const { ids, providers } = await this.getAllProviders()
+
+    for (let i = 0; i < ids.length; i++) {
+      const id = ids[i]
+      if (providers[id].role_account_id.eq(roleAccount)) {
+        return id
+      }
+    }
+
+    return null
+  }
+
+  /*
+   * Returns the set of ids and Worker instances of providers enrolled on the network
+   */
+  async getAllProviders () {
+    // const workerEntries = await this.base.api.query.storageWorkingGroup.workerById()
+    // can't rely on .isEmpty or isNone property to detect empty map
+    // return workerEntries.isNone ? [] : workerEntries[0]
+    // return workerEntries.isEmpty ? [] : workerEntries[0]
+    // So we iterate over possible ids which may or may not exist, by reading directly
+    // from storage value
+    const nextWorkerId = (await this.base.api.query.storageWorkingGroup.nextWorkerId()).toNumber()
+    const ids = []
+    const providers = {}
+    for (let id = 0; id < nextWorkerId; id++) {
+      // We get back an Option. Will be None if value doesn't exist
+      // eslint-disable-next-line no-await-in-loop
+      let value = await this.base.api.rpc.state.getStorage(
+        this.base.api.query.storageWorkingGroup.workerById.key(id)
+      )
+
+      if (!value.isNone) {
+        // no need to read from storage again!
+        // const worker = (await this.base.api.query.storageWorkingGroup.workerById(id))[0]
+        value = value.unwrap()
+        // construct the Worker type from raw data
+        // const worker = createType('WorkerOf', value)
+        // const worker = new Worker(value)
+        ids.push(id)
+        providers[id] = new Worker(value)
+      }
+    }
+
+    return { ids, providers }
+  }
+
+  async getLeadRoleAccount() {
+    const currentLead = await this.base.api.query.storageWorkingGroup.currentLead()
+    if (currentLead.isSome) {
+      const leadWorkerId = currentLead.unwrap()
+      const worker = await this.base.api.query.storageWorkingGroup.workerById(leadWorkerId)
+      return worker[0].role_account_id
+    }
+    return null
+  }
+
+  // Helper methods below don't really belong in the colossus runtime api library.
+  // They are only used by the dev-init command in the cli to set up a development environment.
+
+  /*
+   * Add a new storage group opening using the lead account. Returns the
+   * new opening id.
+   */
+  async dev_addStorageOpening() {
+    const openTx = this.dev_makeAddOpeningTx('Worker')
+    return this.dev_submitAddOpeningTx(openTx, await this.getLeadRoleAccount())
+  }
+
+  /*
+   * Add a new storage working group lead opening using sudo account. Returns the
+   * new opening id.
+   */
+  async dev_addStorageLeadOpening() {
+    const openTx = this.dev_makeAddOpeningTx('Leader')
+    const sudoTx = this.base.api.tx.sudo.sudo(openTx)
+    return this.dev_submitAddOpeningTx(sudoTx, await this.base.identities.getSudoAccount())
+  }
+
+  /*
+   * Constructs an addOpening tx of openingType
+   */
+  dev_makeAddOpeningTx(openingType) {
+    return this.base.api.tx.storageWorkingGroup.addOpening(
+      'CurrentBlock',
+      {
+        application_rationing_policy: {
+          'max_active_applicants': 1
+        },
+        max_review_period_length: 1000
+        // default values for everything else..
+      },
+      'dev-opening',
+      openingType
+    )
+  }
+
+  /*
+   * Submits a tx (expecting it to dispatch storageWorkingGroup.addOpening) and returns
+   * the OpeningId from the resulting event.
+   */
+  async dev_submitAddOpeningTx(tx, senderAccount) {
+    return this.base.signAndSendThenGetEventResult(senderAccount, tx, {
+      eventModule: 'storageWorkingGroup',
+      eventName: 'OpeningAdded',
+      eventProperty: 'OpeningId'
+    })
+  }
+
+  /*
+   * Apply on an opening, returns the application id.
+   */
+  async dev_applyOnOpening(openingId, memberId, memberAccount, roleAccount) {
+    const applyTx = this.base.api.tx.storageWorkingGroup.applyOnOpening(
+      memberId, openingId, roleAccount, null, null, `colossus-${memberId}`
+    )
+
+    return this.base.signAndSendThenGetEventResult(memberAccount, applyTx, {
+      eventModule: 'storageWorkingGroup',
+      eventName: 'AppliedOnOpening',
+      eventProperty: 'ApplicationId'
+    })
+  }
+
+  /*
+   * Move lead opening to review state using sudo account
+   */
+  async dev_beginLeadOpeningReview(openingId) {
+    const beginReviewTx = this.dev_makeBeginOpeningReviewTx(openingId)
+    const sudoTx = this.base.api.tx.sudo.sudo(beginReviewTx)
+    return this.base.signAndSend(await this.base.identities.getSudoAccount(), sudoTx)
+  }
+
+  /*
+   * Move a storage opening to review state using lead account
+   */
+  async dev_beginStorageOpeningReview(openingId) {
+    const beginReviewTx = this.dev_makeBeginOpeningReviewTx(openingId)
+    return this.base.signAndSend(await this.getLeadRoleAccount(), beginReviewTx)
+  }
+
+  /*
+   * Constructs a beginApplicantReview tx for openingId, which puts an opening into the review state
+   */
+  dev_makeBeginOpeningReviewTx(openingId) {
+    return this.base.api.tx.storageWorkingGroup.beginApplicantReview(openingId)
+  }
+
+  /*
+   * Fill a lead opening, return the assigned worker id, using the sudo account
+   */
+  async dev_fillLeadOpening(openingId, applicationId) {
+    const fillTx = this.dev_makeFillOpeningTx(openingId, applicationId)
+    const sudoTx = this.base.api.tx.sudo.sudo(fillTx)
+    const filled = await this.dev_submitFillOpeningTx(
+      await this.base.identities.getSudoAccount(), sudoTx)
+    return getWorkerIdFromApplicationIdToWorkerIdMap(filled, applicationId)
+  }
+
+  /*
+   * Fill a storage opening, return the assigned worker id, using the lead account
+   */
+  async dev_fillStorageOpening(openingId, applicationId) {
+    const fillTx = this.dev_makeFillOpeningTx(openingId, applicationId)
+    const filled = await this.dev_submitFillOpeningTx(await this.getLeadRoleAccount(), fillTx)
+    return getWorkerIdFromApplicationIdToWorkerIdMap(filled, applicationId)
+  }
+
+  /*
+   * Constructs a FillOpening transaction
+   */
+  dev_makeFillOpeningTx(openingId, applicationId) {
+    return this.base.api.tx.storageWorkingGroup.fillOpening(openingId, [applicationId], null)
+  }
+
+  /*
+   * Dispatches a fill opening tx and returns a map of the application id to their new assigned worker ids.
+   */
+  async dev_submitFillOpeningTx(senderAccount, tx) {
+    return this.base.signAndSendThenGetEventResult(senderAccount, tx, {
+      eventModule: 'storageWorkingGroup',
+      eventName: 'OpeningFilled',
+      eventProperty: 'ApplicationIdToWorkerIdMap'
+    })
+  }
+}
+
+/*
+ * Finds assigned worker id corresponding to the application id from the resulting
+ * ApplicationIdToWorkerIdMap map in the OpeningFilled event. Expects map to
+ * contain at least one entry.
+ */
+function getWorkerIdFromApplicationIdToWorkerIdMap (filledMap, applicationId) {
+  if (filledMap.size === 0) {
+    throw new Error('Expected opening to be filled!')
+  }
+
+  let ourApplicationIdKey
+
+  for (let key of filledMap.keys()) {
+    if (key.eq(applicationId)) {
+      ourApplicationIdKey = key
+      break
+    }
+  }
+
+  if (!ourApplicationIdKey) {
+    throw new Error('Expected application id to have been filled!')
+  }
+
+  const workerId = filledMap.get(ourApplicationIdKey)
+
+  return workerId
+}
+
+module.exports = {
+  WorkersApi
+}

+ 0 - 3
storage-node/packages/storage/README.md

@@ -2,9 +2,6 @@
 
 This package contains an abstraction over the storage backend of colossus.
 
-Its main purpose is to allow testing the storage subsystem without having to
-run a blockchain node.
-
 In the current version, the storage is backed by IPFS. In order to run tests,
 you have to also run an [IPFS node](https://dist.ipfs.io/#go-ipfs).
 

+ 2 - 1
storage-node/packages/storage/package.json

@@ -1,5 +1,6 @@
 {
-  "name": "@joystream/storage",
+  "name": "@joystream/storage-node-backend",
+  "private": true,
   "version": "0.1.0",
   "description": "Storage management code for Joystream Storage Node",
   "author": "Joystream",

+ 3 - 0
storage-node/packages/storage/storage.js

@@ -383,6 +383,8 @@ class Storage
   {
     const resolved = await this._resolve_content_id_with_timeout(this._timeout, content_id);
 
+    // TODO: validate that the resolved id is a proper ipfs_cid, not null or an empty string
+
     if (this.pins[resolved]) {
       return;
     }
@@ -396,6 +398,7 @@ class Storage
         delete this.pins[resolved];
       } else {
         debug(`Pinned ${resolved}`);
+        // TODO: shouldn't we be recording the pin here with this.pins[resolved] = true?
       }
     });
   }

+ 48 - 55
storage-node/packages/storage/test/storage.js

@@ -26,7 +26,7 @@ const expect = chai.expect;
 
 const fs = require('fs');
 
-const { Storage } = require('@joystream/storage');
+const { Storage } = require('@joystream/storage-node-backend');
 
 const IPFS_CID_REGEX = /^Qm[1-9A-HJ-NP-Za-km-z]{44}$/;
 
@@ -40,28 +40,27 @@ function write(store, content_id, contents, callback)
       });
       stream.on('committed', callback);
 
-      stream.write(contents);
-      stream.end();
+      if (!stream.write(contents)) {
+        stream.once('drain', () => stream.end())
+      } else {
+        process.nextTick(() => stream.end())
+      }
     })
     .catch((err) => {
       expect.fail(err);
     });
 }
 
-function read_all(stream)
-{
-  const chunks = []
-  let chunk
-  do {
-    chunk = stream.read();
-    if (chunk) {
-        chunks.push(chunk)
-    }
-  } while (chunk);
-  return Buffer.concat(chunks);
+function read_all (stream) {
+  return new Promise((resolve, reject) => {
+    const chunks = []
+    stream.on('data', chunk => chunks.push(chunk))
+    stream.on('end', () => resolve(Buffer.concat(chunks)))
+    stream.on('error', err => reject(err))
+    stream.resume()
+  })
 }
 
-
 function create_known_object(content_id, contents, callback)
 {
   var hash;
@@ -96,45 +95,43 @@ describe('storage/storage', () => {
 
     it('detects the MIME type of a write stream', (done) => {
       const contents = fs.readFileSync('../../storage-node_new.svg');
+      storage.open('mime-test', 'w')
+        .then((stream) => {
+          var file_info;
+          stream.on('file_info', (info) => {
+            // Could filter & abort here now, but we're just going to set this,
+            // and expect it to be set later...
+            file_info = info;
+          });
 
-      create_known_object('foobar', contents, (store, hash) => {
-        var file_info;
-        store.open('mime-test', 'w')
-          .then((stream) => {
-
-            stream.on('file_info', (info) => {
-              // Could filter & abort here now, but we're just going to set this,
-              // and expect it to be set later...
-              file_info = info;
-            });
-
-            stream.on('finish', () => {
-              stream.commit();
-            });
-            stream.on('committed', (hash) => {
-              // ... if file_info is not set here, there's an issue.
-              expect(file_info).to.have.property('mime_type', 'application/xml');
-              expect(file_info).to.have.property('ext', 'xml');
-
-              done();
-            });
-
-            stream.write(contents);
-            stream.end();
-          })
-          .catch((err) => {
-            expect.fail(err);
+          stream.on('finish', () => {
+            stream.commit();
+          });
+
+          stream.on('committed', (hash) => {
+            // ... if file_info is not set here, there's an issue.
+            expect(file_info).to.have.property('mime_type', 'application/xml');
+            expect(file_info).to.have.property('ext', 'xml');
+            done();
           });
-      });
 
+          if (!stream.write(contents)) {
+            stream.once('drain', () => stream.end())
+          } else {
+            process.nextTick(() => stream.end())
+          }
+        })
+        .catch((err) => {
+          expect.fail(err);
+        });
     });
 
     it('can read a stream', (done) => {
       const contents = 'test-for-reading';
       create_known_object('foobar', contents, (store, hash) => {
         store.open('foobar', 'r')
-          .then((stream) => {
-            const data = read_all(stream);
+          .then(async (stream) => {
+            const data = await read_all(stream);
             expect(Buffer.compare(data, Buffer.from(contents))).to.equal(0);
             done();
           })
@@ -144,16 +141,12 @@ describe('storage/storage', () => {
       });
     });
 
-    // Problems with this test. reading the stream is stalling, so we are
-    // not always able to read the full stream for the test to make sense
-    // Disabling for now. Look at readl_all() implementation.. maybe that
-    // is where the fault is?
-    xit('detects the MIME type of a read stream', (done) => {
+    it('detects the MIME type of a read stream', (done) => {
       const contents = fs.readFileSync('../../storage-node_new.svg');
       create_known_object('foobar', contents, (store, hash) => {
         store.open('foobar', 'r')
-          .then((stream) => {
-            const data = read_all(stream);
+          .then(async (stream) => {
+            const data = await read_all(stream);
             expect(contents.length).to.equal(data.length);
             expect(Buffer.compare(data, contents)).to.equal(0);
             expect(stream).to.have.property('file_info');
@@ -173,8 +166,8 @@ describe('storage/storage', () => {
       const contents = 'test-for-reading';
       create_known_object('foobar', contents, (store, hash) => {
         store.open('foobar', 'r')
-          .then((stream) => {
-            const data = read_all(stream);
+          .then(async (stream) => {
+            const data = await read_all(stream);
             expect(Buffer.compare(data, Buffer.from(contents))).to.equal(0);
 
             expect(stream.file_info).to.have.property('mime_type', 'application/octet-stream');
@@ -203,7 +196,7 @@ describe('storage/storage', () => {
     it('returns stats for a known object', (done) => {
       const content = 'stat-test';
       const expected_size = content.length;
-      create_known_object('foobar', 'stat-test', (store, hash) => {
+      create_known_object('foobar', content, (store, hash) => {
         expect(store.stat(hash)).to.eventually.have.property('size', expected_size);
         done();
       });

+ 0 - 0
storage-node/packages/storage/test/template/bar


+ 0 - 0
storage-node/packages/storage/test/template/foo/baz


+ 0 - 1
storage-node/packages/storage/test/template/quux

@@ -1 +0,0 @@
-foo/baz

+ 19 - 0
storage-node/packages/util/externalPromise.js

@@ -0,0 +1,19 @@
+/**
+ * Returns an object that contains a Promise and exposes its handlers (i.e. its resolve and reject
+ * methods) so it can be fulfilled 'externally'. This is a bit of a hack, but its most useful
+ * application is when several concurrent async operations are all waiting on the same result value.
+ */
+function newExternallyControlledPromise () {
+    let resolve, reject
+
+    const promise = new Promise((res, rej) => {
+      resolve = res
+      reject = rej
+    })
+
+    return ({ resolve, reject, promise })
+}
+
+module.exports = {
+    newExternallyControlledPromise
+}
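As a usage sketch (not part of this PR), several concurrent waiters can share one externally controlled promise while a separate code path settles it once for all of them:

```javascript
// Same shape as the new util: a promise whose handlers are exposed to the caller
function newExternallyControlledPromise () {
  let resolve, reject
  const promise = new Promise((res, rej) => {
    resolve = res
    reject = rej
  })
  return { resolve, reject, promise }
}

async function main () {
  const external = newExternallyControlledPromise()

  // Two concurrent operations wait on the same pending result...
  const waiters = Promise.all([
    external.promise.then(v => `first:${v}`),
    external.promise.then(v => `second:${v}`)
  ])

  // ...until some other code path fulfils it once, for everyone.
  external.resolve('ready')

  console.log(await waiters) // [ 'first:ready', 'second:ready' ]
}

main()
```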

+ 2 - 1
storage-node/packages/util/package.json

@@ -1,5 +1,6 @@
 {
-  "name": "@joystream/util",
+  "name": "@joystream/storage-utils",
+  "private": true,
   "version": "0.1.0",
   "description": "Utility code for Joystream Storage Node",
   "author": "Joystream",

+ 1 - 1
storage-node/packages/util/test/fs/resolve.js

@@ -22,7 +22,7 @@ const mocha = require('mocha');
 const expect = require('chai').expect;
 const path = require('path');
 
-const resolve = require('@joystream/util/fs/resolve');
+const resolve = require('@joystream/storage-utils/fs/resolve');
 
 function tests(base)
 {

+ 1 - 1
storage-node/packages/util/test/fs/walk.js

@@ -25,7 +25,7 @@ const temp = require('temp').track();
 const fs = require('fs');
 const path = require('path');
 
-const fswalk = require('@joystream/util/fs/walk');
+const fswalk = require('@joystream/storage-utils/fs/walk');
 
 function walktest(archive, base, done)
 {

+ 1 - 1
storage-node/packages/util/test/lru.js

@@ -21,7 +21,7 @@
 const mocha = require('mocha');
 const expect = require('chai').expect;
 
-const lru = require('@joystream/util/lru');
+const lru = require('@joystream/storage-utils/lru');
 
 const DEFAULT_SLEEP = 1;
 function sleep(ms = DEFAULT_SLEEP)

+ 1 - 1
storage-node/packages/util/test/pagination.js

@@ -22,7 +22,7 @@ const mocha = require('mocha');
 const expect = require('chai').expect;
 const mock_http = require('node-mocks-http');
 
-const pagination = require('@joystream/util/pagination');
+const pagination = require('@joystream/storage-utils/pagination');
 
 describe('util/pagination', function()
 {

+ 1 - 1
storage-node/packages/util/test/ranges.js

@@ -23,7 +23,7 @@ const expect = require('chai').expect;
 const mock_http = require('node-mocks-http');
 const stream_buffers = require('stream-buffers');
 
-const ranges = require('@joystream/util/ranges');
+const ranges = require('@joystream/storage-utils/ranges');
 
 describe('util/ranges', function()
 {

+ 10 - 6
storage-node/scripts/compose/devchain-and-ipfs-node/docker-compose.yaml

@@ -3,14 +3,18 @@ services:
   ipfs:
     image: ipfs/go-ipfs:latest
     ports:
-      - "5001:5001"
+      - "127.0.0.1:5001:5001"
     volumes:
-      - storage-node-shared-data:/data/ipfs
+      - ipfs-data:/data/ipfs
   chain:
-    image: joystream/node:2.1.2
+    image: joystream/node:latest
     ports:
-      - "9944:9944"
-    command: --dev --ws-external
+      - "127.0.0.1:9944:9944"
+    volumes:
+      - chain-data:/data
+    command: --dev --ws-external --base-path /data
 volumes:
-  storage-node-shared-data:
+  ipfs-data:
+    driver: local
+  chain-data:
     driver: local

+ 39 - 0
storage-node/scripts/run-dev-instance.sh

@@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+set -e
+
+# Avoid pulling joystream/node from docker hub. It is most likely
+# not the version that we want to work with. Either build it locally
+# or pull it down manually if that is what you want.
+if ! docker inspect joystream/node:latest > /dev/null 2>&1;
+then
+  echo "Didn't find local joystream/node:latest docker image."
+  exit 1
+fi
+
+SCRIPT_PATH="$(dirname "${BASH_SOURCE[0]}")"
+
+# stop prior run and clear volumes
+docker-compose -f ${SCRIPT_PATH}/compose/devchain-and-ipfs-node/docker-compose.yaml down -v
+
+# Run a development joystream-node chain and ipfs daemon in the background.
+# This will use the latest joystream/node image,
+# fetching it from dockerhub if not found locally, so build it locally if
+# you need the version from the current branch.
+docker-compose -f ${SCRIPT_PATH}/compose/devchain-and-ipfs-node/docker-compose.yaml up -d
+
+# configure the dev chain
+DEBUG=joystream:storage-cli:dev yarn storage-cli dev-init
+
+# Run the tests
+yarn workspace storage-node test
+
+# Run the server in background
+# DEBUG=joystream:storage* yarn colossus --dev > ${SCRIPT_PATH}/colossus.log 2>&1 &
+# PID= ???
+# echo "Development storage node is running in the background, process id: ${PID}"
+# prompt for pressing ctrl-c..
+# kill colossus and docker containers...
+# docker-compose -f ${SCRIPT_PATH}/compose/devchain-and-ipfs-node/docker-compose.yaml down -v
+
+# Run the server
+DEBUG=joystream:* yarn colossus --dev

+ 7 - 0
storage-node/scripts/stop-dev-instance.sh

@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+set -e
+
+script_path="$(dirname "${BASH_SOURCE[0]}")"
+
+# stop prior run and clear volumes
+docker-compose -f ${script_path}/compose/devchain-and-ipfs-node/docker-compose.yaml down -v

+ 119 - 13
yarn.lock

@@ -1358,11 +1358,13 @@
 "@constantinople/types@./types", "@joystream/types@./types":
   version "0.11.0"
   dependencies:
+    "@polkadot/keyring" "^1.7.0-beta.5"
     "@polkadot/types" "^0.96.1"
     "@types/lodash" "^4.14.157"
     "@types/vfile" "^4.0.0"
     ajv "^6.11.0"
     lodash "^4.17.15"
+    moment "^2.24.0"
 
 "@csstools/convert-colors@^1.4.0":
   version "1.4.0"
@@ -1735,16 +1737,6 @@
     "@types/istanbul-reports" "^1.1.1"
     "@types/yargs" "^13.0.0"
 
-"@joystream/types@^0.10.0":
-  version "0.10.0"
-  resolved "https://registry.yarnpkg.com/@joystream/types/-/types-0.10.0.tgz#7e98ef221410b26a7d952cfc3d1c03d28395ad69"
-  integrity sha512-RDZizqGKWGYpLR5PnUWM4aGa7InpWNh2Txlr7Al3ROFYOHoyQf62/omPfEz29F6scwlFxysOdmEfQaLeVRaUxA==
-  dependencies:
-    "@polkadot/types" "^0.96.1"
-    "@types/vfile" "^4.0.0"
-    ajv "^6.11.0"
-    lodash "^4.17.15"
-
 "@ledgerhq/devices@^4.78.0":
   version "4.78.0"
   resolved "https://registry.yarnpkg.com/@ledgerhq/devices/-/devices-4.78.0.tgz#149b572f0616096e2bd5eb14ce14d0061c432be6"
@@ -3842,6 +3834,11 @@
   resolved "https://registry.yarnpkg.com/@types/minimatch/-/minimatch-3.0.3.tgz#3dca0e3f33b200fc7d1139c0cd96c1268cadfd9d"
   integrity sha512-tHq6qdbT9U1IRSGf14CL0pUlULksvY9OZ+5eEgl1N7t+OA3tGvNpxJCzuKQlsNgCVwbAs670L1vcVQi8j9HjnA==
 
+"@types/minimist@^1.2.0":
+  version "1.2.0"
+  resolved "https://registry.yarnpkg.com/@types/minimist/-/minimist-1.2.0.tgz#69a23a3ad29caf0097f06eda59b361ee2f0639f6"
+  integrity sha1-aaI6OtKcrwCX8G7aWbNh7i8GOfY=
+
 "@types/mkdirp@^0.5.2":
   version "0.5.2"
   resolved "https://registry.yarnpkg.com/@types/mkdirp/-/mkdirp-0.5.2.tgz#503aacfe5cc2703d5484326b1b27efa67a339c1f"
@@ -6584,6 +6581,15 @@ camelcase-keys@^4.0.0:
     map-obj "^2.0.0"
     quick-lru "^1.0.0"
 
+camelcase-keys@^6.2.2:
+  version "6.2.2"
+  resolved "https://registry.yarnpkg.com/camelcase-keys/-/camelcase-keys-6.2.2.tgz#5e755d6ba51aa223ec7d3d52f25778210f9dc3c0"
+  integrity sha512-YrwaA0vEKazPBkn0ipTiMpSajYDSe+KjQfrjhcBMxJt/znbvlHd8Pw/Vamaz5EB4Wfhs3SUR3Z9mwRu/P3s3Yg==
+  dependencies:
+    camelcase "^5.3.1"
+    map-obj "^4.0.0"
+    quick-lru "^4.0.1"
+
 camelcase@^2.0.0:
   version "2.1.1"
   resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-2.1.1.tgz#7c1d16d679a1bbe59ca02cacecfb011e201f5a1f"
@@ -6604,6 +6610,11 @@ camelcase@^5.0.0, camelcase@^5.2.0, camelcase@^5.3.1:
   resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-5.3.1.tgz#e3c9b31569e106811df242f715725a1f4c494320"
   integrity sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==
 
+camelcase@^6.0.0:
+  version "6.0.0"
+  resolved "https://registry.yarnpkg.com/camelcase/-/camelcase-6.0.0.tgz#5259f7c30e35e278f1bdc2a4d91230b37cad981e"
+  integrity sha512-8KMDF1Vz2gzOq54ONPJS65IvTUaB1cHJ2DMM7MbPmLZljDH1qpzzLsWdiN9pHh6qvkRVDTi/07+eNGch/oLU4w==
+
 camelize@^1.0.0:
   version "1.0.0"
   resolved "https://registry.yarnpkg.com/camelize/-/camelize-1.0.0.tgz#164a5483e630fa4321e5af07020e531831b2609b"
@@ -8236,7 +8247,7 @@ debuglog@^1.0.1:
   resolved "https://registry.yarnpkg.com/debuglog/-/debuglog-1.0.1.tgz#aa24ffb9ac3df9a2351837cfb2d279360cd78492"
   integrity sha1-qiT/uaw9+aI1GDfPstJ5NgzXhJI=
 
-decamelize-keys@^1.0.0:
+decamelize-keys@^1.0.0, decamelize-keys@^1.1.0:
   version "1.1.0"
   resolved "https://registry.yarnpkg.com/decamelize-keys/-/decamelize-keys-1.1.0.tgz#d171a87933252807eb3cb61dc1c1445d078df2d9"
   integrity sha1-0XGoeTMlKAfrPLYdwcFEXQeN8tk=
@@ -10443,7 +10454,7 @@ find-up@^2.0.0, find-up@^2.1.0:
   dependencies:
     locate-path "^2.0.0"
 
-find-up@^4.0.0:
+find-up@^4.0.0, find-up@^4.1.0:
   version "4.1.0"
   resolved "https://registry.yarnpkg.com/find-up/-/find-up-4.1.0.tgz#97afe7d6cdc0bc5928584b7c8d7b16e8a9aa5d19"
   integrity sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==
@@ -11397,6 +11408,11 @@ har-validator@~5.1.0:
     ajv "^6.5.5"
     har-schema "^2.0.0"
 
+hard-rejection@^2.1.0:
+  version "2.1.0"
+  resolved "https://registry.yarnpkg.com/hard-rejection/-/hard-rejection-2.1.0.tgz#1c6eda5c1685c63942766d79bb40ae773cecd883"
+  integrity sha512-VIZB+ibDhx7ObhAe7OVtoEbuP4h/MuOTHJ+J8h/eBXotJYl0fBgR72xDFCKgIh22OJZIOVNxBMWuhAr10r8HdA==
+
 has-ansi@^2.0.0:
   version "2.0.0"
   resolved "https://registry.yarnpkg.com/has-ansi/-/has-ansi-2.0.0.tgz#34f5049ce1ecdf2b0649af3ef24e45ed35416d91"
@@ -14041,6 +14057,11 @@ kind-of@^6.0.0, kind-of@^6.0.2:
   resolved "https://registry.yarnpkg.com/kind-of/-/kind-of-6.0.2.tgz#01146b36a6218e64e58f3a8d66de5d7fc6f6d051"
   integrity sha512-s5kLOcnH0XqDO+FvuaLX8DDjZ18CGFk7VygH40QoKPUQhW4e2rvM0rwUq0t8IQDOwYSeLK01U90OjzBTme2QqA==
 
+kind-of@^6.0.3:
+  version "6.0.3"
+  resolved "https://registry.yarnpkg.com/kind-of/-/kind-of-6.0.3.tgz#07c05034a6c349fa06e24fa35aa76db4580ce4dd"
+  integrity sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==
+
 klaw@^1.0.0:
   version "1.3.1"
   resolved "https://registry.yarnpkg.com/klaw/-/klaw-1.3.1.tgz#4088433b46b3b1ba259d78785d8e96f73ba02439"
@@ -14882,6 +14903,11 @@ map-obj@^2.0.0:
   resolved "https://registry.yarnpkg.com/map-obj/-/map-obj-2.0.0.tgz#a65cd29087a92598b8791257a523e021222ac1f9"
   integrity sha1-plzSkIepJZi4eRJXpSPgISIqwfk=
 
+map-obj@^4.0.0:
+  version "4.1.0"
+  resolved "https://registry.yarnpkg.com/map-obj/-/map-obj-4.1.0.tgz#b91221b542734b9f14256c0132c897c5d7256fd5"
+  integrity sha512-glc9y00wgtwcDmp7GaE/0b0OnxpNJsVf3ael/An6Fe2Q51LLwN1er6sdomLRzz5h0+yMpiYLhWYF5R7HeqVd4g==
+
 map-or-similar@^1.5.0:
   version "1.5.0"
   resolved "https://registry.yarnpkg.com/map-or-similar/-/map-or-similar-1.5.0.tgz#6de2653174adfb5d9edc33c69d3e92a1b76faf08"
@@ -15115,6 +15141,25 @@ meow@^5.0.0:
     trim-newlines "^2.0.0"
     yargs-parser "^10.0.0"
 
+meow@^7.0.1:
+  version "7.0.1"
+  resolved "https://registry.yarnpkg.com/meow/-/meow-7.0.1.tgz#1ed4a0a50b3844b451369c48362eb0515f04c1dc"
+  integrity sha512-tBKIQqVrAHqwit0vfuFPY3LlzJYkEOFyKa3bPgxzNl6q/RtN8KQ+ALYEASYuFayzSAsjlhXj/JZ10rH85Q6TUw==
+  dependencies:
+    "@types/minimist" "^1.2.0"
+    arrify "^2.0.1"
+    camelcase "^6.0.0"
+    camelcase-keys "^6.2.2"
+    decamelize-keys "^1.1.0"
+    hard-rejection "^2.1.0"
+    minimist-options "^4.0.2"
+    normalize-package-data "^2.5.0"
+    read-pkg-up "^7.0.1"
+    redent "^3.0.0"
+    trim-newlines "^3.0.0"
+    type-fest "^0.13.1"
+    yargs-parser "^18.1.3"
+
 merge-anything@^2.2.4:
   version "2.4.3"
   resolved "https://registry.yarnpkg.com/merge-anything/-/merge-anything-2.4.3.tgz#a5689b823c88d0c712fd2916bd1e1b4c3533cad8"
@@ -15273,6 +15318,11 @@ min-document@^2.19.0:
   dependencies:
     dom-walk "^0.1.0"
 
+min-indent@^1.0.0:
+  version "1.0.1"
+  resolved "https://registry.yarnpkg.com/min-indent/-/min-indent-1.0.1.tgz#a63f681673b30571fbe8bc25686ae746eefa9869"
+  integrity sha512-I9jwMn07Sy/IwOj3zVkVik2JTvgpaykDZEigL6Rx6N9LbMywwUSMtxET+7lVoDLLd3O3IXwJwvuuns8UB/HeAg==
+
 mini-create-react-context@^0.3.0:
   version "0.3.2"
   resolved "https://registry.yarnpkg.com/mini-create-react-context/-/mini-create-react-context-0.3.2.tgz#79fc598f283dd623da8e088b05db8cddab250189"
@@ -15345,6 +15395,15 @@ minimist-options@^3.0.1:
     arrify "^1.0.1"
     is-plain-obj "^1.1.0"
 
+minimist-options@^4.0.2:
+  version "4.1.0"
+  resolved "https://registry.yarnpkg.com/minimist-options/-/minimist-options-4.1.0.tgz#c0655713c53a8a2ebd77ffa247d342c40f010619"
+  integrity sha512-Q4r8ghd80yhO/0j1O3B2BjweX3fiHg9cdOwjJd2J76Q135c+NDxGCqdYKQ1SKBuFfgWbAUzBfvYjPUEeNgqN1A==
+  dependencies:
+    arrify "^1.0.1"
+    is-plain-obj "^1.1.0"
+    kind-of "^6.0.3"
+
 minimist@0.0.8:
   version "0.0.8"
   resolved "https://registry.yarnpkg.com/minimist/-/minimist-0.0.8.tgz#857fcabfc3397d2625b8228262e86aa7a011b05d"
@@ -18641,6 +18700,11 @@ quick-lru@^1.0.0:
   resolved "https://registry.yarnpkg.com/quick-lru/-/quick-lru-1.1.0.tgz#4360b17c61136ad38078397ff11416e186dcfbb8"
   integrity sha1-Q2CxfGETatOAeDl/8RQW4Ybc+7g=
 
+quick-lru@^4.0.1:
+  version "4.0.1"
+  resolved "https://registry.yarnpkg.com/quick-lru/-/quick-lru-4.0.1.tgz#5b8878f113a58217848c6482026c73e1ba57727f"
+  integrity sha512-ARhCpm70fzdcvNQfPoy49IaanKkTlRWF2JMzqhcJbhSFRZv7nPTvZJdcY7301IPmvW+/p0RgIWnQDLJxifsQ7g==
+
 rabin@^1.6.0:
   version "1.6.0"
   resolved "https://registry.yarnpkg.com/rabin/-/rabin-1.6.0.tgz#e05690b13056f08c80098e3ad71b90530038e355"
@@ -19262,6 +19326,15 @@ read-pkg-up@^6.0.0:
     read-pkg "^5.1.1"
     type-fest "^0.5.0"
 
+read-pkg-up@^7.0.1:
+  version "7.0.1"
+  resolved "https://registry.yarnpkg.com/read-pkg-up/-/read-pkg-up-7.0.1.tgz#f3a6135758459733ae2b95638056e1854e7ef507"
+  integrity sha512-zK0TB7Xd6JpCLmlLmufqykGE+/TlOePD6qKClNW7hHDKFh/J7/7gCWGR7joEQEW1bKq3a3yUZSObOoWLFQ4ohg==
+  dependencies:
+    find-up "^4.1.0"
+    read-pkg "^5.2.0"
+    type-fest "^0.8.1"
+
 read-pkg@^1.0.0:
   version "1.1.0"
   resolved "https://registry.yarnpkg.com/read-pkg/-/read-pkg-1.1.0.tgz#f5ffaa5ecd29cb31c0474bca7d756b6bb29e3f28"
@@ -19289,7 +19362,7 @@ read-pkg@^3.0.0:
     normalize-package-data "^2.3.2"
     path-type "^3.0.0"
 
-read-pkg@^5.1.1:
+read-pkg@^5.1.1, read-pkg@^5.2.0:
   version "5.2.0"
   resolved "https://registry.yarnpkg.com/read-pkg/-/read-pkg-5.2.0.tgz#7bf295438ca5a33e56cd30e053b34ee7250c93cc"
   integrity sha512-Ug69mNOpfvKDAc2Q8DRpMjjzdtrnv9HcSMX+4VsZxD1aZ6ZzrIE7rlzXBtWTyhULSMKg076AW6WR5iZpD0JiOg==
@@ -19447,6 +19520,14 @@ redent@^2.0.0:
     indent-string "^3.0.0"
     strip-indent "^2.0.0"
 
+redent@^3.0.0:
+  version "3.0.0"
+  resolved "https://registry.yarnpkg.com/redent/-/redent-3.0.0.tgz#e557b7998316bb53c9f1f56fa626352c6963059f"
+  integrity sha512-6tDA8g98We0zd0GvVeMT9arEOnTw9qM03L9cJXaCjrip1OO764RDBLBfrB4cwzNGDj5OA5ioymC9GkizgWJDUg==
+  dependencies:
+    indent-string "^4.0.0"
+    strip-indent "^3.0.0"
+
 redeyed@~2.1.0:
   version "2.1.1"
   resolved "https://registry.yarnpkg.com/redeyed/-/redeyed-2.1.1.tgz#8984b5815d99cb220469c99eeeffe38913e6cc0b"
@@ -21261,6 +21342,13 @@ strip-indent@^2.0.0:
   resolved "https://registry.yarnpkg.com/strip-indent/-/strip-indent-2.0.0.tgz#5ef8db295d01e6ed6cbf7aab96998d7822527b68"
   integrity sha1-XvjbKV0B5u1sv3qrlpmNeCJSe2g=
 
+strip-indent@^3.0.0:
+  version "3.0.0"
+  resolved "https://registry.yarnpkg.com/strip-indent/-/strip-indent-3.0.0.tgz#c32e1cee940b6b3432c771bc2c54bcce73cd3001"
+  integrity sha512-laJTa3Jb+VQpaC6DseHhF7dXVqHTfJPCRDaEbid/drOhgitgYku/letMUqOXFoWV0zIIUbjpdH2t+tYj4bQMRQ==
+  dependencies:
+    min-indent "^1.0.0"
+
 strip-json-comments@^2.0.1, strip-json-comments@~2.0.1:
   version "2.0.1"
   resolved "https://registry.yarnpkg.com/strip-json-comments/-/strip-json-comments-2.0.1.tgz#3c531942e908c2697c0ec344858c286c7ca0a60a"
@@ -22086,6 +22174,11 @@ trim-newlines@^2.0.0:
   resolved "https://registry.yarnpkg.com/trim-newlines/-/trim-newlines-2.0.0.tgz#b403d0b91be50c331dfc4b82eeceb22c3de16d20"
   integrity sha1-tAPQuRvlDDMd/EuC7s6yLD3hbSA=
 
+trim-newlines@^3.0.0:
+  version "3.0.0"
+  resolved "https://registry.yarnpkg.com/trim-newlines/-/trim-newlines-3.0.0.tgz#79726304a6a898aa8373427298d54c2ee8b1cb30"
+  integrity sha512-C4+gOpvmxaSMKuEf9Qc134F1ZuOHVXKRbtEflf4NTtuuJDEIJ9p5PXsalL8SkeRw+qit1Mo+yuvMPAKwWg/1hA==
+
 trim-off-newlines@^1.0.0:
   version "1.0.1"
   resolved "https://registry.yarnpkg.com/trim-off-newlines/-/trim-off-newlines-1.0.1.tgz#9f9ba9d9efa8764c387698bcbfeb2c848f11adb3"
@@ -22284,6 +22377,11 @@ type-fest@^0.11.0:
   resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.11.0.tgz#97abf0872310fed88a5c466b25681576145e33f1"
   integrity sha512-OdjXJxnCN1AvyLSzeKIgXTXxV+99ZuXl3Hpo9XpJAv9MBcHrrJOQ5kV7ypXOuQie+AmWG25hLbiKdwYTifzcfQ==
 
+type-fest@^0.13.1:
+  version "0.13.1"
+  resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.13.1.tgz#0172cb5bce80b0bd542ea348db50c7e21834d934"
+  integrity sha512-34R7HTnG0XIJcBSn5XhDd7nNFPRcXYRZrBB2O2jdKqYODldSzBAqzsWoZYYvduky73toYS/ESqxPvkDf/F0XMg==
+
 type-fest@^0.3.0:
   version "0.3.1"
   resolved "https://registry.yarnpkg.com/type-fest/-/type-fest-0.3.1.tgz#63d00d204e059474fe5e1b7c011112bbd1dc29e1"
@@ -23818,6 +23916,14 @@ yargs-parser@^15.0.0:
     camelcase "^5.0.0"
     decamelize "^1.2.0"
 
+yargs-parser@^18.1.3:
+  version "18.1.3"
+  resolved "https://registry.yarnpkg.com/yargs-parser/-/yargs-parser-18.1.3.tgz#be68c4975c6b2abf469236b0c870362fab09a7b0"
+  integrity sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==
+  dependencies:
+    camelcase "^5.0.0"
+    decamelize "^1.2.0"
+
 yargs-parser@^5.0.0:
   version "5.0.0"
   resolved "https://registry.yarnpkg.com/yargs-parser/-/yargs-parser-5.0.0.tgz#275ecf0d7ffe05c77e64e7c86e4cd94bf0e1228a"