#5 frontend storage performance test

Closed
l1dev opened 3 years ago · 1 comment
l1dev commented 3 years ago

A website for client-side performance auditing of storage nodes.

For example: X storage objects are selected, then the website tries to fetch those objects from all providers, records the latency and downspeed of each, and shows the results in a nice table or charts. Everything runs client-side; only the query node is needed to get basic info from the chain.

  • pick some random number of objects (~50, variable)
  • more than one measurement per provider to even out random variability from one request to another
  • allow user to redo the analysis with chosen number of objects
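The measurement loop described above could be sketched roughly as follows. This is a minimal illustration, not the implementation: the `Sample` shape, function names, and the time-to-first-byte/downspeed split are assumptions. Latency is taken as time until headers arrive, downspeed as body bytes over the remaining download time, and repeated samples per provider are summarized (median latency, mean downspeed) to even out per-request variability.

```typescript
// Illustrative sketch of the client-side measurement loop (names assumed).

interface Sample { latencyMs: number; downspeedBps: number }

// Pure helper: median latency and mean downspeed over repeated samples,
// to even out random variability from one request to another.
function summarize(samples: Sample[]): Sample {
  const lat = samples.map(s => s.latencyMs).sort((a, b) => a - b);
  const mid = Math.floor(lat.length / 2);
  const latencyMs = lat.length % 2 ? lat[mid] : (lat[mid - 1] + lat[mid]) / 2;
  const downspeedBps =
    samples.reduce((acc, s) => acc + s.downspeedBps, 0) / samples.length;
  return { latencyMs, downspeedBps };
}

// One measurement of one object from one provider (browser/Node fetch API).
async function measureOnce(url: string): Promise<Sample> {
  const t0 = performance.now();
  const res = await fetch(url);
  const tFirstByte = performance.now();   // headers received
  const body = await res.arrayBuffer();   // full body downloaded
  const t1 = performance.now();
  const downloadSec = Math.max((t1 - tFirstByte) / 1000, 1e-3);
  return {
    latencyMs: tFirstByte - t0,
    downspeedBps: body.byteLength / downloadSec,
  };
}
```

`measureOnce` would be called a few times per (object, provider) pair, and each provider's samples fed through `summarize` before charting.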

# presentation

visualize results with graphs

# performance

use [hydra](https://hydra.joystream.org/graphql)

```
query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}){
    metadata
  }
}
```
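Posting that query to the hydra query node and pulling provider endpoints out of the worker metadata could look like the sketch below. The GraphQL-over-HTTP POST shape is standard, but the assumption that `metadata` holds an http(s) endpoint URL directly is just that — an assumption, matching the `metadata_contains: "http"` filter in the query.

```typescript
// Sketch: fetch storage-provider endpoints from the hydra query node.

const QUERY_NODE = "https://hydra.joystream.org/graphql";
const QUERY = `query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}) {
    metadata
  }
}`;

// Pure helper: keep only metadata entries that look like http(s) URLs.
function extractEndpoints(metadata: string[]): string[] {
  return metadata
    .map(m => m.trim())
    .filter(m => /^https?:\/\//.test(m));
}

async function fetchProviderEndpoints(): Promise<string[]> {
  const res = await fetch(QUERY_NODE, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: QUERY }),
  });
  const { data } = await res.json();
  return extractEndpoints(
    data.workers.map((w: { metadata: string }) => w.metadata),
  );
}
```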

# future

Files are not split up, so there is no erasure coding (where a file is split into N pieces and only M < N of them are needed to reconstruct it). We do pure redundant replication: the same file is stored in full x times, though not by all storage providers as is done currently.
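To make the trade-off concrete, here is a tiny sketch of the storage-overhead arithmetic under stated assumptions: full replication with factor x stores x bytes per byte of data, while (N, M) erasure coding stores N pieces each of size 1/M of the file, giving N/M bytes per byte. The numbers in the test are illustrative, not network parameters.

```typescript
// Storage overhead (stored bytes per byte of data) for the two schemes.

// Pure replication: the same file is stored in full x times.
function replicationOverhead(x: number): number {
  return x;
}

// (N, M) erasure coding: N pieces of size 1/M each; any M reconstruct the file.
function erasureOverhead(n: number, m: number): number {
  if (m <= 0 || m > n) throw new Error("need 0 < M <= N");
  return n / m;
}
```

For example, 3x replication costs 3x the storage, while a 12-of-8 code tolerating 4 lost pieces costs only 1.5x.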

l1dev commented 3 years ago
Owner
https://joystreamstats.live/storage