#5 frontend storage performance test

Closed
Opened 3 years ago by l1dev · 1 comment
l1dev commented 3 years ago

website for client side performance auditing of storage nodes

For example: X storage objects are selected, the website tries to fetch each object from every provider, records the latency and download speed of each request, and presents the results in a table or charts. Everything runs client side; only the query node is needed to get basic info from the chain.

  • pick a random sample of objects (~50, configurable)
  • take more than one measurement per provider to even out per-request variability
  • let the user redo the analysis with a chosen number of objects
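The measurement loop above could be sketched roughly as follows. This is a minimal client-side sketch, not the implementation: the asset URL pattern (`/asset/v0/…`) and `measureProvider` are hypothetical, and the median is used to damp per-request variability.

```typescript
// Pure helper: bytes over milliseconds -> megabits per second.
function downSpeedMbps(bytes: number, ms: number): number {
  return (bytes * 8) / (ms / 1000) / 1e6;
}

// Pure helper: median of repeated measurements, to even out random
// variability from one request to another.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Hypothetical measurement loop: fetch each sampled object from one
// provider several times, record elapsed time and size, aggregate.
async function measureProvider(
  baseUrl: string,
  objectIds: string[],
  repeats = 3,
) {
  const latencies: number[] = [];
  const speeds: number[] = [];
  for (const id of objectIds) {
    for (let i = 0; i < repeats; i++) {
      const start = performance.now();
      const res = await fetch(`${baseUrl}/asset/v0/${id}`); // assumed path
      const buf = await res.arrayBuffer();
      const elapsed = performance.now() - start;
      latencies.push(elapsed);
      speeds.push(downSpeedMbps(buf.byteLength, elapsed));
    }
  }
  return { latencyMs: median(latencies), downSpeedMbps: median(speeds) };
}
```

For example, 1 MB downloaded in one second comes out to `downSpeedMbps(1_000_000, 1000)` = 8 Mbps.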

# presentation

visualize results with graphs

# performance

use [hydra](https://hydra.joystream.org/graphql)

```
query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}){
    metadata
  }
}
```
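Running that query from the browser could look like the sketch below. The endpoint URL and query are from the issue; the assumption that `metadata` holds the provider's base URL, and the `extractEndpoints` helper, are illustrative only.

```typescript
// Hydra query-node endpoint, as linked in the issue.
const HYDRA_URL = "https://hydra.joystream.org/graphql";

const QUERY = `query {
  workers(where: {metadata_contains: "http", isActive_eq: true, type_eq: STORAGE}){
    metadata
  }
}`;

// Pure helper: pull provider base URLs out of a Hydra response body.
// Assumes worker metadata is the provider's HTTP endpoint.
function extractEndpoints(body: {
  data: { workers: { metadata: string }[] };
}): string[] {
  return body.data.workers.map((w) => w.metadata.trim());
}

// Standard GraphQL-over-HTTP POST; no auth needed for a public query node.
async function fetchStorageEndpoints(): Promise<string[]> {
  const res = await fetch(HYDRA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: QUERY }),
  });
  return extractEndpoints(await res.json());
}
```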

# future

files are not split up, so there is no erasure coding (where a file is split into N pieces and any M < N of them suffice to reconstruct it). We do pure redundant replication: the same file is stored in full x number of times, but not by every storage provider, as is done currently.
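The storage-cost difference between the two schemes can be made concrete with a small calculation, assuming equal-sized shards for the erasure-coded case:

```typescript
// Pure replication: the file is stored in full `copies` times,
// so total bytes stored / file size = copies.
function replicationOverhead(copies: number): number {
  return copies;
}

// (N, M) erasure coding: N shards, each 1/M of the file, and any
// M of the N shards suffice to reconstruct it.
function erasureOverhead(n: number, m: number): number {
  return n / m;
}

// e.g. 3x replication costs 3.0x the file size and tolerates 2 lost
// copies; a (6, 4) code also tolerates 2 lost shards at only 1.5x.
```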

l1dev commented 3 years ago
Owner

https://joystreamstats.live/storage