Work storage /cluster/work/ partially available (10 May 2020)


This morning a storage controller crashed, affecting the /cluster/work storage system. Parts of /cluster/work are therefore temporarily unavailable. Our storage specialists are in close contact with the vendor and are working to bring the storage system back online as quickly as possible.

Please note that only some users are affected by this incident, not all. We will update this news item as soon as we have more information.

Affected volumes:

  • /cluster/work/anasta
  • /cluster/work/beltrao
  • /cluster/work/bewi
  • /cluster/work/biol
  • /cluster/work/bmlbb
  • /cluster/work/borgw
  • /cluster/work/bsse_sdsc
  • /cluster/work/cemk
  • /cluster/work/chenp
  • /cluster/work/cobi
  • /cluster/work/compmech
  • /cluster/work/coss
  • /cluster/work/cotterell
  • /cluster/work/cpesm
  • /cluster/work/demello
  • /cluster/work/drzrh
  • /cluster/work/faist
  • /cluster/work/fcoletti
  • /cluster/work/flatt
  • /cluster/work/gdc
  • /cluster/work/gess
  • /cluster/work/gfb
  • /cluster/work/grewe
  • /cluster/work/hahnl
  • /cluster/work/harra
  • /cluster/work/hilliges
  • /cluster/work/hora
  • /cluster/work/ibk_chatzi
  • /cluster/work/ifd
  • /cluster/work/igp_psr
  • /cluster/work/igt_tunnel
  • /cluster/work/infk_mtc
  • /cluster/work/itphys
  • /cluster/work/ivt_vpl
  • /cluster/work/jesch
  • /cluster/work/karlen
  • /cluster/work/kovalenko
  • /cluster/work/krek
  • /cluster/work/kurtcuoglu
  • /cluster/work/lav
  • /cluster/work/lke
  • /cluster/work/lpc
  • /cluster/work/mandm
  • /cluster/work/mansuy
  • /cluster/work/math
  • /cluster/work/moor
  • /cluster/work/nenad
  • /cluster/work/nme
  • /cluster/work/pacbio
  • /cluster/work/pausch
  • /cluster/work/petro
  • /cluster/work/pueschel
  • /cluster/work/puzrin
  • /cluster/work/qchem
  • /cluster/work/reddy
  • /cluster/work/refcosmo
  • /cluster/work/reiher
  • /cluster/work/riner
  • /cluster/work/rjeremy
  • /cluster/work/rre
  • /cluster/work/rsl
  • /cluster/work/sachan
  • /cluster/work/sorkine
  • /cluster/work/sornette
  • /cluster/work/stocke
  • /cluster/work/swissloop
  • /cluster/work/woern
  • /cluster/work/yang

Volumes not affected:

  • /cluster/work/climate
  • /cluster/work/cmbm
  • /cluster/work/cvl
  • /cluster/work/gfd
  • /cluster/work/igc
  • /cluster/work/magna
  • /cluster/work/noiray
  • /cluster/work/refregier
  • /cluster/work/tnu
  • /cluster/work/wenderoth
  • /cluster/work/zhang
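
If you are unsure whether your group's workspace sits on one of the affected volumes, the following minimal Python sketch checks a path against the list above. The path /cluster/work/yourgroup is a placeholder for your own group's volume, and the set of affected names is copied from this news item, so it may change as the incident evolves. Note that directly accessing an affected volume may hang rather than return an error, which is why the sketch only inspects the path prefix instead of touching the file system.

  import os

  # Affected /cluster/work volumes, copied from the list in this news item.
  AFFECTED = {
      "anasta", "beltrao", "bewi", "biol", "bmlbb", "borgw", "bsse_sdsc",
      "cemk", "chenp", "cobi", "compmech", "coss", "cotterell", "cpesm",
      "demello", "drzrh", "faist", "fcoletti", "flatt", "gdc", "gess",
      "gfb", "grewe", "hahnl", "harra", "hilliges", "hora", "ibk_chatzi",
      "ifd", "igp_psr", "igt_tunnel", "infk_mtc", "itphys", "ivt_vpl",
      "jesch", "karlen", "kovalenko", "krek", "kurtcuoglu", "lav", "lke",
      "lpc", "mandm", "mansuy", "math", "moor", "nenad", "nme", "pacbio",
      "pausch", "petro", "pueschel", "puzrin", "qchem", "reddy",
      "refcosmo", "reiher", "riner", "rjeremy", "rre", "rsl", "sachan",
      "sorkine", "sornette", "stocke", "swissloop", "woern", "yang",
  }

  def is_affected(path):
      """Return True if the path lies on one of the affected work volumes."""
      parts = os.path.normpath(path).split(os.sep)
      # Expect paths of the form /cluster/work/<volume>/...
      return len(parts) >= 4 and parts[1:3] == ["cluster", "work"] and parts[3] in AFFECTED

  # Hypothetical example path; replace "yourgroup" with your group's volume name.
  print(is_affected("/cluster/work/yourgroup/data"))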

We are sorry for the inconvenience.

Updates