Work storage /cluster/work/ partially available (10 May 2022)

Revision as of 11:17, 10 May 2022

This morning a storage controller crashed, affecting the /cluster/work storage. Parts of /cluster/work/ are temporarily unavailable. Our storage specialists are in close contact with the vendor and are working to bring the storage system back as fast as possible.

Please note that only some users are affected by this incident, not all; a quick self-check for your own volume is sketched after the lists below.

Affected volumes:

  • /cluster/work/anasta
  • /cluster/work/beltrao
  • /cluster/work/bewi
  • /cluster/work/biol
  • /cluster/work/bmlbb
  • /cluster/work/borgw
  • /cluster/work/bsse_sdsc
  • /cluster/work/cemk
  • /cluster/work/chenp
  • /cluster/work/cobi
  • /cluster/work/compmech
  • /cluster/work/coss
  • /cluster/work/cotterell
  • /cluster/work/cpesm
  • /cluster/work/demello
  • /cluster/work/drzrh
  • /cluster/work/faist
  • /cluster/work/fcoletti
  • /cluster/work/flatt
  • /cluster/work/gdc
  • /cluster/work/gess
  • /cluster/work/gfb
  • /cluster/work/grewe
  • /cluster/work/hahnl
  • /cluster/work/harra
  • /cluster/work/hilliges
  • /cluster/work/hora
  • /cluster/work/ibk_chatzi
  • /cluster/work/ifd
  • /cluster/work/igp_psr
  • /cluster/work/igt_tunnel
  • /cluster/work/infk_mtc
  • /cluster/work/itphys
  • /cluster/work/ivt_vpl
  • /cluster/work/jesch
  • /cluster/work/karlen
  • /cluster/work/kovalenko
  • /cluster/work/krek
  • /cluster/work/kurtcuoglu
  • /cluster/work/lav
  • /cluster/work/lke
  • /cluster/work/lpc
  • /cluster/work/mandm
  • /cluster/work/mansuy
  • /cluster/work/math
  • /cluster/work/moor
  • /cluster/work/nenad
  • /cluster/work/nme
  • /cluster/work/pacbio
  • /cluster/work/pausch
  • /cluster/work/petro
  • /cluster/work/pueschel
  • /cluster/work/puzrin
  • /cluster/work/qchem
  • /cluster/work/reddy
  • /cluster/work/refcosmo
  • /cluster/work/reiher
  • /cluster/work/riner
  • /cluster/work/rjeremy
  • /cluster/work/rre
  • /cluster/work/rsl
  • /cluster/work/sachan
  • /cluster/work/sorkine
  • /cluster/work/sornette
  • /cluster/work/stocke
  • /cluster/work/swissloop
  • /cluster/work/woern
  • /cluster/work/yang

Volumes not affected:

  • /cluster/work/climate
  • /cluster/work/cmbm
  • /cluster/work/cvl
  • /cluster/work/gfd
  • /cluster/work/igc
  • /cluster/work/magna
  • /cluster/work/noiray
  • /cluster/work/refregier
  • /cluster/work/tnu
  • /cluster/work/wenderoth
  • /cluster/work/zhang
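
If you want to verify whether your group's volume currently responds, a minimal sketch along the following lines can help. The volume name /cluster/work/example and the 10-second timeout are placeholders, not part of this announcement; since a crashed storage controller typically makes I/O hang rather than fail outright, the probe runs in a child process with a timeout.

  #!/usr/bin/env python3
  # Minimal sketch: probe whether a /cluster/work volume responds.
  # "/cluster/work/example" is a placeholder; substitute your group's volume.
  import subprocess
  import sys

  volume = "/cluster/work/example"  # placeholder volume name

  try:
      # Run "ls" as a child process and give up after 10 seconds
      # (an arbitrary cutoff) instead of waiting indefinitely.
      subprocess.run(["ls", volume],
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL,
                     timeout=10, check=True)
      print(f"{volume} responds")
  except subprocess.TimeoutExpired:
      # Note: a child stuck in uninterruptible I/O may still delay exit.
      print(f"{volume} does not respond (timed out)")
      sys.exit(1)
  except subprocess.CalledProcessError:
      print(f"{volume} is not accessible")
      sys.exit(1)

If several volumes matter to you, loop over them; the lists above remain the authoritative source for which volumes are affected.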

We will update this news item whenever new information becomes available.

We are sorry for the inconvenience.

Updates

2022-05-10 13:20
The problem with the storage controller could not be fixed; the controller needs to be replaced. We do not yet know how long it will take until /cluster/work is back to normal operation (our current estimate is 24 to 96 hours). After the replacement we will also run integrity checks on the data. We will publish another update later this afternoon.