Service degradation Sep 10 & Sep 11, 2018


During the past two days (Sep 10, 7–12 pm; Sep 11, 3–9:30 pm), we experienced a sudden and massive increase in request load on the instance. This resulted in unexpected user interface behaviour, and some users reported timeouts or very slow response times during the incident windows. The request load also generated a ripple effect as devices offloaded their queued data once the service recovered.

Due to the particular nature of the incident, our active/active/active setup (which runs in three independent data centers) was not triggered. The unexpected load was generated in a simulated fashion by a customer and did not reflect real IoT traffic behaviour. We are applying technical measures to prevent such load in the future, as it does not conform to our terms of service.

We regret the incident and apologize for any inconvenience this situation may have caused. As a result, our service level for API availability over the previous 30 days is currently 99.4%, below our 99.9% target. However, we would like to highlight that no data from production devices was lost.
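For context, the availability figures above follow from simple 30-day window arithmetic. The sketch below is illustrative only; the helper names are ours and not part of any actual SLA tooling:

```python
# Illustrative 30-day availability arithmetic (not actual SLA tooling).

MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200 minutes in the rolling window

def availability(downtime_minutes: float,
                 window_minutes: float = MINUTES_PER_30_DAYS) -> float:
    """Availability as a percentage over the given window."""
    return 100.0 * (1 - downtime_minutes / window_minutes)

def allowed_downtime(target_pct: float,
                     window_minutes: float = MINUTES_PER_30_DAYS) -> float:
    """Minutes of downtime permitted by an availability target."""
    return window_minutes * (1 - target_pct / 100.0)

print(allowed_downtime(99.9))  # the 99.9% target allows ~43.2 min of downtime
print(allowed_downtime(99.4))  # 99.4% corresponds to ~259 min of downtime
```

In other words, a 99.9% monthly target leaves roughly 43 minutes of downtime, so multi-hour incident windows like these push the rolling figure well below target.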
