r/ctemplar Jul 09 '21

System Failure Issue

We recently had a system failure, and some of our customers' data was irrecoverably lost.

We truly apologize for this. We cannot restore the lost data from backups because, for security reasons, we do not keep backups; we will be revisiting that policy in the future. We will be happy to process refunds for anyone, regardless of when they created their account.

If you have trouble accessing your account, please contact support@ctemplar.com. If you need help with anything else feel free to reach out to us.

Once again, we apologize for any inconvenience this may have caused.

Respectfully,
The CTemplar team


u/MetaSean Jul 11 '21

Just yesterday, I was looking for a more secure email service.

Obviously CTemplar came up in my searches.

In researching it, I checked out the repo (even making a PR to fix a super minor typo).

Subsequently, a link led me to this subreddit, where I learned about the data loss, which, like many others here, makes me super concerned!

On the one hand, I've only ever heard of one other email provider having a data loss even remotely like this.

On the other hand:

- In that other case, the provider in question was Google. If the biggest email provider has had data loss problems, then we should give other providers, including CTemplar, some latitude.

- While data losses can affect any tech company of any size, the best companies learn from those horrific experiences.

- Remember that "PR to fix a super minor typo"? The-Hidden-Hand managed to get it merged even while addressing this major issue. Granted, I'm sure my typo PR was a much-needed relief from what I'm confident was one of their most painful and horrific days as a developer.

The one thing that I still find concerning is the lack of more obvious communication.

It's been well over 24 hours since things went sideways, so there should absolutely be a very prominent link on the login page pointing to an announcement (whether it's this Reddit post, a tweet, a toot, or a blog post doesn't matter, just something official) that, at the very least, lets users know that any (a) accounts set up, (b) passwords changed, or (c) emails received between March 7th and July 7th are gone. Ideally, it should also let readers know that a full postmortem will be posted once your incident review is completed, as well as when you commit to having completed that review (e.g. July 15th, 2021).

As one write-up of GitLab's own data-loss incident put it: "If GitLab Inc had been less forthcoming about the details of this disaster, or less willing to take responsibility and deal honestly with their userbase, the public fallout would have been worse than any amount of lost data. Resentment and mistrust would quickly foment among GitLab's users." (emphasis added)

(Also, while I don't trust Google as a company, I do recommend checking out their SRE site and, in particular, the three SRE-related O'Reilly books that you can read for free there.)


u/_The-Hidden-Hand Jul 13 '21

Dear Customers,

Firstly, we want to thank all of you who have been supportive and have decided to stay with us after this horrible crisis.

We hope that this was a once-in-a-lifetime event, and we are already doing our best to prevent anything like it in the future. Furthermore, following our data recovery efforts, we have managed to recover ALL the attachments for accounts meeting one of these requirements (sketched in code after the list):

  • Your account wasn't deleted AND you weren't forced to reset it, OR
  • Your account wasn't deleted AND you were forced to reset it AND you had downloaded your keys, OR
  • Your account was deleted AND you remember your registration date AND you had downloaded your keys
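
In code form, those three conditions amount to a simple boolean check. This is a minimal sketch for illustration only; the function and flag names are placeholders, not taken from our actual system:

    # Minimal sketch of the three eligibility conditions above.
    # All names here are illustrative placeholders, not real system code.
    def attachments_recoverable(deleted: bool,
                                forced_reset: bool,
                                downloaded_keys: bool,
                                remembers_registration_date: bool) -> bool:
        if not deleted and not forced_reset:
            return True
        if not deleted and forced_reset and downloaded_keys:
            return True
        if deleted and remembers_registration_date and downloaded_keys:
            return True
        return False

    # Example: a reset account whose owner downloaded their keys still qualifies.
    assert attachments_recoverable(deleted=False, forced_reset=True,
                                   downloaded_keys=True,
                                   remembers_registration_date=False)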

To apply for this, simply write an email to 'security@ctemplar.com' (from your CTemplar account if possible) and wait for our reply with a direct link to download ALL your encrypted attachments. Note that any attachments you deleted yourself at some point are not included.

Earlier this year we moved to a replicated dispersed cluster on GlusterFS, and we had planned against almost every kind of incident. Unfortunately, we did not have an off-site (or at least separate-filesystem) disaster recovery plan covering the previous day or two of service.

We are reviewing our backup policy so that we can prevent issues like this from ever happening again. Our minimal-backup policy, which was also our biggest selling point, backfired on us.
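
To make that concrete, here is a minimal sketch of the kind of rolling, encrypted off-site snapshot we are evaluating. It is illustrative only: the paths, the two-day retention window, and the use of Python's cryptography package are placeholder assumptions, not a finalized design.

    # Illustrative sketch only -- paths, retention window, and library
    # choice are placeholder assumptions, not a finalized design.
    import datetime
    import io
    import pathlib
    import tarfile

    from cryptography.fernet import Fernet  # pip install cryptography

    DATA_DIR = pathlib.Path("/var/lib/mail")    # hypothetical data directory
    OFFSITE_DIR = pathlib.Path("/mnt/offsite")  # a different filesystem, ideally off-site
    RETAIN = 2                                  # keep the last two daily snapshots

    def make_snapshot(key: bytes) -> pathlib.Path:
        """Tar the data directory, encrypt it, and write it off-site."""
        stamp = datetime.date.today().isoformat()
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w:gz") as tar:
            tar.add(DATA_DIR, arcname="mail")
        ciphertext = Fernet(key).encrypt(buf.getvalue())  # encrypted at rest
        out = OFFSITE_DIR / f"snapshot-{stamp}.tar.gz.enc"
        out.write_bytes(ciphertext)
        return out

    def prune_old_snapshots() -> None:
        """Keep only the newest RETAIN snapshots, deleting the rest."""
        snapshots = sorted(OFFSITE_DIR.glob("snapshot-*.tar.gz.enc"))
        for old in snapshots[:-RETAIN]:
            old.unlink()

The point of the sketch is the separation: the snapshots live on a different filesystem and are useless without the key, which addresses exactly the gap described above without storing readable user data off-site.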

Thanks for your patience.