Replay capture delayed
Incident Report for PostHog
Resolved
This incident has been resolved.
Posted Jul 11, 2024 - 14:36 UTC
Update
We've downgraded this incident and marked ingestion as operational now that we have duplicate ingestion infrastructure.

Replay is working normally, and we are continuing to process the delayed recordings.
Posted Jul 10, 2024 - 13:23 UTC
Update
We've duplicated our ingestion infrastructure so that we can protect current recordings from the delay.

You should no longer see any delay in ingestion of current recordings.

We'll continue to ingest the delayed recordings in the background.
Posted Jul 10, 2024 - 11:19 UTC
Update
We're continuing to work on increasing ingestion throughput.
Sorry for the continued interruption.
Posted Jul 10, 2024 - 09:25 UTC
Update
We're continuing to slowly catch up with ingestion. We're being a little cautious, as we don't want to overwhelm Kafka while we're making solid progress.

We appreciate that delays like this are super frustrating, and we're really grateful for your patience 🙏
Posted Jul 09, 2024 - 14:00 UTC
Update
We've continued to monitor ingestion overnight. Some Kafka partitions are completely caught up, so some people won't experience any delay.

Unfortunately, others are still lagging, so you will still see delayed availability of recordings.

Really sorry for the continued interruption!
Posted Jul 09, 2024 - 05:56 UTC
Update
We're continuing to monitor recovery. Apologies for the delay!
Posted Jul 08, 2024 - 18:13 UTC
Monitoring
We've confirmed that the config rollback has resolved the problem, but we've kept ingestion throttled to ensure systems can recover.

We're slowly increasing the ingestion rate to allow recovery and will keep monitoring.

Sorry for the interruption.
Posted Jul 08, 2024 - 14:05 UTC
Identified
A recent config change unexpectedly impacted processing speed during ingestion of recordings.

The change has been rolled back, and we're monitoring for recovery.
Posted Jul 08, 2024 - 11:31 UTC
This incident affected: US Cloud 🇺🇸 (Event and Data Ingestion).