All Systems Operational

PostHog.com: Operational (100.0% uptime over the last 90 days)

US Cloud 🇺🇸: Operational (99.98% uptime over the last 90 days)
- App: Operational (99.97% uptime)
- Event and Data Ingestion Success: Operational (100.0% uptime)
- Event and Data Ingestion Lag: Operational (99.95% uptime)
- Feature Flags and Experiments: Operational (99.99% uptime)
- Session Replay Ingestion: Operational (100.0% uptime)
- Destinations: Operational (100.0% uptime)
- API /query Endpoint: Operational (99.99% uptime)

EU Cloud 🇪🇺: Operational (99.99% uptime over the last 90 days)
- App: Operational (99.97% uptime)
- Event and Data Ingestion Success: Operational (100.0% uptime)
- Event and Data Ingestion Lag: Operational (100.0% uptime)
- Feature Flags and Experiments: Operational (99.98% uptime)
- Session Replay Ingestion: Operational (100.0% uptime)
- Destinations: Operational (100.0% uptime)
- API /query Endpoint: Operational (100.0% uptime)

Support APIs: Operational (100.0% uptime over the last 90 days)
- Update Service: Operational (100.0% uptime)
- License Server: Operational (100.0% uptime)

AWS US 🇺🇸: Operational
- AWS ec2-us-east-1: Operational
- AWS elb-us-east-1: Operational
- AWS rds-us-east-1: Operational
- AWS elasticache-us-east-1: Operational
- AWS kafka-us-east-1: Operational

AWS EU 🇪🇺: Operational
- AWS elb-eu-central-1: Operational
- AWS elasticache-eu-central-1: Operational
- AWS rds-eu-central-1: Operational
- AWS ec2-eu-central-1: Operational
- AWS kafka-eu-central-1: Operational
System metrics (charts load interactively on the live page):
- US Ingestion End to End Time
- US Decide Endpoint Response Time
- US App Response Time
- US Event/Data Ingestion Response Time
- EU Ingestion End to End Time
- EU App Response Time
- EU Decide Endpoint Response Time
- EU Event/Data Ingestion Endpoint Response Time
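The charts above are populated client-side on the live page. If you want to read the same component statuses programmatically, a minimal sketch along the following lines should work, assuming the page is hosted on Atlassian Statuspage and exposes its standard read-only /api/v2/summary.json endpoint (an assumption, not stated on this page):

    # Minimal sketch: fetch current component statuses from the public status API.
    # STATUS_URL is an assumption about where the status page is served from.
    import json
    import urllib.request

    STATUS_URL = "https://status.posthog.com/api/v2/summary.json"  # assumed Statuspage host

    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        summary = json.load(resp)

    # Overall indicator, e.g. "All Systems Operational".
    print(summary["status"]["description"])

    # Per-component status, mirroring the listing above.
    for component in summary.get("components", []):
        print(f'{component["name"]}: {component["status"]}')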
Jun 14, 2025

No incidents reported today.

Jun 13, 2025

No incidents reported.

Jun 12, 2025
Resolved - The GCP incident has largely recovered. We'll review how we can improve our response to incidents like this in the future.
Jun 12, 23:44 UTC
Investigating - Google Cloud Platform (GCP) is currently experiencing a wide-scale outage: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW

All of our systems are operational, with the exception of some workloads that depend on Google:
- Google Auth
- Batch exports to Google Cloud Storage
- Batch exports to BigQuery

Jun 12, 18:45 UTC
Jun 11, 2025

No incidents reported.

Jun 10, 2025
Resolved - We've caught up on backlogs, deployed a fix, and everything should be back to normal. Thanks for your patience!
Jun 10, 16:22 UTC
Monitoring - We've spotted that a small number of cohorts are stuck in a recalculating state, and that a larger number are taking longer than 24 hours to recalculate automatically as they should. We've identified the issue and deployed a fix.
Jun 10, 15:41 UTC
Jun 9, 2025

No incidents reported.

Jun 8, 2025

No incidents reported.

Jun 7, 2025

No incidents reported.

Jun 6, 2025
Resolved - All queries are running as expected.
Jun 6, 22:34 UTC
Monitoring - We have found and remediated the issue. Query times have already improved. We are just waiting on our infrastructure to fully recover before closing this issue out.
Jun 6, 15:21 UTC
Investigating - We've been alerted to an increase in query times. We're currently investigating the issue, and will provide an update once we identify the root cause.
Jun 6, 13:44 UTC
Jun 5, 2025

No incidents reported.

Jun 4, 2025
Resolved - The errors have been resolved.
Jun 4, 18:46 UTC
Investigating - We're seeing elevated errors loading the PostHog interface. We're investigating and will update you as we know more.
Jun 4, 16:02 UTC
Jun 3, 2025

No incidents reported.

Jun 2, 2025
Resolved - The ingestion delay incident has been resolved.
Jun 2, 18:56 UTC
Identified - Due to delays in a maintenance process, our data processing infrastructure is running behind, which is causing inaccuracies in the reporting tools. No data has been lost, and the system should catch up shortly.
Jun 2, 12:50 UTC
Resolved - This incident has been resolved.
Jun 2, 14:38 UTC
Update - The situation is back to normal. We traced the root cause to our networking stack.

We're preparing a long-term fix for it.

Thanks for your patience!

Jun 2, 13:38 UTC
Update - The situation seems to have calmed down; we're investigating the root cause.
Jun 2, 12:46 UTC
Investigating - We've spotted that something has gone wrong. We're currently investigating the issue, and will provide an update soon.
Jun 2, 12:39 UTC
Jun 1, 2025

No incidents reported.

May 31, 2025
Resolved - The backlog has been fully processed and event ingestion is back to normal. Thank you for bearing with us and apologies for the disruption.
May 31, 05:29 UTC
Update - We are working through the lagged backlog and continuing to monitor progress.
May 30, 21:22 UTC
Update - We have increased consumer resources to speed up recovery and are continuing to monitor the ingestion rate.
May 30, 17:55 UTC
Update - We identified another related issue and rolled out the appropriate fix. The lag should be coming down, and we are continuing to monitor it.
May 30, 15:36 UTC
Monitoring - We identified the issue and rolled out a fix. The event lag is dropping, and we are continuing to monitor it.
May 30, 12:23 UTC
Investigating - We're currently falling behind on event ingestion. No data loss has occurred, and we're actively investigating the issue.
May 30, 11:48 UTC