All Systems Operational

PostHog.com: Operational (100.0% uptime over the past 90 days)

US Cloud 🇺🇸: Operational (99.98% uptime over the past 90 days)
  App: Operational (99.97% uptime)
  Event and Data Ingestion: Operational (99.95% uptime)
  Feature Flags and Experiments: Operational (99.99% uptime)
  Session Replay Ingestion: Operational (100.0% uptime)
  Destinations: Operational (100.0% uptime)

EU Cloud 🇪🇺: Operational (99.96% uptime over the past 90 days)
  App: Operational (99.88% uptime)
  Event and Data Ingestion: Operational (99.98% uptime)
  Feature Flags and Experiments: Operational (99.98% uptime)
  Session Replay Ingestion: Operational (100.0% uptime)
  Destinations: Operational (100.0% uptime)

Support APIs: Operational (100.0% uptime over the past 90 days)
  Update Service: Operational (100.0% uptime)
  License Server: Operational (100.0% uptime)
AWS US 🇺🇸 Operational
AWS ec2-us-east-1 Operational
AWS elb-us-east-1 Operational
AWS rds-us-east-1 Operational
AWS elasticache-us-east-1 Operational
AWS kafka-us-east-1 Operational
AWS EU 🇪🇺 Operational
AWS elb-eu-central-1 Operational
AWS elasticache-eu-central-1 Operational
AWS rds-eu-central-1 Operational
AWS ec2-eu-central-1 Operational
AWS kafka-eu-central-1 Operational
System metrics: US Ingestion End to End Time, US Decide Endpoint Response Time, US App Response Time, US Event/Data Ingestion Response Time, EU Ingestion End to End Time, EU App Response Time, EU Decide Endpoint Response Time, EU Event/Data Ingestion Endpoint Response Time
Apr 26, 2025

No incidents reported today.

Apr 25, 2025
Resolved - We identified undetected underprovisioning in one of our network components.

We have scaled it up and are working on a fix to mitigate this long term.

Thank you for your patience.

Apr 25, 11:03 UTC
Update - Performance and error rate are back to normal levels.

We're still investigating the root cause of this issue.

Apr 25, 09:45 UTC
Update - We are continuing to investigate this issue.

Notice about US: this incident never affected the US environment; the "partial outage" status shown for the US was incorrect and will be corrected later. Apologies for the inconvenience.

Apr 25, 09:32 UTC
Update - The error rate has gone down; we're still looking for the root cause.
Apr 25, 09:28 UTC
Investigating - Elevated error rates are coming up again; we're investigating.
Apr 25, 08:52 UTC
Monitoring - We identified a surge in memory usage and workload eviction events. We scaled up the feature flags service and the web app to mitigate.

We're monitoring this.

Apr 25, 08:20 UTC
Update - The situation has calmed down after scaling up resources. We're still investigating the root cause.

Notice: an earlier message reported that this affected the US region. That was incorrect; this incident affects only the EU region. Apologies for the initial misreporting.

Apr 25, 08:12 UTC
Update - We are continuing to investigate this issue.
Apr 25, 08:03 UTC
Investigating - We're experiencing an elevated level of API errors, including feature flags, and are currently looking into the issue.
Apr 25, 08:02 UTC
Apr 24, 2025

No incidents reported.

Apr 23, 2025
Resolved - This incident has been resolved.
Apr 23, 02:06 UTC
Monitoring - We're monitoring the ingestion pipeline, as it processes the delayed messages. We're estimating that the system will fully recover within an hour.
Apr 23, 00:01 UTC
Update - We are still investigating intermittent latency spikes in the event ingestion pipeline. Events are still being processed with a delay, which should decrease over time.
Apr 22, 21:53 UTC
Update - We are still investigating the root cause of the issue. Events are still delayed, but the delay is no longer increasing. We hope to have a resolution shortly.
Apr 22, 15:26 UTC
Investigating - Our data processing infrastructure is running behind, which is causing inaccuracies in the reporting tools. No data has been lost, and the system should be caught up shortly.
Apr 22, 13:22 UTC
Apr 22, 2025
Resolved - This incident was resolved over the weekend.
Apr 22, 08:50 UTC
Monitoring - We've shed load and haven't seen errors recur yet. We'll continue monitoring this over the weekend.
Apr 17, 22:59 UTC
Investigating - The API query endpoint is throwing intermittent 500 errors due to capacity limits on our end. We are working to fix this and to make the errors clearer.
If known valid queries are failing with 500s, we recommend retrying them with exponential backoff (see the sketch below).

Apr 17, 19:56 UTC
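
For reference, a minimal sketch of the retry-with-exponential-backoff approach recommended in the update above, written in Python with the requests library. The query URL, authentication header, and payload shape are illustrative assumptions, not details taken from this page.

import time
import requests

API_URL = "https://us.posthog.com/api/projects/<project_id>/query"  # assumed query endpoint; substitute your project ID
HEADERS = {"Authorization": "Bearer <personal_api_key>"}             # assumed bearer-token auth; substitute your key

def query_with_backoff(payload, max_attempts=5, base_delay=1.0):
    """POST a query, retrying intermittent 5xx responses with exponential backoff."""
    for attempt in range(max_attempts):
        response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
        if response.status_code < 500:
            response.raise_for_status()  # surface 4xx errors immediately instead of retrying
            return response.json()
        # 5xx from the server: wait 1s, 2s, 4s, ... before the next attempt
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"Query still failing with 5xx after {max_attempts} attempts")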
Apr 21, 2025

No incidents reported.

Apr 20, 2025

No incidents reported.

Apr 19, 2025

No incidents reported.

Apr 18, 2025
Resolved - Bug fixed; ingestion workers have been scaled back up and lag is recovering rapidly. No data loss should be observable.
Apr 18, 11:05 UTC
Monitoring - We've identified the root cause of the issue. We are reprocessing exception events and continuing to monitor to make sure the pipeline fully recovers.
Apr 18, 11:02 UTC
Identified - We are currently experiencing downtime in our error tracking data pipeline while a bug is being resolved. No data loss has occurred.
Apr 18, 09:47 UTC
Apr 17, 2025
Apr 16, 2025

No incidents reported.

Apr 15, 2025
Resolved - After adding more database capacity, feature flag evaluation has recovered to normal levels.

We are closing this incident now but will keep monitoring.
We're working on a long-term fix.

Apologies for the inconvenience.

Apr 15, 07:15 UTC
Monitoring - We saw a surge in feature flag evaluations and have increased backend and database capacity. We are seeing the first signs of recovery.
Apr 15, 06:54 UTC
Investigating - US: We're experiencing an elevated level of feature flags API errors and are currently investigating.
Apr 15, 06:18 UTC
Apr 14, 2025

No incidents reported.

Apr 13, 2025

No incidents reported.

Apr 12, 2025
Resolved - We've resolved the issue and ingestion has caught up to real time.
Apr 12, 17:06 UTC
Update - We're keeping a close eye on our ingestion delay. Events might take up to 35 minutes to show up inside PostHog in our EU Cloud. No data has been lost.
Apr 12, 15:15 UTC
Investigating - Our EU data processing infrastructure is running behind, which is causing inaccuracies in the reporting tools. No data has been lost, and the system should catch up shortly. We're monitoring it closely.
Apr 12, 13:18 UTC