Team Experiments Q1 2025 goals #10144
### Q4 2024 Objectives

This quarter, we're focusing on more feature improvements to help users create comprehensive and easy-to-analyze experiments. These improvements will make our product more mature and complete.

### Objective 1: Finish HogQL rewrite

We will migrate Experiments to HogQL, making result calculations more reliable and performant. This will also enable the addition of new features listed below.

### Objective 2: Multiple experiment goals supported + visualized

We will add support for multiple goal metrics in a single experiment, allowing them to be visualized together and making it easier to interpret results across all metrics simultaneously.

### Objective 3: Reusable experiment metrics

We will add the ability to create metric sets that users can save and reuse. This will reduce friction, improve maintainability, and decrease the likelihood of errors.

### Objective 4: Review and adjust methodology with a statistician

We will review our current methodology to ensure it is both accurate and easy for our users to understand.
### Q1 2025 Objectives

This quarter, we will bring new feature improvements to make Experiments more advanced. We'll also focus on improving quality across different areas: data, documentation, and codebase. This will help us move faster and deliver a better product.
> **Reviewer:** What features do our competitors have that we don't, and which of those features are the most important to close the feature gap?

> **Author:** I've used my judgment to prioritize the proposed features, taking into account how much we can realistically build in a single quarter.
#### Objective 1: New features
- [Timeseries chart for deltas and credible intervals](https://github.com/PostHog/posthog/issues/26931) @jurajmajerik
- [Winsorization](https://github.com/PostHog/posthog/issues/26060) to exclude extreme values from analyses. @andehen
- [New data collection calculator](https://github.com/PostHog/posthog/issues/26933) @andehen
> **Reviewer:** How frequently is this a problem for our customers? What % of customers would this impact?

> **Author:** I assume you're asking about winsorization here. We don't have data for it, but I suspect many of our users won't even notice there's a data quality issue caused by outliers. We absolutely need to solve this, as it impacts correctness. Even if it affects only 10% of experiments, that means 10% of experiments are producing unreliable results. Correctness is a top priority for experiments, and we should proactively fix this, even if customers aren't explicitly asking for it or aware of the issue.
- [Filtering for experiment results](https://github.com/PostHog/posthog/issues/26934) @danielbachhuber
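For context on the winsorization item above: winsorizing clips each metric value to chosen quantile bounds instead of dropping outliers, so a single extreme observation can't dominate an experiment's mean. A minimal sketch, assuming a simple nearest-rank quantile and an illustrative revenue-per-user sample; the actual thresholds and implementation are up to the linked issue:

```python
import math

def winsorize(values, lower=0.01, upper=0.99):
    """Clip each value to the empirical lower/upper quantile bounds.

    Quantiles use the nearest-rank method on the sorted sample, so the
    clip bounds are always actual observed values.
    """
    ordered = sorted(values)
    n = len(ordered)
    lo = ordered[max(0, math.ceil(lower * n) - 1)]
    hi = ordered[min(n - 1, math.ceil(upper * n) - 1)]
    return [min(max(v, lo), hi) for v in values]

# Illustrative revenue-per-user sample with one extreme purchase:
revenues = [12, 25, 18, 9, 30, 22, 10_000]
clipped = winsorize(revenues, lower=0.0, upper=0.85)
# The 10,000 outlier is clipped to 30 (the 0.85-quantile bound);
# every other value is unchanged: [12, 25, 18, 9, 30, 22, 30]
```

The key property is that extreme values are pulled in to a plausible bound rather than discarded, so sample sizes (and hence exposure counts) stay intact.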
#### Objective 2: Data & statistics
- Complete the [migration to updated statistical methods](https://github.com/PostHog/posthog/issues/26713).
- Document each statistical method in detail.
- Write a practical "manual" chapter for every metric type.
- Address neglected cases, such as "average count per user."
- Improve diagnostics to provide more detailed information, like flagging missing exposure data. @andehen

#### Objective 3: Data Warehouse integration @danielbachhuber
- Ensure a great experience for our pilot users.
- Add support for funnel experiments.
> **Reviewer:** How might you quantify "ensure a great experience for our pilot users"? Also, what does success look like with respect to adoption numbers?

> **Author:** Feel free to clarify this further and think about what you'd like it to be. In past quarterly planning, I've seen objectives like "Get positive feedback from 5 pilot customers" specified.
- Graduate the integration out of beta.

#### Objective 4: UX & UI
- Polish the reusable metrics UI. @jurajmajerik
- Add new summary banners to support multiple metrics.
- Improve product analytics insights when transitioning from experiments.
- [Polish no-code experiments UI](https://github.com/PostHog/posthog/issues/26936)
- Add [in-product tooltips](https://github.com/PostHog/posthog/issues/26937).

#### Objective 5: Codebase quality
- Improve test coverage for query runners and add missing test cases for legacy code.
- Add E2E tests for the basic flows once the multi-metrics UI stabilizes.
- Avoid global updates from modals that require page reloads on Cancel.
- Refactor experiment logic to remove dependencies on other logics, improve reusability, and delete dead code.
- Set up monitoring for query timeouts.
- Audit Sentry to ensure we're tracking the most relevant issues.
> **Reviewer:** I feel like it usually makes more sense, and is easier for people to get excited about, if each person is responsible for a single objective (or sometimes two). So instead of having five objectives with everyone responsible for all five, is there a way to divvy these up and find a theme within them? For example, do some of the "new features" also fall into the "Data & statistics" section, with one person taking on the bulk of that work?
> **Author:** I agree, and I've been thinking about that too, but I've been struggling to come up with such a structure. For example, Daniel has been contributing significantly across all five areas, and I suspect he'll continue doing so next quarter. So what's the point of assigning Anders to "Data & statistics" when we'll all likely end up working on it anyway? Or take the new data collection calculator: it suits Anders well since it's a statistical problem, but it's also a UI/UX problem, and he'll implement it end to end.
>
> I'm struggling with how to break these up so that a single person is responsible for each big objective. We could have one person responsible for an objective with others assigned to subtasks, but what would that mean in practice? Would the objective owner somehow oversee the subtasks? I feel like that's not really how we work here. Anyway, I'm open to more feedback on how to restructure this 🙏