
implement count aggregates for group by #145


Merged

Conversation

hallettj
Collaborator

This completes the basic functionality for group-by started in #144 by implementing all forms of count aggregations.
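As a rough illustration of what "all forms" covers, here is a minimal sketch of how the three count shapes (star count, column count, and distinct column count) could be expressed as accumulators inside a MongoDB `$group` stage. The field names, binding names, and function are hypothetical and are not taken from the connector's source.

```rust
use mongodb::bson::{doc, Bson, Document};

/// Illustrative only: one way the three count forms could compile to
/// accumulators inside a $group stage.
fn count_accumulators_sketch() -> Document {
    doc! {
        "$group": {
            // group dimensions (hypothetical)
            "_id": { "album_id": "$albumId" },
            // star count: one per document in the group
            "count_star": { "$sum": 1 },
            // column count: only documents where the column is non-null
            "count_title": {
                "$sum": { "$cond": [{ "$ne": ["$title", Bson::Null] }, 1, 0] }
            },
            // distinct column count: collect the distinct values here,
            // then count them with $size in a later stage
            "distinct_titles": { "$addToSet": "$title" }
        }
    }
}
```

A follow-up stage such as `{ "$addFields": { "count_distinct_titles": { "$size": "$distinct_titles" } } }` would then turn the collected set into the distinct count.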

Contributor

@daniel-chambers daniel-chambers left a comment


LGTM!

@hallettj
Collaborator Author

I'm waiting for #144 to be merged before merging this PR.

hallettj added a commit that referenced this pull request Mar 1, 2025
Implements most of the functionality for the capability `query.aggregates.group_by`. There are still a couple of things to follow up on.

Counts are not implemented for group by queries yet. I'll follow up on those in [ENG-1568](https://linear.app/hasura/issue/ENG-1568/[mongodb]-implement-count-for-group-by). (Counts are implemented in #145 which can be merged after this PR is merged.)

There is a bug involving multiple references to the same relationship that should be resolved. I'll follow up in [ENG-1569](https://linear.app/hasura/issue/ENG-1569).

While working on this I removed the custom "count" aggregation - it is redundant, and I've been meaning to do that for a while. Users can use the standard count aggregations instead.

There is a change in here that explicitly converts aggregate result values for "average" and "sum" aggregations to the result types declared in the schema. This is necessary to avoid errors in response serialization for groups when aggregating over 128-bit decimal values. I applied the same type conversion to group aggregates and to root aggregates for consistency. This does mean there will be some loss of precision in those cases, but it also means we won't get back a JSON string in some cases and a JSON number in others.
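A minimal sketch of the kind of conversion described above, assuming the schema declares the aggregate's result type as a double; the stage shape and field name are illustrative, not the connector's actual output:

```rust
use mongodb::bson::{doc, Document};

/// Illustrative only: coerce an "average"/"sum" result to the type declared
/// in the schema so it serializes as a JSON number rather than a string.
fn convert_result_type_sketch() -> Document {
    doc! {
        "$addFields": {
            // Averaging Decimal128 values yields a Decimal128, which would
            // otherwise serialize as a JSON string.
            "price_avg": { "$convert": { "input": "$price_avg", "to": "double" } }
        }
    }
}
```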
Base automatically changed from jessehallett/eng-1486-mongodb-implement-group-by to main March 1, 2025 22:31
@hallettj hallettj force-pushed the jessehallett/eng-1568-mongodb-implement-count-for-group-by branch from 0a7a13a to fceb283 on March 3, 2025 18:03
@hallettj hallettj merged commit c44aef9 into main Mar 3, 2025
1 check passed
@hallettj hallettj deleted the jessehallett/eng-1568-mongodb-implement-count-for-group-by branch March 3, 2025 18:41
hallettj added a commit that referenced this pull request Mar 3, 2025
…#146)

The logic for count aggregations over grouped data in #145 was an improvement over what was already in place for ungrouped data. Instead of leaving two separate code paths with different logic, I unified both to use the new logic from #145.

This allowed removing unnecessary uses of the `$facet` stage, which forks the aggregation pipeline. Previously every aggregate used a separate facet. Now facets are only needed for incompatibly-grouped data: a query that combines ungrouped aggregates with groups, or that combines either of those with field selection. This required additional changes to response processing to remove the facet-unpacking logic. The new system does mean there are a couple of places where we have to explicitly fill in null or zero results for aggregate queries with no matching rows.
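Here is a rough sketch of the facet rule described above: the pipeline only forks when one query needs incompatibly-shaped outputs, such as grouped aggregates alongside plain field selection. Stage contents and names are placeholders, not the connector's generated pipeline.

```rust
use mongodb::bson::{doc, Document};

/// Illustrative only: a single $facet stage carrying two incompatible
/// result shapes; a query needing only one shape would skip $facet entirely.
fn combined_query_facet_sketch() -> Document {
    doc! {
        "$facet": {
            // grouped aggregates go down one branch...
            "groups": [
                { "$group": { "_id": "$albumId", "count": { "$sum": 1 } } }
            ],
            // ...while field selection goes down another
            "rows": [
                { "$project": { "title": 1, "price": 1 } }
            ]
        }
    }
}
```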

While going over the aggregate logic I noticed some unescaped field references when referencing aggregate result names. I fixed these to use `ColumnRef`, which escapes names. While I was at it I removed the last couple of uses of the old `getField` helper and replaced them with the new-and-improved `ColumnRef`.