diff --git a/CHANGELOG.md b/CHANGELOG.md index 0b114efe61c49..065bc96c846e7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -655,7 +655,7 @@ Event Triggers support has been added for MS SQL Server. Now, you can invoke ext - cli: fix performance regression with large metadata in `metadata apply` - cli: fix error reporting in `metadata apply` command (#8280) - server: query runtime performance optimizations -- server: fix bug that had disabled expression-based indexes in Postgress variants (fix Zendesk 5146) +- server: fix bug that had disabled expression-based indexes in Postgres variants (fix Zendesk 5146) - server: add optionality to additional postgres-client-cert fields: sslcert, sslkey and sslpassword ## v2.8.1 @@ -1452,7 +1452,7 @@ Response 2: - server: add metadata inconsistency information in `reload_metadata` API call - server: add custom function for case insensitive lookup in session variable in request transformation - server: Improved error messaging for `test_webhook_transform` metadata API endpoint -- server: Webhook Tranforms can now produce `x-www-url-formencoded` bodies. +- server: Webhook Transforms can now produce `x-www-url-formencoded` bodies. - server: Webhook Transforms can now delete request/response bodies explicitly. - server: Fix truncation of session variables with variable length column types in MSSQL (#8158) - server: improve performance of `replace_metadata` for large schemas @@ -1508,7 +1508,7 @@ The optimization can be enabled using the - server: MSSQL generates correct SQL when object relationships are null. - console: add support for remote database relationships - console: enable support for update permissions for mssql -- cli: skip tls verfication for all API requests when `insecure-skip-tls-verify` flag is set (fix #4926) +- cli: skip tls verification for all API requests when `insecure-skip-tls-verify` flag is set (fix #4926) - server: fix issues working with read-only DBs by reverting the need for storing required SQL functions in a `hdb_lib` schema in the user's DB - server: Fix experimental sql optimization read from `HASURA_GRAPHQL_EXPERIMENTAL_FEATURES` or `--experimental-features` @@ -1676,7 +1676,7 @@ count ( - server: fixes JSON ser/de backwards incompatibility introduced for metadata parsing and 'create_remote_relationship' queries (#7906) - console: add sample context section to webhook transforms - cli: `hasura metadata diff` shows diff with more context in directory mode -- cli: revert change to split metadata related to remote schemas into seperate files (introduced in v2.1.0-beta.2) +- cli: revert change to split metadata related to remote schemas into separate files (introduced in v2.1.0-beta.2) ## v2.1.0-beta.3 @@ -1760,11 +1760,11 @@ source. - console: fix v2 metadata imports - console: design cleanup Modify and Add Table forms (close #7454) - console: enable custom graphql root fields for mssql under modify tab -- cli: split remote schema permissions metadata into seperate files (#7033) +- cli: split remote schema permissions metadata into separate files (#7033) - cli: support action request transforms in metadata - cli: make `--database-name` optional in `migrate` subcommands when using a single database (#7434) - cli: support absolute paths in --envfile (#5689) -- cli: split remote schema permissions metadata into seperate files (#7033) +- cli: split remote schema permissions metadata into separate files (#7033) ## v2.0.10 @@ -1802,7 +1802,7 @@ source. 
- console: fix missing cross-schema computed fields in permission builder - console: add time limits setting to security settings - cli: add support for `network` metadata object -- cli: `hasura migrate apply --all-databases` will return a non zero exit code if operation failed on atleast one database (#7499) +- cli: `hasura migrate apply --all-databases` will return a non zero exit code if operation failed on at least one database (#7499) - cli: `migrate create --from-server` creates the migration and marks it as applied on the server - cli: support `query_tags` in metadata - cli: add `hasura deploy` command @@ -1967,7 +1967,7 @@ NOTE: This only includes the diff between v2.0.0 and v2.0.0-beta.2 - console: update connect database form with SSL certificates - console: add drop table functionality to MS SQL Server tables - console: allow renaming data sources -- console: show error notification for table and cloumn names exceeding 63 characters and trim migration names exceeding 255 characters +- console: show error notification for table and column names exceeding 63 characters and trim migration names exceeding 255 characters - cli: fix version command using stderr as output stream (#6998) ## v2.0.0-alpha.11 @@ -2586,7 +2586,7 @@ arguments. - server: add action-like URL templating for event triggers and remote schemas (fixes #2483) - server: change `created_at` column type from `timestamp` to `timestamptz` for scheduled triggers tables (fix #5722) - server: allow configuring timeouts for actions (fixes #4966) -- server: fix bug which arised when renaming a table which had a manual relationship defined (close #4158) +- server: fix bug which arose when renaming a table which had a manual relationship defined (close #4158) - server: limit the length of event trigger names (close #5786) **NOTE:** If you have event triggers with names greater than 42 chars, then you should update their names to avoid running into Postgres identifier limit bug (#5786) - server: enable HASURA_GRAPHQL_PG_CONN_LIFETIME by default to reclaim memory @@ -2597,7 +2597,7 @@ arguments. - server: fix event trigger cleanup on deletion via replace_metadata (fix #5461) (#6137) **WARNING**: This can cause significant load on PG on startup if you have lots of event triggers. Delay in starting up is expected. - console: add notifications (#5070) -- cli: fix bug in metadata apply which made the server aquire some redundant and unnecessary locks (close #6115) +- cli: fix bug in metadata apply which made the server acquire some redundant and unnecessary locks (close #6115) - cli: fix cli-migrations-v2 image failing to run as a non root user (close #4651, close #5333) - cli: fix issue with cli binary on latest Mac (Big Sur) (fix #5462) - docs: add docs page on networking with docker (close #4346) (#4811) @@ -2869,7 +2869,7 @@ using this flag is insecure since verification is not carried out. - console: update graphiql explorer to support operation transform (#4567) - console: make GraphiQL Explorer taking the whole viewport (#4553) -- console: fix table columns type comparision during column edit (close #4125) (#4393) +- console: fix table columns type comparison during column edit (close #4125) (#4393) - cli: allow initialising project in current directory (fix #4560) #4566 - cli: remove irrelevant flags from init command (close #4508) (#4549) - docs: update migrations docs with config v2 (#4586) @@ -2898,7 +2898,7 @@ available for `admin` role requests. 
To enable this for other roles, start the s $ graphql-engine --database-url serve --dev-mode ``` -In case you want to disable `internal` field for `admin` role requests, set `--admin-internal-errors` option to `false` or or set `HASURA_GRAPHQL_ADMIN_INTERNAL_ERRORS` env variable to `false` +In case you want to disable `internal` field for `admin` role requests, set `--admin-internal-errors` option to `false` or set `HASURA_GRAPHQL_ADMIN_INTERNAL_ERRORS` env variable to `false` ```bash $ graphql-engine --database-url serve --admin-internal-errors false @@ -3035,7 +3035,7 @@ For example, see [here](https://hasura.io/docs/latest/graphql/core/api-reference - server: support inserting unquoted bigint, and throw an error if value overflows the bounds of the integer type (fix #576) (fix #4368) - console: change react ace editor theme to eclipse (close #4437) - console: fix columns reordering for relationship tables in data browser (#4483) -- console: format row count in data browser for readablity (#4433) +- console: format row count in data browser for readability (#4433) - console: move pre-release notification tooltip msg to top (#4433) - console: remove extra localPresets key present in migration files on permissions change (close #3976) (#4433) - console: make nullable and unique labels for columns clickable in insert and modify (#4433) @@ -3044,7 +3044,7 @@ For example, see [here](https://hasura.io/docs/latest/graphql/core/api-reference - docs: add API docs for using environment variables as webhook urls in event triggers - server: fix recreating action's permissions (close #4377) - server: make the graceful shutdown logic customizable (graceful shutdown on the SIGTERM signal continues to be the default) -- docs: add reference docs for CLI (clsoe #4327) (#4408) +- docs: add reference docs for CLI (close #4327) (#4408) ## `v1.2.0-beta.4` @@ -3080,7 +3080,7 @@ The order, collapsed state of columns and rows limit is now persisted across pag - server: reserved keywords in column references break parser (fix #3597) #3927 - server: fix postgres specific error message that exposed database type on invalid query parameters (#4294) - server: manage inflight events when HGE instance is gracefully shutdown (close #3548) -- server: fix an edge case where some events wouldn't be processed because of internal erorrs (#4213) +- server: fix an edge case where some events wouldn't be processed because of internal errors (#4213) - server: fix downgrade not working to version v1.1.1 (#4354) - server: `type` field is not required if `jwk_url` is provided in JWT config - server: add a new field `claims_namespace_path` which accepts a JSON Path for looking up hasura claim in the JWT token (#4349) diff --git a/architecture/sql-server.md b/architecture/sql-server.md index ecc1a83b9966c..56da0d3bfabbd 100644 --- a/architecture/sql-server.md +++ b/architecture/sql-server.md @@ -89,7 +89,7 @@ Once you have Hasura running: ### Replace direct database access with flexible JSON APIs over HTTP - Modern workloads in the cloud or end-user web/mobile apps need a flexible and secure API over HTTP to access data -- Use Hasura to to get a data API instantly, instead of having to build and maintain an API server +- Use Hasura to get a data API instantly, instead of having to build and maintain an API server ## How Hasura Works @@ -111,7 +111,7 @@ Because of its compiler-like architecture Hasura can avoid N+1 issues entirely. 
There are 3 key aspects to this: **Compiling into a SQL query:** -- Instead of resolving a GraphQL query piece by piece, Hasura tries to compile parts of the query to a single data source, into a single query that an upstread source can understand - in this case, a SQL query for SQL Server. +- Instead of resolving a GraphQL query piece by piece, Hasura tries to compile parts of the query to a single data source, into a single query that an upstream source can understand - in this case, a SQL query for SQL Server. - This is especially impactful when dealing with authorization rules since they can be compiled into the same query as the data fetch instead of having to make multiple requests to the data source. - Read more about Hasura’s [compiler architecture](https://hasura.io/blog/architecture-of-a-high-performance-graphql-to-sql-server-58d9944b8a87/) and its [subscription mechanism](https://github.com/hasura/graphql-engine/blob/master/architecture/live-queries.md). @@ -175,7 +175,7 @@ Here are a few examples of types of authorization rules that can be implemented #### Ex 4: Allow roles to inherit from other roles - Compose roles to intelligently merge into a single permission policy - Hasura composes roles preserving row and field level access using a "cell based nullification" merge algorithm -- This allows Hasura users to specifcy different levels of ownership and visibility on the same data models easily +- This allows Hasura users to specify different levels of ownership and visibility on the same data models easily - Example: Allow a user to fetch all the fields of their own profile, but only some data for profiles that aren't theirs ```yaml # Role 1 @@ -214,7 +214,7 @@ Next up on our roadmap for Hasura + SQL Server: - [#7073](https://github.com/hasura/graphql-engine/issues/7073) Support for stored procedures & functions - [#7074](https://github.com/hasura/graphql-engine/issues/7074) Mutations: Run inserts, updates, deletes, stored procedures and transactions securely on SQL Server over a GraphQL API -- [#7075]() Event triggers: Trigger HTTP webhooks with atomic capture and atleast once guarantee whenever data changes inside the database [(read more)](https://hasura.io/docs/latest/graphql/core/event-triggers/index.html) +- [#7075]() Event triggers: Trigger HTTP webhooks with atomic capture and at least once guarantee whenever data changes inside the database [(read more)](https://hasura.io/docs/latest/graphql/core/event-triggers/index.html) - [#7076](https://github.com/hasura/graphql-engine/issues/7076) Remote Joins: Join models in SQL Server to models from other API services (GraphQL or REST) Please do upvote / subscribe to the issues above to stay updated! We estimate these to be available over incremental releases in July/Aug 2021. diff --git a/architecture/streaming-subscriptions.md b/architecture/streaming-subscriptions.md index 9bf7bb157d9e3..5a4bb42376747 100644 --- a/architecture/streaming-subscriptions.md +++ b/architecture/streaming-subscriptions.md @@ -235,7 +235,7 @@ of subscriptions. We have seen how Hasura makes performant queries. But what about Authorization? -The naive approach would be to fetch the data from the database (involves IO), apply authorization checks for each element in the response (involes IO and compute) and then send the result back to the client. The problem with this approach is that the initial data fetch is not definitive. 
The data is being filtered based on rules post the fetching from the database and as data becomes bigger, compute and latency becomes high and so does the load on the database. +The naive approach would be to fetch the data from the database (involves IO), apply authorization checks for each element in the response (involves IO and compute) and then send the result back to the client. The problem with this approach is that the initial data fetch is not definitive. The data is being filtered based on rules post the fetching from the database and as data becomes bigger, compute and latency becomes high and so does the load on the database. It is impossible to load large streams of data in memory and apply authorization rules for filtering data before sending it back to the client. The Hasura approach to this problem is two fold. Make the data fetching performant and make authorization declarative by applying them at the query layer. diff --git a/community/boilerplates/auth-servers/passportjs-jwt-roles/README.md b/community/boilerplates/auth-servers/passportjs-jwt-roles/README.md index c8518ba872ea3..1eeb77af2cc50 100644 --- a/community/boilerplates/auth-servers/passportjs-jwt-roles/README.md +++ b/community/boilerplates/auth-servers/passportjs-jwt-roles/README.md @@ -68,7 +68,7 @@ TODO: document on how to deploy on docker. You can also have a look at [this docker-compose gist](https://gist.github.com/plmercereau/b8503c869ffa2b5d4e42dc9137b56ae1) to see how I use this service in a docker stack with Hasura and [Traefik](https://traefik.io/). -### Deploy locally (developpment) +### Deploy locally (development) ```bash # Clone the repo @@ -172,7 +172,7 @@ curl -H "Content-Type: application/json" \ http://localhost:8080/login ``` -It will then send back user information including the JWT in the same format as the above `/signup` endoint. +It will then send back user information including the JWT in the same format as the above `/signup` endpoint. You can use this boilerplate as a webhook server in using the `/webhook` endpoint to fetch a webhook token: diff --git a/community/boilerplates/event-triggers/aws-lambda/README.md b/community/boilerplates/event-triggers/aws-lambda/README.md index 19475684fbd24..c50772bfce531 100644 --- a/community/boilerplates/event-triggers/aws-lambda/README.md +++ b/community/boilerplates/event-triggers/aws-lambda/README.md @@ -5,7 +5,7 @@ Sample cloud functions that can be triggered on changes in the database using Gr These are organized in language-specific folders. **NOTE** -Some of the language/platforms are work in progress. We welcome contributions for the WIP langauages. See issues and the following checklist: +Some of the language/platforms are work in progress. We welcome contributions for the WIP languages. See issues and the following checklist: | Folder name | Use-case| Node.js(6) | Python | Java | Go | C# | Ruby |-------------|---------|:--------:|:------:|:----:|:---:|:---:|:---: diff --git a/community/boilerplates/event-triggers/aws-lambda/nodejs6/mutation/README.md b/community/boilerplates/event-triggers/aws-lambda/nodejs6/mutation/README.md index 9efbb3b0c38b5..8c1815a29c063 100644 --- a/community/boilerplates/event-triggers/aws-lambda/nodejs6/mutation/README.md +++ b/community/boilerplates/event-triggers/aws-lambda/nodejs6/mutation/README.md @@ -30,7 +30,7 @@ Create a lambda function in AWS. This will be our webhook. 6. Add API gateway as a trigger. 7. Add an API to API gateway. 8. Upload the zip from previous step. 
The handler function of your lambda will be `index.handler`. -9. Add the following enviroment variables in your lambda config: +9. Add the following environment variables in your lambda config: 1. `ADMIN_SECRET`: this is the admin secret key you configured when you setup HGE. 2. `HGE_ENDPOINT`: the URL on which you HGE instance is running. diff --git a/community/boilerplates/event-triggers/aws-lambda/nodejs8/mutation/README.md b/community/boilerplates/event-triggers/aws-lambda/nodejs8/mutation/README.md index 2025c2741417f..33c5a81056b67 100644 --- a/community/boilerplates/event-triggers/aws-lambda/nodejs8/mutation/README.md +++ b/community/boilerplates/event-triggers/aws-lambda/nodejs8/mutation/README.md @@ -30,7 +30,7 @@ Create a lambda function in AWS. This will be our webhook. 6. Add API gateway as a trigger. 7. Add an API to API gateway. 8. Upload the zip from previous step. The handler function of your lambda will be `index.handler`. -9. Add the following enviroment variables in your lambda config: +9. Add the following environment variables in your lambda config: 1. `ACCESS_KEY`: this is the access key you configured when you setup HGE. 2. `HGE_ENDPOINT`: the URL on which you HGE instance is running. diff --git a/community/boilerplates/event-triggers/aws-lambda/python/mutation/README.md b/community/boilerplates/event-triggers/aws-lambda/python/mutation/README.md index e6b5c4ea1f532..a396bc9852f5b 100644 --- a/community/boilerplates/event-triggers/aws-lambda/python/mutation/README.md +++ b/community/boilerplates/event-triggers/aws-lambda/python/mutation/README.md @@ -26,7 +26,7 @@ Create a lambda function in AWS. This will be our webhook. 4. Add API gateway as a trigger. 5. Add an API to API gateway. 6. Add the code in `mutation.py`. The handler function of your lambda will be the `mutation.lambda_handler`. -7. Add the following enviroment variables in your lambda config: +7. Add the following environment variables in your lambda config: 1. `ADMIN_SECRET`: this is the admin secret key you configured when you setup HGE. 2. `HGE_ENDPOINT`: the URL on which you HGE instance is running. diff --git a/community/boilerplates/event-triggers/aws-lambda/ruby/mutation/README.md b/community/boilerplates/event-triggers/aws-lambda/ruby/mutation/README.md index 6843164fd7b4c..763ed090be040 100644 --- a/community/boilerplates/event-triggers/aws-lambda/ruby/mutation/README.md +++ b/community/boilerplates/event-triggers/aws-lambda/ruby/mutation/README.md @@ -31,7 +31,7 @@ Create a lambda function in AWS. This will be our webhook. 1. Add API gateway as a trigger (in this example you can use Open as the security option). 1. Add an API to API gateway. 1. Add the code in `lambda_function.rb` to the lambda function editor. The handler function of your lambda will be the lambda_handler. -1. Add the following enviroment variables in your lambda config: +1. Add the following environment variables in your lambda config: 1. `ACCESS_KEY`: this is the access key you configured when you setup HGE (`HASURA_GRAPHQL_ADMIN_SECRET` env variable). 1. `HGE_ENDPOINT`: the URL on which you HGE instance is running. 
diff --git a/community/boilerplates/event-triggers/google-cloud-functions/README.md b/community/boilerplates/event-triggers/google-cloud-functions/README.md index 86aa111b8ac5b..6ff89005cc584 100644 --- a/community/boilerplates/event-triggers/google-cloud-functions/README.md +++ b/community/boilerplates/event-triggers/google-cloud-functions/README.md @@ -2,7 +2,7 @@ Sample cloud functions that can be triggered on changes in the database using GraphQL Engine's Event Triggers **NOTE** -Some of the language/platforms are work in progress. We welcome contributions for the WIP langauages. See issues and the following checklist: +Some of the language/platforms are work in progress. We welcome contributions for the WIP languages. See issues and the following checklist: | Folder name | Use-case| Node.js(8) | Node.js(6) | Python |-------------|---------|:--------:|:------:|:----: diff --git a/community/boilerplates/observability/enterprise/README.md b/community/boilerplates/observability/enterprise/README.md index d20a034281597..ba34121267ef8 100644 --- a/community/boilerplates/observability/enterprise/README.md +++ b/community/boilerplates/observability/enterprise/README.md @@ -45,7 +45,7 @@ The default GraphQL Engine service uses migration and metadata configuration in ## FAQs -**How can I enable metrics in the the Source Health panel** +**How can I enable metrics in the Source Health panel** > Currently, only Postgres supports source health check metrics. diff --git a/dc-agents/DOCUMENTATION.md b/dc-agents/DOCUMENTATION.md index 16d4ed732989e..64d1bfe5deb5f 100644 --- a/dc-agents/DOCUMENTATION.md +++ b/dc-agents/DOCUMENTATION.md @@ -198,7 +198,7 @@ If the agent only supports table columns that are always nullable, then it shoul ### Interpolated Queries -Interpolated queries are lists of strings and scalars that represent applied templates of analagous form to [`select * from users where id = `, 5]. +Interpolated queries are lists of strings and scalars that represent applied templates of analogous form to [`select * from users where id = `, 5]. By declaring support for the `interpolated_queries` capability the Hasura admin understands that they will be able to define native queries that leverage this cabability through the agent. @@ -221,7 +221,7 @@ All scalar types must also support the built-in comparison operators `eq`, `gt`, Aggregate functions can be defined by adding an `aggregate_functions` property to the scalar type capabilities object. The `aggregate_functions` property must be an object mapping aggregate function names to their result types. -Aggregate function names must be must be valid GraphQL names. +Aggregate function names must be valid GraphQL names. Result types must be valid scalar types. Update column operators are operators that can defined to allow custom mutation operations to be performed on columns of the particular scalar type. @@ -515,7 +515,7 @@ and here is the resulting query request payload: } ``` -The implementation of the service is responsible for intepreting this data structure and producing a JSON response body which is compatible with both the query and the schema. +The implementation of the service is responsible for interpreting this data structure and producing a JSON response body which is compatible with both the query and the schema. 
Let's break down the request:
@@ -555,7 +555,7 @@ The rows returned by the query must be put into the `rows` property array in the
 There are three properties that are used to control pagination of queried data:
-* `aggregates_limit`: The maximum number of rows to consider in aggregations calculated and returned in the `aggregrates` property. `aggregates_limit` does not influence the rows returned in the `rows` property. It will only be used if there are aggregates in the query.
+* `aggregates_limit`: The maximum number of rows to consider in aggregations calculated and returned in the `aggregates` property. `aggregates_limit` does not influence the rows returned in the `rows` property. It will only be used if there are aggregates in the query.
 * `limit`: The maximum number of rows to return from a query in the `rows` property. `limit` does not influence the rows considered by aggregations.
 * `offset`: The index of the first row to return. This affects the rows returned, and also the rows considered by aggregations.
@@ -735,7 +735,7 @@ Values (as used in `value` in `binary_op` and the `values` array in `binary_arr_
 Columns (as used in `column` fields in `binary_op`, `binary_arr_op`, `unary_op` and in `column`-typed Values) are specified as a column `name`, a `column_type` to denote the scalar type of the column, as well as optionally a `path` to the table that contains the column. If the `path` property is missing/null or an empty array, then the column is on the current table. However, if the path is `["$"]`, then the column is on the table involved in the Query that the whole `where` expression is from. At this point in time, these are the only valid values of `path`.
-Here is a simple example, which correponds to the predicate "`first_name` is John and `last_name` is Smith":
+Here is a simple example, which corresponds to the predicate "`first_name` is John and `last_name` is Smith":
 ```json
 {
diff --git a/dc-agents/sdk/README.md b/dc-agents/sdk/README.md
index 1012db8f346a4..9baec6dc2bd26 100644
--- a/dc-agents/sdk/README.md
+++ b/dc-agents/sdk/README.md
@@ -30,7 +30,7 @@ The entire SDK is meant to serve as a template, so feel free to modify/remove an
 ### Making Changes to the Reference Agent
-The Reference Agent source is available in `./reference` and the architecture is docuemented in `README_DATA_CONNECTORS.md`.
+The Reference Agent source is available in `./reference` and the architecture is documented in `README_DATA_CONNECTORS.md`.
 To adapt the existing agent to your needs, you should probably first look at the `/capabilities` and `/schema` endpoints declared in `./reference/src/index.ts` - these then delegate implementation to `capabilities.ts` and `config.ts` respectively.
 A guide to following this modification process is currently in development.
@@ -65,8 +65,8 @@ Note: If you are using a hybrid setup of native and docker services (especially While the reference agent should provide a fleshed out example of how an agent can be developed and what capabilities are possible, the following principles should also provide guidance how we recommend an agent be developed and structured: * Capabilities & Self Describing - Your agent should describe itself via the `capabilities` feature. -* Stateless - Your agent should be (transparently) stateless, each request carries all the infomation required -* Defer logic to backend - Your agent should endevour to offload processing to its backend if possible +* Stateless - Your agent should be (transparently) stateless, each request carries all the information required +* Defer logic to backend - Your agent should endeavour to offload processing to its backend if possible * Type-safe - Your agent should expect and return types as described in the OpenAPI schema * Backwards compatible - Your agent should preserve backwards compatibility as it evolves * Testing - Your agent should be tested with the provided test-suite diff --git a/dc-agents/sqlite/README.md b/dc-agents/sqlite/README.md index 68a7eb52197c1..4807f1a1755c8 100644 --- a/dc-agents/sqlite/README.md +++ b/dc-agents/sqlite/README.md @@ -64,7 +64,7 @@ Note: Boolean flags `{FLAG}` can be provided as `1`, `true`, `t`, `yes`, `y`, or | `DB_READONLY` | `{FLAG}` | `false` | Makes databases readonly. | | `DB_ALLOW_LIST` | `DB1[,DB2]*` | Any Allowed | Restrict what databases can be connected to. | | `DB_PRIVATECACHE` | `{FLAG}` | Shared | Keep caches between connections private. | -| `DEBUGGING_TAGS` | `{FLAG}` | `false` | Outputs xml style tags in query comments for deugging purposes. | +| `DEBUGGING_TAGS` | `{FLAG}` | `false` | Outputs xml style tags in query comments for debugging purposes. | | `PRETTY_PRINT_LOGS` | `{FLAG}` | `false` | Uses `pino-pretty` to pretty print request logs | | `LOG_LEVEL` | `fatal` \| `error` \| `info` \| `debug` \| `trace` \| `silent` | `info` | The minimum log level to output | | `METRICS` | `{FLAG}` | `false` | Enables a `/metrics` prometheus metrics endpoint. diff --git a/dc-agents/sqlite/test/TESTING.md b/dc-agents/sqlite/test/TESTING.md index 0560744a38f4c..2581172986135 100644 --- a/dc-agents/sqlite/test/TESTING.md +++ b/dc-agents/sqlite/test/TESTING.md @@ -23,9 +23,9 @@ docker compose up -d ![Screen Shot 2022-10-07 at 11 06 59 AM](https://user-images.githubusercontent.com/49927862/194598623-5dad962f-a1b0-4db6-9b97-66e71000e344.png) -6. Aftering adding the agent, click `Connect Database` and for the **Data Source Driver** choose `sqlite agent` from the dropdown menu. +6. After adding the agent, click `Connect Database` and for the **Data Source Driver** choose `sqlite agent` from the dropdown menu. 7. 
For **Database Display Name** type in `sqlite-test` and for **db** type in `/chinook.db` and click `Connect Database` ![Screen Shot 2022-10-07 at 11 16 34 AM](https://user-images.githubusercontent.com/49927862/194600350-8131459e-cd91-4ac8-9fcc-3d1b2e491a1f.png) -You should now have this new databse listed on the left: ![Screen Shot 2022-10-07 at 11 12 52 AM](https://user-images.githubusercontent.com/49927862/194599628-952d61e7-1ab8-4c25-8aa2-a9883b9fe6bb.png) +You should now have this new database listed on the left: ![Screen Shot 2022-10-07 at 11 12 52 AM](https://user-images.githubusercontent.com/49927862/194599628-952d61e7-1ab8-4c25-8aa2-a9883b9fe6bb.png) diff --git a/frontend/docs/tags.md b/frontend/docs/tags.md index aa2b34a593ed3..f8f55555a64ff 100644 --- a/frontend/docs/tags.md +++ b/frontend/docs/tags.md @@ -6,7 +6,7 @@ There is 3 tag groups : scope, type, meta. You should use the internal generators to create libraries and application, they will have selectors for the tags groups. -Next here, you can find all of the tags that we have, along side the explanation and the import rule matrix. +Next here, you can find all of the tags that we have, alongside the explanation and the import rule matrix. ## `scope` tag group @@ -34,7 +34,7 @@ Shared libraries, used across all other libraries Here is the import rule matrix : -| ![Can row import colum](./can-import-icon.png) | `scope:shared` | `scope:console` | `scope:nx-plugins` | +| ![Can row import column](./can-import-icon.png) | `scope:shared` | `scope:console` | `scope:nx-plugins` | | ---------------------------------------------- | :------------: | :-------------: | :----------------: | | `scope:shared` | ✅ | ⛔ | ⛔ | | `scope:console` | ✅ | ✅ | ⛔ | @@ -82,7 +82,7 @@ End to end tests projects Here is the import rule matrix : -| ![Can row import colum](./can-import-icon.png) | `type:utils` | `type:data` | `type:ui` | `type:feature` | `type:app` | `type:storybook` | `type:e2e` | +| ![Can row import column](./can-import-icon.png) | `type:utils` | `type:data` | `type:ui` | `type:feature` | `type:app` | `type:storybook` | `type:e2e` | | ---------------------------------------------- | :----------: | :---------: | :-------: | :------------: | :--------: | :--------------: | :--------: | | `type:utils` | ✅ | ⛔ | ⛔ | ⛔ | ⛔ | ⛔ | ⛔ | | `type:data` | ✅ | ✅ | ⛔ | ⛔ | ⛔ | ⛔ | ⛔ | @@ -94,7 +94,7 @@ Here is the import rule matrix : ## `meta` tag group -A meta tags is used to add artifical boundaries when needed +A meta tags is used to add artificial boundaries when needed ### Tag list : @@ -114,7 +114,7 @@ A package is a library published to NPM. And it can only depends on publishable Here is the import rule matrix : -| ![Can row import colum](./can-import-icon.png) | `meta:legacy` | `meta:package` | +| ![Can row import column](./can-import-icon.png) | `meta:legacy` | `meta:package` | | ---------------------------------------------- | :-----------: | :------------: | | `meta:legacy` | ✅ | ⛔ | | `meta:package` | ⛔ | ✅ | diff --git a/frontend/libs/console/legacy-ce/src/lib/features/ControlPlane/README.md b/frontend/libs/console/legacy-ce/src/lib/features/ControlPlane/README.md index debe56ec9fca7..d04fb7d6fcb3e 100644 --- a/frontend/libs/console/legacy-ce/src/lib/features/ControlPlane/README.md +++ b/frontend/libs/console/legacy-ce/src/lib/features/ControlPlane/README.md @@ -5,7 +5,7 @@ those reflect the current localdev environment. 
Please update it accordingly if ## Important note -The `InputMaybe` type is not generated by the codegen tool becuase of issues in the version that we use. +The `InputMaybe` type is not generated by the codegen tool because of issues in the version that we use. For now, ensure you add this line at the top of the file whenever you regenerate the types: Reference to the issue: https://github.com/dotansimha/graphql-code-generator/issues/7774 diff --git a/frontend/libs/console/legacy-ce/src/lib/features/IsFeatureEnabled/README.md b/frontend/libs/console/legacy-ce/src/lib/features/IsFeatureEnabled/README.md index 8e2d214416e73..e7e7cb94e907e 100644 --- a/frontend/libs/console/legacy-ce/src/lib/features/IsFeatureEnabled/README.md +++ b/frontend/libs/console/legacy-ce/src/lib/features/IsFeatureEnabled/README.md @@ -152,7 +152,7 @@ React APIs includes "reactivity" by definition. Vanilla JavaScript APIs cannot o _What about getting the APIs accepting an array of features instead of just one?_ -We wil evaluate if the feature is really needed because the TypeScript gymnastics needed for the `doMatch`/`doNotMatch` objects returned by the APIs are not trivial, and making them working with arrays is even worse. +We will evaluate if the feature is really needed because the TypeScript gymnastics needed for the `doMatch`/`doNotMatch` objects returned by the APIs are not trivial, and making them working with arrays is even worse. _What about specifying only the required properties in the compatibility object?_ diff --git a/frontend/libs/console/legacy-ce/src/lib/features/SchemaRegistry/README.md b/frontend/libs/console/legacy-ce/src/lib/features/SchemaRegistry/README.md index 1c8c566d3eb82..69acf2e9af0e3 100644 --- a/frontend/libs/console/legacy-ce/src/lib/features/SchemaRegistry/README.md +++ b/frontend/libs/console/legacy-ce/src/lib/features/SchemaRegistry/README.md @@ -4,7 +4,7 @@ Note: The command uses the config file `frontend/schema-registry-graphql-codegen ## Important note -The `InputMaybe` type is not generated by the codegen tool becuase of issues in the version that we use. +The `InputMaybe` type is not generated by the codegen tool because of issues in the version that we use. For now, ensure you add this line at the top of the file whenever you regenerate the types: Reference to the issue: https://github.com/dotansimha/graphql-code-generator/issues/7774 diff --git a/frontend/libs/console/legacy-ce/src/lib/features/components/README.md b/frontend/libs/console/legacy-ce/src/lib/features/components/README.md index f7d7034ad71a6..7d080fa72efd5 100644 --- a/frontend/libs/console/legacy-ce/src/lib/features/components/README.md +++ b/frontend/libs/console/legacy-ce/src/lib/features/components/README.md @@ -1,3 +1,3 @@ -# Why is this direcotory here? +# Why is this directory here? This is a place for components that are not atomic and therefore do not belong in `new-components` but are common across many features. diff --git a/frontend/libs/open-api-to-graphql/docs/subscriptions.md b/frontend/libs/open-api-to-graphql/docs/subscriptions.md index a290171dc02e4..ccbca343de680 100644 --- a/frontend/libs/open-api-to-graphql/docs/subscriptions.md +++ b/frontend/libs/open-api-to-graphql/docs/subscriptions.md @@ -172,7 +172,7 @@ startServer(); ## GrapQL client -If any GraphQL (WS) client subscribed to the route defined by the callback (`#/components/callbacks/DevicesEvent`), it will get the content transfered by PubSub. 
+If any GraphQL (WS) client subscribed to the route defined by the callback (`#/components/callbacks/DevicesEvent`), it will get the content transferred by PubSub. ```javascript import axios from 'axios' diff --git a/install-manifests/azure-container-with-pg/README.md b/install-manifests/azure-container-with-pg/README.md index fe17e96d450e6..8b6da836e694c 100644 --- a/install-manifests/azure-container-with-pg/README.md +++ b/install-manifests/azure-container-with-pg/README.md @@ -1,6 +1,6 @@ # Hasura GraphQL Engine on Azure -_This manifest is about provisioning Hasura with a new database. If you're looking for a manifest that provisions Hasura to use with an exising Postgres databse, checkout [`../azure-container`](../azure-container) directory._ +_This manifest is about provisioning Hasura with a new database. If you're looking for a manifest that provisions Hasura to use with an exising Postgres database, checkout [`../azure-container`](../azure-container) directory._ Click the button below to create a Hasura GraphQL Engine container on [Azure Container diff --git a/preload-mimalloc/mimalloc/readme.md b/preload-mimalloc/mimalloc/readme.md index 003cd8cf7a4de..2aeef527e058f 100644 --- a/preload-mimalloc/mimalloc/readme.md +++ b/preload-mimalloc/mimalloc/readme.md @@ -54,7 +54,7 @@ It also includes a robust way to override the default allocator in [Windows](#ov - __first-class heaps__: efficiently create and use multiple heaps to allocate across different regions. A heap can be destroyed at once instead of deallocating each object separately. - __bounded__: it does not suffer from _blowup_ \[1\], has bounded worst-case allocation - times (_wcat_) (upto OS primitives), bounded space overhead (~0.2% meta-data, with low + times (_wcat_) (up to OS primitives), bounded space overhead (~0.2% meta-data, with low internal fragmentation), and has no internal points of contention using only atomic operations. - __fast__: In our benchmarks (see [below](#performance)), _mimalloc_ outperforms other leading allocators (_jemalloc_, _tcmalloc_, _Hoard_, etc), @@ -85,14 +85,14 @@ Note: the `v2.x` version has a new algorithm for managing internal mimalloc page abstraction layer to make it easier to port and separate platform dependent code (in `src/prim`). Fixed C++ STL compilation on older Microsoft C++ compilers, and various small bug fixes. * 2022-12-23, `v1.7.9`, `v2.0.9`: Supports building with [asan](#asan) and improved [Valgrind](#valgrind) support. - Support abitrary large alignments (in particular for `std::pmr` pools). + Support arbitrary large alignments (in particular for `std::pmr` pools). Added C++ STL allocators attached to a specific heap (thanks @vmarkovtsev). Heap walks now visit all object (including huge objects). Support Windows nano server containers (by Johannes Schindelin,@dscho). Various small bug fixes. * 2022-11-03, `v1.7.7`, `v2.0.7`: Initial support for [Valgrind](#valgrind) for leak testing and heap block overflow detection. Initial - support for attaching heaps to a speficic memory area (only in v2). Fix `realloc` behavior for zero size blocks, remove restriction to integral multiple of the alignment in `alloc_align`, improved aligned allocation performance, reduced contention with many threads on few processors (thank you @dposluns!), vs2022 support, support `pkg-config`, . + support for attaching heaps to a specific memory area (only in v2). 
Fix `realloc` behavior for zero size blocks, remove restriction to integral multiple of the alignment in `alloc_align`, improved aligned allocation performance, reduced contention with many threads on few processors (thank you @dposluns!), vs2022 support, support `pkg-config`, .
 * 2022-04-14, `v1.7.6`, `v2.0.6`: fix fallback path for aligned OS allocation on Windows, improve Windows aligned allocation even when compiling with older SDK's, fix dynamic overriding on macOS Monterey, fix MSVC C++ dynamic overriding, fix
@@ -232,7 +232,7 @@ target_link_libraries(myapp PUBLIC mimalloc-static)
 to link with the static library. See `test\CMakeLists.txt` for an example.
 For best performance in C++ programs, it is also recommended to override the
-global `new` and `delete` operators. For convience, mimalloc provides
+global `new` and `delete` operators. For convenience, mimalloc provides
 [`mimalloc-new-delete.h`](https://github.com/microsoft/mimalloc/blob/master/include/mimalloc-new-delete.h) which does this for you -- just include it in a single(!) source file in your project. In C++, mimalloc also provides the `mi_stl_allocator` struct which implements the `std::allocator` interface.
@@ -338,7 +338,7 @@ to make mimalloc more robust against exploits. In particular:
 - All free list pointers are [encoded](https://github.com/microsoft/mimalloc/blob/783e3377f79ee82af43a0793910a9f2d01ac7863/include/mimalloc-internal.h#L396) with per-page keys which is used both to prevent overwrites with a known pointer, as well as to detect heap corruption.
-- Double free's are detected (and ignored).
+- Double frees are detected (and ignored).
 - The free lists are initialized in a random order and allocation randomly chooses between extension and reuse within a page to mitigate against attacks that rely on a predicable allocation order. Similarly, the larger heap blocks allocated by mimalloc from the OS are also address randomized.
@@ -351,7 +351,7 @@ When _mimalloc_ is built using debug mode, various checks are done at runtime to
 - Statistics are maintained in detail for each object size. They can be shown using `MIMALLOC_SHOW_STATS=1` at runtime.
 - All objects have padding at the end to detect (byte precise) heap block overflows.
-- Double free's, and freeing invalid heap pointers are detected.
+- Double frees, and freeing invalid heap pointers are detected.
 - Corrupted free-lists and some forms of use-after-free are detected.
diff --git a/remote-schemas.md b/remote-schemas.md
index 2c96e1dc0c952..9a2a9cc195b69 100644
--- a/remote-schemas.md
+++ b/remote-schemas.md
@@ -12,7 +12,7 @@ Remote schemas are ideal for use cases such as:
 To support custom business logic, you'll need to create a custom GraphQL server (see [boilerplates](community/boilerplates/remote-schemas)) and merge its schema with GraphQL Engine's.
-![remote schems architecture](assets/remote-schemas-arch.png)
+![remote schemas architecture](assets/remote-schemas-arch.png)
 ## Demo (*40 seconds*)
diff --git a/rfcs/apollo-federation.md b/rfcs/apollo-federation.md
index 94b1cb1623837..a4f0ed0b2f7f8 100644
--- a/rfcs/apollo-federation.md
+++ b/rfcs/apollo-federation.md
@@ -551,7 +551,7 @@ database, which might be something that can be optimised in further iterations.
 There are two main functions from the perspective of implementation:
 1. `generateSDL`: This function generates the SDL of a given schema. It uses
-the schema introspection inorder to build the SDL. The schema introspection is
+the schema introspection in order to build the SDL.
The schema introspection is generated while building the parsers. The type definition of the function is: ```haskell generateSDL :: G.SchemaIntrospection -> Text diff --git a/rfcs/column-mutability.md b/rfcs/column-mutability.md index eccc4062e665d..4ab9b5726ceac 100644 --- a/rfcs/column-mutability.md +++ b/rfcs/column-mutability.md @@ -50,7 +50,7 @@ or not the code was amended with the (`class Backend b` type-level) feature flag [^2]: MSSQL would not support upserts, as a concession for us missing an implementation that would distinguish between what columns to insert and what - columns to update owning to `_on_conflict` being part of `insert_` mutations. + columns to update owing to `_on_conflict` being part of `insert_` mutations. Then came [issue #7557](https://github.com/hasura/graphql-engine/issues/7557), and we realised the importance of distinguishing between `GENERATED BY DEFAULT AS @@ -66,7 +66,7 @@ To summarize, it seems our approach to modelling identity columns was inappropriate: We introduced a single _Identity Column_ modelling concept which had to cover for the various idiosyncratic variants between (and within!) databases, and as a result the shared schema generation code needed to know -about different backends' idiosyncracies. +about different backends' idiosyncrasies. ### Why is it important? diff --git a/rfcs/computed-fields-filters-perms-orderby.md b/rfcs/computed-fields-filters-perms-orderby.md index f2f8b092eb7a2..cadf2ee247d2d 100644 --- a/rfcs/computed-fields-filters-perms-orderby.md +++ b/rfcs/computed-fields-filters-perms-orderby.md @@ -68,7 +68,7 @@ AS $function$ $function$ ``` -I should able to fetch an author whose `full_name` is 'Bob Morley' +I should be able to fetch an author whose `full_name` is 'Bob Morley' ```graphql query { @@ -153,7 +153,7 @@ in the permission metadata definition Reference OSS ticket: https://github.com/hasura/graphql-engine/issues/7103 -Enable using computed fields in order by expresssion. For example, fetch authors ordering by `full_name` +Enable using computed fields in order by expression. For example, fetch authors ordering by `full_name` ```graphql query { diff --git a/rfcs/disable-query-and-subscription-root-fields.md b/rfcs/disable-query-and-subscription-root-fields.md index 1e807ed9efa8a..1a6d3412ee7ec 100644 --- a/rfcs/disable-query-and-subscription-root-fields.md +++ b/rfcs/disable-query-and-subscription-root-fields.md @@ -36,7 +36,7 @@ which has columns (message_id, reaciton_name, user_id). The permission on {"message": {"channel": {"workspace": {"members": {"user_id": "x-hasura-user-id"}}}}} ``` -As we go down the chain, our permissions gets more and more nested, refering to +As we go down the chain, our permissions gets more and more nested, referring to the permissions of the parent tables and beyond a point can get quite cumbersome. Let's say in our application we **never** need to access `message_reactions` table directly and is always accessed through `reactions` diff --git a/rfcs/identity-columns.md b/rfcs/identity-columns.md index 7373383f3b039..fe97a7ebcfb09 100644 --- a/rfcs/identity-columns.md +++ b/rfcs/identity-columns.md @@ -86,7 +86,7 @@ of _Identity Columns_ and inform the implementation of GraphQL Engine. * Part of the SQL standard. * Motivation is to standardise DB-supplied identifiers (i.e. autoincrement/serial/..) - * Note: This is a concept distinct from primary keys. Identity Columnss don't introduce + * Note: This is a concept distinct from primary keys. 
Identity Columns don't introduce uniqueness constraints by themselves! * Also provide better semantics than naive auto-increment/serial solutions, by prohibiting updating and inserting of Identity Columns (to an extent), in order to @@ -115,7 +115,7 @@ In a sentence: * Syntax closer to SQL standard: `column GENERATED BY DEFAULT AS IDENTITY`, `column GENERATED ALWAYS AS IDENTITY`. * Implemented on top of `series`. -* Columns `GENERATED BY DEFAULT` may be both `INSERT`ed and and `UPDATE`d. +* Columns `GENERATED BY DEFAULT` may be both `INSERT`ed and `UPDATE`d. * Columns `GENERATED ALWAYS` may be `INSERT`ed (guarded by an `OVERRIDE SYSTEM VALUE` keyword), but never `UPDATE`d. diff --git a/rfcs/input-validations.md b/rfcs/input-validations.md index 8c51cf3dc9a1c..59fcf995b3d4c 100644 --- a/rfcs/input-validations.md +++ b/rfcs/input-validations.md @@ -179,7 +179,7 @@ validation. 200 OK ``` -2. Unsucessful Response +2. Unsuccessful Response The HTTP validation URL should return a optional JSON object with `400` status code to represent failed validation. The object should contain `message` field diff --git a/rfcs/limit-over-join-optimization.md b/rfcs/limit-over-join-optimization.md index 8ce709b82e784..457c1b313976a 100644 --- a/rfcs/limit-over-join-optimization.md +++ b/rfcs/limit-over-join-optimization.md @@ -28,7 +28,7 @@ SELECT * Since join is an expensive operation, it would be useful if we could limit the number of rows it needs to process before running the join. -In SQL, trying to push limits down into each side of the join is *not* a semantic perserving operation. This is because the relationship between the two sides is unspecified, and could be one-to-one, many-to-one, or many-to-many. +In SQL, trying to push limits down into each side of the join is *not* a semantic preserving operation. This is because the relationship between the two sides is unspecified, and could be one-to-one, many-to-one, or many-to-many. For example, in a database of users and streaming providers, a user could be subscribed to multiple providers, and streaming providers provide services to multiple users. trying to get all users and their providers, limit by 10, is different than: diff --git a/rfcs/mssql-update-mutations.md b/rfcs/mssql-update-mutations.md index 608a3d11a1a9f..97ef336e1ea11 100644 --- a/rfcs/mssql-update-mutations.md +++ b/rfcs/mssql-update-mutations.md @@ -21,7 +21,7 @@ Initially we just wanted to support update mutations on MSSQL in the same way that we do on Postgres. However, there are some differences between the two database systems that -warrent closer inspection. +warrant closer inspection. ### Problem diff --git a/rfcs/mssql-upsert-mutations.md b/rfcs/mssql-upsert-mutations.md index 09a0d3903824d..ef9be1444522a 100644 --- a/rfcs/mssql-upsert-mutations.md +++ b/rfcs/mssql-upsert-mutations.md @@ -201,7 +201,7 @@ Generate schema for insert mutations with an `if_matched` clause, with permissio 1. update the `AnnInsert` IR: update `insertIntoTable` and `insertOneIntoTable` (at least) to use the newly introduced `BackendSchema` methods instead of `mkConflictArg`. -1. optionally, update `objectRelationshipInput` and `arrayRelationshipInput` to also use the `BackendSchema` field parser; however, this *may* not be neccesary as nested inserts are not yet supported on SQL Server. +1. optionally, update `objectRelationshipInput` and `arrayRelationshipInput` to also use the `BackendSchema` field parser; however, this *may* not be necessary as nested inserts are not yet supported on SQL Server. 
*To verify:* The generated schema can be verified locally in Hasura Console's Documentation Explorer. This change, if successful, should result in the following generated schema diff for an example `author` table: ```diff diff --git a/rfcs/mutations-mssql.md b/rfcs/mutations-mssql.md index 992629d5a4677..c1088e9839b97 100644 --- a/rfcs/mutations-mssql.md +++ b/rfcs/mutations-mssql.md @@ -22,13 +22,13 @@ Generation and execution of GraphQL insert, delete and update mutations for MSSQ ### Success criteria Taking reference to [Postgres mutations](https://hasura.io/docs/latest/graphql/core/api-reference/graphql-api/mutation.html#graphql-api-mutation) -we should able to generate schema and execute mutations for MSSQL backend. +we should be able to generate schema and execute mutations for MSSQL backend. For example, let's say a table with name `author` is tracked from a MSSQL server backend. Considering insert mutations, **Schema Generation**: -The server should able to generate following GraphQL schema +The server should be able to generate following GraphQL schema ```graphql @@ -48,7 +48,7 @@ type author_mutation_response { ``` **Query Execution**: -The server should able to execute following sample GraphQL mutation +The server should be able to execute following sample GraphQL mutation ```graphql @@ -64,7 +64,7 @@ mutation { ``` **Permissions**: -Users should able to define row-level and column-level permissions for inserts via Metadata API +Users should be able to define row-level and column-level permissions for inserts via Metadata API or Console UI ### How @@ -101,7 +101,7 @@ WITH some_alias AS (SELECT * FROM #temp_table) SELECT (SELECT * FROM some_alias FOR JSON PATH, INCLUDE_NULL_VALUES) AS [returning], count(*) AS [affected_rows] FROM some_alias FOR JSON PATH, WITHOUT_ARRAY_WRAPPER; ``` -For **tables without primary key**, we choose **not** to generate mutations schema atleast in the initial iterations. +For **tables without primary key**, we choose **not** to generate mutations schema at least in the initial iterations. #### Permissions diff --git a/rfcs/mysql-relationships.md b/rfcs/mysql-relationships.md index 6dae105dafb12..4fcc9cbb65f30 100644 --- a/rfcs/mysql-relationships.md +++ b/rfcs/mysql-relationships.md @@ -121,7 +121,7 @@ field: ``` query { - autors { + authors { articles_aggregate { aggregate { count diff --git a/rfcs/openapi-to-hasura-single-action.md b/rfcs/openapi-to-hasura-single-action.md index cd990910eeffa..f09a8551d3814 100644 --- a/rfcs/openapi-to-hasura-single-action.md +++ b/rfcs/openapi-to-hasura-single-action.md @@ -64,7 +64,7 @@ The process of generating Graphql types is the following: 1. Translate the OpenAPI spec into a GraphQL schema using openapi-to-graphql 2. Removing from the schema all the operations but the selected one using microfiber print the resulting schema -3. dividing action definition (everything withing type Query {} or type Mutation {} ) from type definition (anything else) +3. dividing action definition (everything within type Query {} or type Mutation {} ) from type definition (anything else) 4. We should note that we cannot translate all operations in the openapi specification to GraphQL. In this case, the openapi-to-graphql librate will exclude these operations, which will not be among those selectable for translation. The schema generated by openapi-to-graphql, after point 2, is the following: @@ -135,7 +135,7 @@ the remaining are the types used by the action. 
We can extract them by removing ## Deal with translation errors -While `openapi-to-graphql` sometimes fails, we could try a best-effort approach to fix those errors in the original specification and then translate it again. The proposal approach is to make some midifications to the OpenAPI specification before translating it to GraphQL. We can have two possible outcomes +While `openapi-to-graphql` sometimes fails, we could try a best-effort approach to fix those errors in the original specification and then translate it again. The proposal approach is to make some modifications to the OpenAPI specification before translating it to GraphQL. We can have two possible outcomes - The action is modified in such a way that it can be translated to GraphQL. In this case, we can generate the action. - The action is discarded so the other actions can be translated. @@ -162,15 +162,15 @@ We can derive all the Hasura action configurations by the openapi-to-graphql met - the **operation URL** is the path of the operation in the metadata - the **path parameters** of the URL is the list of arguments that are marked as path parameters in the metadata - the parameters to pass in the **query string**: is the list of arguments that are marked as query parameters in the metadata -### Request and response transormation +### Request and response transformation If there were a one-to-one relationship between REST and GraphQL types, there would be no need for any request or response transformation. But, as is stated in the IBM article, to generate GrahpQL types, some names could be sanitized and hence be different from the REST ones. This could lead to broken Hasura action calls. To solve this problem, a layer of request and response transformation is needed to perform the translation of types between the REST and GraphQL worlds. -While in the article this is is done in the generated resolvers, in Hasura action kriti templates must be generated by recursively traversing the GraphQL schema and the OpenAPI specification and used as request and response transformation. +While in the article this is done in the generated resolvers, in Hasura action kriti templates must be generated by recursively traversing the GraphQL schema and the OpenAPI specification and used as request and response transformation. -This is an example of `PetInput` request and response kriti transformation. We artificially renamed the `name` field to in OpenAPI spefication to `$name` to simulate the incompatibility. +This is an example of `PetInput` request and response kriti transformation. We artificially renamed the `name` field to in OpenAPI specification to `$name` to simulate the incompatibility. ```json { diff --git a/rfcs/permissions-mysql.md b/rfcs/permissions-mysql.md index fea02689de865..a3f3cb49d372f 100644 --- a/rfcs/permissions-mysql.md +++ b/rfcs/permissions-mysql.md @@ -20,7 +20,7 @@ The role-based access control feature, often referred to simply as "Permissions" allows Hasura users to restrict what data is returned by queries and admitted by mutations. Several flavors of permissions exist: -**_Column Permissions_** censor the columns that cliens in a given role have access +**_Column Permissions_** censor the columns that clients in a given role have access to (either in Queries or Mutations), by means of an explicit list of columns exposed. 
@@ -68,7 +68,7 @@ datasets they permit we do not necessarily end up with a rectangular dataset: ![Inherited roles diagram](permissions-mysql/Inherited%20roles%20permissions.png) Our data universe however only permits "rectangular" data. In order to -accomodate the complexity resulting from _Inherited Roles_ we make columns that +accommodate the complexity resulting from _Inherited Roles_ we make columns that are particular to a single parent role nullable. For example, in the diagram above we would return `null` for (`Row 5`, `Column B`) and (`Row 2`, `Column D`). @@ -165,7 +165,7 @@ table actually targeted by the mutation. ## Future This document is a product of its time, brought into existence by the -contemporary need to elaborate on how permissons work because the development +contemporary need to elaborate on how permissions work because the development work on MySQL needs to incorporate them. An insight resulting from discussing this subject is that it would be more @@ -178,7 +178,7 @@ we need to also talk about what they apply to. As such it makes for a more elegant exposition to talk about permissons as associated aspects of the subject they act on. -It it therefore expected that this document be superseded by dedicated RFCs on +It is therefore expected that this document be superseded by dedicated RFCs on the subjects of _Queries_, _Mutations_. ## Questions diff --git a/rfcs/rest-openapi-integration.md b/rfcs/rest-openapi-integration.md index 6eae672cc9f22..ad03b650590bf 100644 --- a/rfcs/rest-openapi-integration.md +++ b/rfcs/rest-openapi-integration.md @@ -141,5 +141,5 @@ The query parser that is being used resolves the query, a subsequent method stat # Future Work/ Open Ended Questions -* How to handle authentication in openAPI interative UI? +* How to handle authentication in openAPI interactive UI? * Should we expose different information based on the role of the user? diff --git a/rfcs/seed-data.md b/rfcs/seed-data.md index 87d128d015b37..19df8a48c256c 100644 --- a/rfcs/seed-data.md +++ b/rfcs/seed-data.md @@ -12,7 +12,7 @@ A PR [#3614](https://github.com/hasura/graphql-engine/pull/3614) has been submit #### Approach 2: Delegate adding seed data completely to the user -A user can use whatever interface they want to communicate with the database, let it be GraphQL mutations or any ORM. From this [comment](https://github.com/hasura/graphql-engine/issues/2431#issuecomment-566033630) it is evident that atleast some users have this in mind. +A user can use whatever interface they want to communicate with the database, let it be GraphQL mutations or any ORM. From this [comment](https://github.com/hasura/graphql-engine/issues/2431#issuecomment-566033630) it is evident that at least some users have this in mind. In this approach, everything is left to the user, from connecting to the underlying database to writing and managing seeds. diff --git a/rfcs/source-customization.md b/rfcs/source-customization.md index 38bc60e944fcb..0f9b384ac8a71 100644 --- a/rfcs/source-customization.md +++ b/rfcs/source-customization.md @@ -69,7 +69,7 @@ type MkRootFieldName = Name -> Name withRootFieldNameCustomization :: forall m r a. 
  (MonadReader r m, Has MkRootFieldName r) => MkRootFieldName -> m a -> m a
```

-and add `Has MkRootFieldName r` constraint to `MonadBuildSchema` to allow us to apss the root field name customization
+and add a `Has MkRootFieldName r` constraint to `MonadBuildSchema` to allow us to pass the root field name customization
function through to places where root field names are constructed.

### Namespaces
@@ -91,7 +91,7 @@ type RootFieldMap = InsOrdHashMap RootFieldAlias

### Subscriptions

-JSON result objects for subscriptions are generated by the individual database backends (currently Postgres and MSSQL support subcriptions).
+JSON result objects for subscriptions are generated by the individual database backends (currently Postgres and MSSQL support subscriptions).
If the subscription source has a namespace then we need to wrap the response from the database in an object with the namespace field
before sending it to the websocket.
diff --git a/rfcs/transforms.md b/rfcs/transforms.md
index dfa4a91f433b0..b98a0773f5f2f 100644
--- a/rfcs/transforms.md
+++ b/rfcs/transforms.md
@@ -81,7 +81,7 @@ $url/{{.event.author_id}}
}
```

-For loops will be declared with the `range` identifier and a template fragment referrencing the array to be looped over. The current array element is brought in scope be accessed as `$`.
+For loops will be declared with the `range` identifier and a template fragment referencing the array to be looped over. The current array element is brought into scope and can be accessed as `$`.

We will also support enumeration of array elements with a special syntax:

```
@@ -266,7 +266,7 @@ transforms:
request_content_type: x-www-form-urlencoded
```

-This transforms the body to `key1={{.value1}}&key2={{.value2}}&key3={{$session.x-hasura-user-id}}`. The console can show the output for a given tranformation hence making it clear what is finally going in the body.
+This transforms the body to `key1={{.value1}}&key2={{.value2}}&key3={{$session.x-hasura-user-id}}`. The console can show the output for a given transformation, making it clear what is finally going in the body.

## Real-world example
diff --git a/rfcs/update-permission-check-condition.md b/rfcs/update-permission-check-condition.md
index 3da50485603a3..4fd5e3537718a 100644
--- a/rfcs/update-permission-check-condition.md
+++ b/rfcs/update-permission-check-condition.md
@@ -106,7 +106,7 @@ updated row holds the condition specified with `check`.
  1. It may not make sense to allow inserts, but a check condition on update
     needs to be specified.
- 2. The check conditions maybe different for both insert and update permisisons.
+ 2. The check conditions may be different for both insert and update permissions.

### Implementation
diff --git a/rfcs/v3-descriptions.md b/rfcs/v3-descriptions.md
index acf112e5a647b..3eb7cb7ac4e6f 100644
--- a/rfcs/v3-descriptions.md
+++ b/rfcs/v3-descriptions.md
@@ -103,7 +103,7 @@ type Author {

### `Model` metadata object

-A model can have three diffrent types of descriptions, the number of descriptions correspond
+A model can have three different types of descriptions; the number of descriptions corresponds
to the number of GraphQL APIs that are chosen to expose. At the moment of
writing this document, two types of GraphQL APIs are supported, `select_many` and `select_one`.
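To make the `withRootFieldNameCustomization` signature quoted in the source-customization hunk above concrete, here is a minimal standalone sketch of the Reader/`Has` pattern it describes. The `Has` class, the `Name` alias, and the prefixing example are simplified stand-ins for illustration, not the actual graphql-engine definitions.

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}

module Main where

import Control.Monad.Reader (MonadReader, asks, local, runReader)

type Name = String

-- Stand-in for the RFC's `type MkRootFieldName = Name -> Name`, wrapped in a
-- newtype so it can live in a reader environment.
newtype MkRootFieldName = MkRootFieldName (Name -> Name)

-- Toy `Has` class: read and update one component of the environment.
class Has a env where
  getter :: env -> a
  modifier :: (a -> a) -> env -> env

-- For this sketch the environment *is* the customization itself.
instance Has MkRootFieldName MkRootFieldName where
  getter = id
  modifier f = f

-- Mirrors the signature quoted in the RFC: run an action with a root field
-- name customization installed in the environment.
withRootFieldNameCustomization ::
  (MonadReader r m, Has MkRootFieldName r) => MkRootFieldName -> m a -> m a
withRootFieldNameCustomization custom = local (modifier (const custom))

-- Wherever a root field name is constructed, apply the customization.
mkRootFieldName :: (MonadReader r m, Has MkRootFieldName r) => Name -> m Name
mkRootFieldName name = do
  MkRootFieldName f <- asks getter
  pure (f name)

main :: IO ()
main =
  -- Prefix every root field of a hypothetical source, e.g. "author" -> "my_source_author".
  putStrLn $
    runReader
      (withRootFieldNameCustomization
         (MkRootFieldName ("my_source_" <>))
         (mkRootFieldName "author"))
      (MkRootFieldName id)
```

The point of the pattern is that any code with a `Has MkRootFieldName r` constraint can pick the customization up from the environment without an extra argument being threaded through every call site.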
diff --git a/rfcs/warning-in-replace-metadata-API.md b/rfcs/warning-in-replace-metadata-API.md
index 19720899bdff8..8e580b798cde6 100644
--- a/rfcs/warning-in-replace-metadata-API.md
+++ b/rfcs/warning-in-replace-metadata-API.md
@@ -35,7 +35,7 @@ following for successful calls (with warnings):
}
```

-In case of successfull API call (without warning), the API response remains
+In case of a successful API call (without warning), the API response remains
unchanged. i.e.
```json
{
@@ -81,7 +81,7 @@ We should also include a strict-mode (`fail_on_warning: true`). In strict-mode,
- For unsuccessful API calls, should we show warnings and error separately? Or
  should we include warnings as error?

-## Possible implemetation using `MonadWriter`
+## Possible implementation using `MonadWriter`

Writer monad allows us to accumulate some logs (or warnings in our case) while
performing actions applicatively.
diff --git a/server/COMPILING-ON-MACOS.md b/server/COMPILING-ON-MACOS.md
index 27e42f79681ac..997981b58992b 100644
--- a/server/COMPILING-ON-MACOS.md
+++ b/server/COMPILING-ON-MACOS.md
@@ -91,7 +91,7 @@ If you are re-running this command to update your Mac, you may need to run
   ln -s cabal/dev-sh.project.local cabal.project.local
   ```

- (Copying and pasting allows you to add local projects overrides, which may be needed if you are are planning to make changes to the graphql-engine code, but is not required for simply compiling the code as-is).
+ (Copying and pasting allows you to add local project overrides, which may be needed if you are planning to make changes to the graphql-engine code, but is not required for simply compiling the code as-is).

6. Write the version number of the graphql-server that you are intending to build to the file `server/CURRENT_VERSION`. For example if you are building `v2.13.0` then you can run the following command:
diff --git a/server/CONTRIBUTING.md b/server/CONTRIBUTING.md
index 6cd6058afabaa..e495f06125031 100644
--- a/server/CONTRIBUTING.md
+++ b/server/CONTRIBUTING.md
@@ -79,7 +79,7 @@ If you are on MacOS, or experiencing any errors related to missing dependencies

### IDE Support

-You may want to use [hls](https://github.com/haskell/haskell-language-server)/[ghcide](https://github.com/haskell/ghcide) if your editor has LSP support. A sample configuration has been provided which can be used as follows:
+You may want to use [hls](https://github.com/haskell/haskell-language-server)/[ghcide](https://github.com/haskell/ghcide) if your editor has LSP support. A sample configuration has been provided which can be used as follows:

```
ln -s sample.hie.yaml hie.yaml
diff --git a/server/benchmarks/README.md b/server/benchmarks/README.md
index 54322eb3f431c..05a09d0400d2a 100644
--- a/server/benchmarks/README.md
+++ b/server/benchmarks/README.md
@@ -68,7 +68,7 @@ be ignored.
  are very important; the 90th percentile may be a better target to optimize
  for than the median (50th percentile)

-- ...but keep in mind that **longtail latencies are by definition noisey**: the
+- ...but keep in mind that **longtail latencies are by definition noisy**: the
  99.9th percentile may represent only a handful of samples. Therefore be
  cautious when drawing inferences from a _comparison_ of tail latencies
  between versions.
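The `MonadWriter` idea mentioned at the end of the warning-in-replace-metadata-API hunk above can be sketched in a few lines. This is only an illustration of accumulating warnings on the side and then deciding how to report them; `MetadataWarning`, `applyObject`, and the `fail_on_warning` handling are hypothetical names, not the engine's actual implementation.

```haskell
module Main where

import Control.Monad.Writer (Writer, runWriter, tell)

newtype MetadataWarning = MetadataWarning String
  deriving (Show)

-- Applying one metadata object may emit warnings without failing the call.
applyObject :: String -> Writer [MetadataWarning] ()
applyObject name
  | name == "deprecated_trigger" =
      tell [MetadataWarning ("object '" <> name <> "' uses a deprecated configuration")]
  | otherwise = pure ()

-- Apply everything, collect warnings on the side, and only then decide whether
-- warnings should fail the call (the proposed `fail_on_warning: true` strict mode).
replaceMetadata :: Bool -> [String] -> Either [MetadataWarning] [MetadataWarning]
replaceMetadata failOnWarning objects =
  let ((), warnings) = runWriter (mapM_ applyObject objects)
   in if failOnWarning && not (null warnings)
        then Left warnings -- strict mode: surface the warnings as an error
        else Right warnings -- default: succeed and attach the warnings to the response

main :: IO ()
main = print (replaceMetadata False ["users_table", "deprecated_trigger"])
```

How warnings should be structured, and how they interact with the existing metadata inconsistency reporting, is left open here, as it is in the RFC's open questions.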
diff --git a/server/documentation/deep-dives/migration-guidelines.md b/server/documentation/deep-dives/migration-guidelines.md index 701e4ef506187..2ccedd5ee08a1 100644 --- a/server/documentation/deep-dives/migration-guidelines.md +++ b/server/documentation/deep-dives/migration-guidelines.md @@ -29,7 +29,7 @@ ``` This is because in the former way, the `ALTER TABLE` will acquire an lock on the `hdb_catalog.event_log` table - and will keep the lock until all the rows are transformed to the new data type. The latter method is preffered although + and will keep the lock until all the rows are transformed to the new data type. The latter method is preferred although it does 4 queries, because all the DDL statements do not depend on the data contained in the rows. ## 2. Avoid adding a default value while adding a new column to an existing table diff --git a/server/documentation/tips.md b/server/documentation/tips.md index 39cb2263328a1..6a8adf948032e 100644 --- a/server/documentation/tips.md +++ b/server/documentation/tips.md @@ -52,7 +52,7 @@ We'll use the `TestGraphQLQueryBasicMSSQL` test as an example. ### Start-up graphql-engine -First step stays the same. Start up the relevant databases and graphql-engine in seperate terminals. +First step stays the same. Start up the relevant databases and graphql-engine in separate terminals. We also need mssql for this test, this can be skipped if you're testing postgres for example. @@ -70,7 +70,7 @@ scripts/dev.sh graphql-engine In the case of mssql, we also need to register the database. This can be done in the hasura console but going to the `DATA` tab, then `Manage` button on the left and then `Connect Database` button. Add a mssql database with the -connection string that `scripts/dev.sh mssql` outputed. +connection string that `scripts/dev.sh mssql` outputted. Note: the database name should match the `source` field that tests use. In mssql's case this is usually `mssql`. @@ -100,7 +100,7 @@ cat server/tests-py/queries/graphql_query/basic/setup_mssql.yaml | yaml2json | c We have two options: -1. Take the query from the test you like and run in in graphql. +1. Take the query from the test you like and run in graphql. 2. Extract the query into a separate file: `/tmp/query.yaml`: ```yaml query: | diff --git a/server/forks/hedis/TODO.md b/server/forks/hedis/TODO.md index 49af252d6e46c..6174e9ea33fff 100644 --- a/server/forks/hedis/TODO.md +++ b/server/forks/hedis/TODO.md @@ -55,7 +55,7 @@ function per command. - No real loss of safety for return types - can fail for arguments (list vs. non-empty) 2. "correct" container types for each Redis type: - - Non-empty lists (unconvenient syntax?) + - Non-empty lists (inconvenient syntax?) - Sets - Which set type, Set vs HashSet vs .. - Redis already returns "sets" in the sense that each element exists only @@ -67,7 +67,7 @@ function per command. ## Command Return Types Currently every command returns the result wrapped in `Maybe Reply`. This is -"what Redis returns" but it's unconvenient to use. +"what Redis returns" but it's inconvenient to use. * Return the results "unwrapped" and throw an exception when the reply is an error or can not be decoded to the desired return type. 
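As a companion to the hedis TODO item above about returning results "unwrapped", here is a rough sketch of what an exception-throwing decode helper could look like. The `Reply`, `RedisException`, and `decodeOrThrow` names are invented for illustration and are not the hedis API.

```haskell
module Main where

import Control.Exception (Exception, throwIO)

-- A toy stand-in for a Redis protocol reply.
data Reply
  = Bulk String
  | IntegerReply Integer
  | ErrorReply String
  deriving (Show)

newtype RedisException = RedisException String
  deriving (Show)

instance Exception RedisException

-- Decode a reply to the expected type, throwing when the server reported an
-- error or when the reply has an unexpected shape.
decodeOrThrow :: (Reply -> Maybe a) -> Reply -> IO a
decodeOrThrow decode reply =
  case reply of
    ErrorReply msg -> throwIO (RedisException msg)
    _ ->
      case decode reply of
        Just a -> pure a
        Nothing -> throwIO (RedisException ("unexpected reply: " <> show reply))

-- Example decoder for a command that expects a bulk string (a GET-style reply).
asString :: Reply -> Maybe String
asString (Bulk s) = Just s
asString _ = Nothing

main :: IO ()
main = do
  value <- decodeOrThrow asString (Bulk "hello")
  putStrLn value
```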
diff --git a/server/lib/api-tests/README.md b/server/lib/api-tests/README.md
index 2c5efba895705..b634cf896216c 100644
--- a/server/lib/api-tests/README.md
+++ b/server/lib/api-tests/README.md
@@ -157,7 +157,7 @@ A typical test will look similar to this:
There are times when you would want to debug a test failure by
playing around with the Hasura's Graphql engine or by inspecting the
database. The default behavior of the test suite is to drop all the
-data and the tables onces the test suite finishes. To prevent that,
+data and the tables once the test suite finishes. To prevent that,
you can modify your test module to prevent teardown. Example:

```diff
diff --git a/server/lib/ekg-prometheus/Tutorial.md b/server/lib/ekg-prometheus/Tutorial.md
index 310594dcb090d..0c5b26b034281 100644
--- a/server/lib/ekg-prometheus/Tutorial.md
+++ b/server/lib/ekg-prometheus/Tutorial.md
@@ -189,7 +189,7 @@ Labels are useful for convenient filtering and aggregation of metric
data. In `ekg-prometheus`, metrics are identified by both their name
_and_ their label set, so metrics with the same name but different
label sets are distinct and independent metrics. When working with labelled
-metrics, the constructors of a metrics specification GADT corrrespond to
+metrics, the constructors of a metrics specification GADT correspond to
**classes** of metrics that share the same name.

`ekg-prometheus` also has support for _structuring_ the representation
@@ -365,7 +365,7 @@ deregistration of metrics:
-- assert (sample3 == expectedSample3) $ pure ()
```

-1. Deregistration handles were present in in all previous examples,
+1. Deregistration handles were present in all previous examples,
   but we ignored them for simplicity.

1. The deregistration handle removes all metrics registered by the
diff --git a/server/test-manual/postgres-replicas-with-ssl-client-certs/README.md b/server/test-manual/postgres-replicas-with-ssl-client-certs/README.md
index 0845e18f7eb27..c22fc1762f991 100644
--- a/server/test-manual/postgres-replicas-with-ssl-client-certs/README.md
+++ b/server/test-manual/postgres-replicas-with-ssl-client-certs/README.md
@@ -3,16 +3,16 @@
This directory contains scripts that are useful to debug the handling of ssl
certificates in a non-trivial setup involving a read-replica.

-There are many many ways to put together a SSL setup. Rather than forming a
+There are many ways to put together an SSL setup. Rather than forming a
single comprehensive or prescriptive setup, these are instead useful building
-blocks that can be used to as a starting point for a comlex setup.
+blocks that can be used as a starting point for a complex setup.

We provide a script that can generate fresh server and client certificates,
and a docker-compose file that uses these certificates to setup a database
instance and a read-replica instance. The only authentication mechanism
accepted by the databases is ssl client certificates.

-Note that the ability to specify the SSL certficates to use for a data source
+Note that the ability to specify the SSL certificates to use for a data source
is an Enterprise Edition feature.
## Starting and stopping diff --git a/server/testing-guidelines.md b/server/testing-guidelines.md index 40053319edfd5..1cec234643597 100644 --- a/server/testing-guidelines.md +++ b/server/testing-guidelines.md @@ -49,7 +49,7 @@ For more information on integration tests, see its [README](./lib/api-tests/READ #### Adding property tests When adding property tests, it might be helpful to add some reasoning about how -you extracted the property you are testing. Often times, this can help clarify +you extracted the property you are testing. Oftentimes, this can help clarify the property. Are there any other related properties you can test? Secondly, you should consider the generator(s) you are using. Do they diff --git a/v3/crates/graphql/schema/README.md b/v3/crates/graphql/schema/README.md index 563ea8e7a9655..77fd34c87f912 100644 --- a/v3/crates/graphql/schema/README.md +++ b/v3/crates/graphql/schema/README.md @@ -1,4 +1,4 @@ # schema -Provides functions to resolve the Open DDS metadata, generate the GraphQL scehma +Provides functions to resolve the Open DDS metadata, generate the GraphQL schema from it, and execute queries against the schema. diff --git a/v3/crates/utils/opendds-derive/README.md b/v3/crates/utils/opendds-derive/README.md index 205d1a165feb4..340efc5c0f866 100644 --- a/v3/crates/utils/opendds-derive/README.md +++ b/v3/crates/utils/opendds-derive/README.md @@ -105,7 +105,7 @@ here. - `#[opendd(json_schema(default_exp = "some::function()"))]` - To be used in conjuction with [#[opendd(default)]](#field-level). The given + To be used in conjunction with [#[opendd(default)]](#field-level). The given function should return a json value which is included in the generated schema's `default`. Not needed when the field type has `serde::Serialize` trait implemented. The default JSON value will be inferred using diff --git a/v3/docs/architecture.md b/v3/docs/architecture.md index 8274a20dbf905..3edc4b2925b60 100644 --- a/v3/docs/architecture.md +++ b/v3/docs/architecture.md @@ -83,7 +83,7 @@ structures that are used in the `engine` crate for schema generation. ##### `graphql/schema` -Provides functions to resolve the Open DDS metadata, generate the GraphQL scehma +Provides functions to resolve the Open DDS metadata, generate the GraphQL schema from it, and execute queries against the schema. ##### `graphql/schema/operations` diff --git a/v3/docs/roles-and-annotations.md b/v3/docs/roles-and-annotations.md index a0b2aebe53961..39e4d7de98919 100644 --- a/v3/docs/roles-and-annotations.md +++ b/v3/docs/roles-and-annotations.md @@ -84,13 +84,13 @@ happens in the `normalize_request` function in `lang-graphql`. The important thing to know here is that `lang-graphql` code does not know about `GDS` or our `NamespaceAnnotation` type. All it has to act on is whether a -`Role` (or `Namespace`, to it's eyes) has a key in any namespaced annotations or +`Role` (or `Namespace`, to its eyes) has a key in any namespaced annotations or not. Because we're using associated types the contents are "protected" from `lang-graphql` and so it can't peek inside. 
We're going to want the `user_id` argument to disappear from `user-1`'s schema
-- we do this by using `conditional_namespaced` when contructing schema for the
+- we do this by using `conditional_namespaced` when constructing schema for the
`user_id` command argument itself:

```rust
diff --git a/v3/rfcs/aggregations.md b/v3/rfcs/aggregations.md
index 654b19cd894ef..da036f9d53fcb 100644
--- a/v3/rfcs/aggregations.md
+++ b/v3/rfcs/aggregations.md
@@ -2364,7 +2364,7 @@ field's type (ie. an object `AggregationExpression` for object-typed fields,
and a scalar `AggregationExpression` for scalar-typed fields).

One can also enable or disable the `count`/`countDistinct` special-cased
-aggregations. However, the `countDistinct` function can only be used if the the
+aggregations. However, the `countDistinct` function can only be used if the
`AggregateExpression` is not used on a Model, because you can't "distinctly"
count rows in a collection (eg. `COUNT(*)` and `COUNT(DISTINCT *)` is the
same). Whether or not distinct counts are possible will be need to be exposed in
diff --git a/v3/rfcs/open-dd-boolean-expression-types.md b/v3/rfcs/open-dd-boolean-expression-types.md
index 606e0175fefd3..d6ca1902b9072 100644
--- a/v3/rfcs/open-dd-boolean-expression-types.md
+++ b/v3/rfcs/open-dd-boolean-expression-types.md
@@ -127,7 +127,7 @@ definition:
    typeName: Author_bool_exp
```

-There would be too much repition and it would be the inconsistent with the rest
+There would be too much repetition and it would be inconsistent with the rest
of OpenDD where you can only refer to explicitly defined types.

### Boolean expression types attached to kind: ObjectType