
Releases: hasura/ndc-mongodb

v1.8.4

23 Jul 21:44
6e8ba43

Fixed

  • Escape field names or aliases with invalid characters in field selections (#175)

v1.8.3

08 Jul 22:28
9dd4b16

Fixed

  • Filtering on field of related collection inside nested object that is not selected for output (#171)
  • Ensure correct ordering of sort criteria in MongoDB query plan (#172)

v1.8.2

16 Jun 02:48
9cbf7c5

Added

  • Enable support for the MONGODB-AWS authentication mechanism.
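MONGODB-AWS authenticates with AWS IAM credentials taken from the environment (for example, standard AWS
environment variables or an attached IAM role). As a sketch of how the mechanism is selected, using standard
MongoDB connection-string options and a placeholder host and database:

mongodb+srv://cluster0.example.net/mydb?authSource=%24external&authMechanism=MONGODB-AWS

The authSource for MONGODB-AWS must be $external, with the $ percent-encoded as %24 so the URI parses correctly.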

v1.8.1

05 Jun 03:44
f4f3b8e

Fixed

  • Include TLS root certificates in docker images to fix connections to otel collectors (#167)

Root certificates

Connections to MongoDB use the Rust MongoDB driver, which uses rustls, and rustls bundles its own root
certificate store, so there was no problem connecting to MongoDB over TLS. But the connector's OpenTelemetry
library uses openssl instead of rustls, and openssl requires a separately-installed certificate store. This
release includes that store in the docker images, which fixes connections to OpenTelemetry collectors over https.
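
For reference, the fix amounts to installing the system root-certificate store in the published images. On
a Debian-based image that step looks roughly like this (package manager and package name vary by base image,
so treat this as a sketch rather than the exact change):

$ apt-get update && apt-get install -y ca-certificates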

v1.8.0

28 Apr 16:57
cdf780a

Added

  • Add option to skip rows on response type mismatch (#162)

Option to skip rows on response type mismatch

When sending response data for a query, if we encounter a value that does not match the type declared in the
connector schema, the default behavior is to respond with an error, which prevents the user from getting any data.
This change adds an option to silently skip rows that contain type mismatches so that the user can get a partial
set of result data.

This can come up if, for example, you have database documents with a field that nearly always contains an int value,
but in a handful of cases that field contains a string. Introspection may determine that the type of the field is
int if the random document sampling does not happen to check one of the documents with a string. Then when you run
a query that does read one of those documents, the query fails because the connector refuses to return a value of an
unexpected type.
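
To make that concrete, imagine documents like these (the field name is illustrative), where sampling only
happened to see the first shape:

{ "_id": 1, "rating": 5 }
{ "_id": 2, "rating": "five" }

A query that selects rating fails as soon as it reads the second document.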

The new option, onResponseTypeMismatch, has two possible values: fail (the existing, default behavior), or skipRow
(the new, opt-in behavior). If you set the option to skipRow in the example case above, the connector will silently
exclude documents with unexpected string values from the response. This gives you access to the "good" data.
The behavior is opt-in because we don't want to exclude data without users being aware that it might happen.

The option is set in connector configuration in configuration.json. Here is an example configuration:

{
  "introspectionOptions": {
    "sampleSize": 1000,
    "noValidatorSchema": false,
    "allSchemaNullable": false
  },
  "serializationOptions": {
    "extendedJsonMode": "relaxed",
    "onResponseTypeMismatch": "skipRow"
  }
}

The skipRow behavior does not affect aggregations, or queries that do not request the field with the unexpected type.

v1.7.2

17 Apr 00:34
c9a11e4

Fixed

  • Database introspection no longer fails if any individual collection cannot be sampled (#160)

v1.7.1

12 Mar 15:33
fcc66ef

Added

  • Add watch command while initializing metadata (#157)

v1.7.0

10 Mar 21:42

Added

  • Add uuid scalar type (#148)

Changed

  • On database introspection newly-added collection fields will be added to existing schema configurations (#152)

Fixed

  • Update dependencies to get fixes for reported security vulnerabilities (#149)

Changes to database introspection

Previously, running introspection would not update existing schema definitions; it would only add definitions for
newly-added collections. This release changes that behavior to make conservative changes to existing definitions:

  • added fields, either top-level or nested, will be added to existing schema definitions
  • types for fields that are already configured will not be changed automatically
  • fields that appear to have been removed from collections will not be removed from configurations
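
As an illustration of the first two points, suppose re-introspection finds a new runtime field on an existing
collection. This schema fragment is hypothetical (the exact layout of your schema files may differ), but the idea
is that the new field is appended while already-configured fields keep their types:

{
  "fields": {
    "_id": { "type": { "scalar": "objectId" } },
    "title": { "type": { "scalar": "string" } },
    "runtime": { "type": { "scalar": "int" } }
  }
}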

We take such a conservative approach to schema configuration changes because we want to avoid accidental breaking API
changes, and because schema configuration can be edited by hand, and we don't want to accidentally reverse such
modifications.

If you want to make type changes to fields that are already configured, or if you want to remove fields from schema
configuration, you can either make those edits to schema configurations by hand, or you can delete schema files
before running introspection.

UUID scalar type

Previously, UUID values would show up in GraphQL as BinData. BinData is a generalized BSON type for binary data. It
doesn't provide a great interface for working with UUIDs because binary data must be given as a JSON object with
binary data in base64 encoding (while UUIDs are usually given in a specific hex-encoded string format), and there is
also a mandatory "subType" field. For example, a BinData value representing a UUID fetched via GraphQL looks like this:

{ "base64": "QKaT0MAKQl2vXFNeN/3+nA==", "subType":"04" }

With this change, UUID fields can use the new uuid type instead of binData. Values of type uuid are represented in
JSON as strings. The same value in a field of type uuid looks like this:

"40a693d0-c00a-425d-af5c-535e37fdfe9c"

This means that you can now, for example, filter using string representations for UUIDs:

query {
  posts(where: {id: {_eq: "40a693d0-c00a-425d-af5c-535e37fdfe9c"}}) {
    title
  }
}

Introspection has been updated so that database fields containing UUIDs will use the uuid type when setting up new
collections, or when re-introspecting after deleting the existing schema configuration. To migrate, you may delete
and re-introspect, or edit schema files to change occurrences of binData to uuid.
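
If you edit by hand, the change is a one-word substitution per field. A hypothetical before/after fragment (the
surrounding layout of your schema files may differ from this sketch):

Before:

"id": { "type": { "scalar": "binData" } }

After:

"id": { "type": { "scalar": "uuid" } }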

Security Fixes

Rust dependencies have been updated to get fixes for reported security advisories (#149).

v1.6.0

21 Jan 21:29
71c739c

Added

  • You can now aggregate values in nested object fields (#136)

Changed

  • Result types for aggregation operations other than count are now nullable (#136)

Fixed

  • Upgrade dependencies to get fix for RUSTSEC-2024-0421, a vulnerability in domain name comparisons (#138)
  • Aggregations on empty document sets now produce null instead of failing with an error (#136)
  • Handle collection validators with object fields that do not list properties (#140)

Fix for RUSTSEC-2024-0421 / CVE-2024-12224

Updates dependencies to upgrade the library, idna, to get a version that is not
affected by a vulnerability reported in [RUSTSEC-2024-0421][].

The vulnerability allows an attacker to craft a domain name that older versions
of idna interpret as identical to a legitimate domain name, but that is in fact
a different name. We do not expect that this impacts the MongoDB connector since
it uses the affected library exclusively to connect to MongoDB databases, and
database URLs are supplied by trusted administrators. But better to be safe than
sorry.

Validators with object fields that do not list properties

If a collection validator specifies a property of type object, but does not specify a list of nested properties for that object, then we infer the ExtendedJSON type for that property. For example, a collection created with this set of options would have the type ExtendedJSON for its reactions field:

{
  "validator": {
    "$jsonSchema": {
      "bsonType": "object",
      "properties": {
        "reactions": { "bsonType": "object" },
      }
    }
  }
}

If the validator specifies a map of nested properties, but that map is empty, then we interpret that as an empty object type.
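
For contrast, a validator property with an explicit but empty map of nested properties is interpreted as an empty object type rather than ExtendedJSON:

"reactions": { "bsonType": "object", "properties": {} }
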
[RUSTSEC-2024-0421]: https://rustsec.org/advisories/RUSTSEC-2024-0421

v1.5.0

06 Dec 21:33
b95da18

Added

  • Adds CLI command to manage native queries with automatic type inference (#131)

Changed

  • Updates MongoDB Rust driver from v2.8 to v3.1.0 (#124)

Fixed

  • The connector previously used Cloudflare's DNS resolver. Now it uses the locally-configured DNS resolver. (#125)
  • Fixed connector not picking up configuration changes when running locally using the ddn CLI workflow. (#133)

Managing native queries with the CLI

New in this release is a CLI plugin command to create, list, inspect, and delete
native queries. A big advantage of using the command versus writing native query
configurations by hand is that the command will type-check your query's
aggregation pipeline, and will write type declarations automatically.

This is a BETA feature - it is a work in progress, and will not work for all
cases. It is safe to experiment with since it is limited to managing native
query configuration files, and does not lock you into anything.

You can run the new command like this:

$ ddn connector plugin --connector app/connector/my_connector/connector.yaml -- native-query

To create a native query, create a file with a .json extension that contains
the aggregation pipeline for your query. For example, this pipeline in
title_word_frequency.json outputs frequency counts for words appearing in
movie titles in a given year:

[
  {
    "$match": {
      "year": "{{ year }}"
    }
  },
  { 
    "$replaceWith": {
      "title_words": { "$split": ["$title", " "] }
    }
  },
  { "$unwind": { "path": "$title_words" } },
  { 
    "$group": {
      "_id": "$title_words",
      "count": { "$count": {} }
    }
  }
]

In your supergraph directory, run a command like this, using the path to the pipeline file as an argument:

$ ddn connector plugin --connector app/connector/my_connector/connector.yaml -- native-query create title_word_frequency.json --collection movies

You should see output like this:

Wrote native query configuration to your-project/connector/native_queries/title_word_frequency.json

input collection: movies
representation: collection