
[Doc] Improve the doc for sink #268

Merged: 8 commits merged into StarRocks:main on Aug 24, 2023

Conversation

@banmoy (Collaborator) commented on Aug 16, 2023

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Which issues does this PR fix:

Fixes #

Problem Summary (Required):

  • Consolidate the structure of the doc
  • Make the examples clearer
  • Correct parameter descriptions
  • Add best practices

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This pr will affect users' behaviors
  • This pr needs user documentation (for new or modified features or behaviors)
  • I have added documentation for my new feature or new function

banmoy and others added 7 commits August 16, 2023 11:38
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
| Option | Required | Default value | Description |
|---|---|---|---|
| sink.properties.row_delimiter | No | \n | The row delimiter for CSV-formatted data. |
| sink.properties.column_separator | No | \t | The column separator for CSV-formatted data. |
| sink.properties.max_filter_ratio | No | 0 | The maximum error tolerance of the Stream Load, that is, the maximum percentage of data records that can be filtered out due to inadequate data quality. Valid values: 0 to 1. Default value: 0. See [Stream Load](https://docs.starrocks.io/en-us/latest/sql-reference/sql-statements/data-manipulation/STREAM%20LOAD) for details. |
| sink.parallelism | No | NONE | The parallelism of the connector. Only available for Flink SQL. If not set, the Flink planner decides the parallelism. |
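A minimal sketch (not part of this PR) of how the options in the table above might be set on a Flink SQL sink table registered through the Table API. The host addresses, database/table names, and credentials are made-up placeholders.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StarRocksSinkOptionsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register a sink table; the connection settings below are placeholders.
        tEnv.executeSql(
                "CREATE TABLE starrocks_sink (" +
                "  id BIGINT," +
                "  name STRING" +
                ") WITH (" +
                "  'connector' = 'starrocks'," +
                "  'jdbc-url' = 'jdbc:mysql://fe_host:9030'," +
                "  'load-url' = 'fe_host:8030'," +
                "  'database-name' = 'test_db'," +
                "  'table-name' = 'test_table'," +
                "  'username' = 'root'," +
                "  'password' = ''," +
                // CSV row and column delimiters from the table above
                "  'sink.properties.row_delimiter' = '\\n'," +
                "  'sink.properties.column_separator' = '\\t'," +
                // Allow no records to be filtered out during Stream Load
                "  'sink.properties.max_filter_ratio' = '0'," +
                // Use 2 sink subtasks instead of the planner-chosen parallelism
                "  'sink.parallelism' = '2'" +
                ")");
    }
}
```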
A contributor commented:

How is data ordering guaranteed when the sink runs with multiple parallel instances?

@banmoy (Collaborator, Author) replied:

This needs to be guaranteed by the user, for example by using keyBy in Flink so that all records with the same key are routed to the same parallel subtask.
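A minimal sketch of the keyBy approach described above (not part of this PR; Flink's built-in print() stands in for the StarRocks sink): partitioning by key routes all records with the same key to the same parallel subtask, so per-key order is preserved up to the sink.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyedOrderingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy events of the form "<key>:<action>"; the key part decides the partition.
        DataStream<String> events =
                env.fromElements("user_1:create", "user_1:update", "user_2:create");

        events
                // All records with the same key go to the same parallel subtask,
                // so the sink running on that subtask sees them in source order.
                .keyBy(value -> value.split(":")[0])
                // print() stands in for the StarRocks sink in this sketch;
                // a real job would attach the connector sink here instead.
                .print();

        env.execute("keyed-ordering-sketch");
    }
}
```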

@hellolilyliuyi merged commit 653fbe0 into StarRocks:main on Aug 24, 2023
banmoy added a commit to banmoy/starrocks-connector-for-apache-flink that referenced this pull request Aug 28, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy added a commit to banmoy/starrocks-connector-for-apache-flink that referenced this pull request Aug 28, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy added a commit to banmoy/starrocks-connector-for-apache-flink that referenced this pull request Aug 28, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy added a commit to banmoy/starrocks-connector-for-apache-flink that referenced this pull request Aug 28, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy added a commit that referenced this pull request Sep 11, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>
banmoy added a commit that referenced this pull request Sep 11, 2023
Signed-off-by: PengFei Li <lpengfei2016@gmail.com>