
Conversation

nkoppel (Contributor) commented Jan 20, 2023

So far, this implements a more efficient sum_to kernel whose maximum write contention equals the number of blocks (groups of 1024 threads) running concurrently. The work within each block scales with log2(min(chunk_size, block_size)). Resolves #332, and depends on #380 for @ViliamVadocz's fix to atomicMaxf and atomicMinf.
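
For readers unfamiliar with the approach, here is a minimal sketch of the general technique described above, not the PR's actual kernel: each block reduces its chunk in shared memory with a log2-depth tree, then issues a single atomic write, so write contention is bounded by the number of concurrently running blocks. The kernel name, signature, and the 1024-thread, power-of-two block size are assumptions for illustration.

```cuda
// Sketch only: not the PR's kernel. Assumes blockDim.x is a power of two <= 1024.
__global__ void sum_to_sketch(const float *inp, float *out, size_t numel) {
    __shared__ float shared[1024];
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;

    // Each thread loads one element (0.0 past the end of the input).
    shared[threadIdx.x] = i < numel ? inp[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block: log2(blockDim.x) strided steps.
    for (unsigned stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            shared[threadIdx.x] += shared[threadIdx.x + stride];
        }
        __syncthreads();
    }

    // One atomic write per block instead of one per thread.
    if (threadIdx.x == 0) {
        atomicAdd(out, shared[0]);
    }
}
```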

@nkoppel nkoppel marked this pull request as draft January 20, 2023 17:52
@nkoppel nkoppel marked this pull request as ready for review January 20, 2023 19:23
nkoppel (Contributor, Author) commented Jan 20, 2023

I've now finished the min_to and max_to kernels.
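
For context on the atomicMaxf/atomicMinf dependency: CUDA has no built-in float atomicMax or atomicMin, and a common workaround is a compare-and-swap loop over the float's bit pattern. The sketch below is illustrative only, and is not necessarily the fix from #380; the function name is assumed.

```cuda
// Sketch of one common way to implement a float atomic max via atomicCAS.
// Illustrative only; not necessarily the implementation used in #380.
__device__ float atomicMaxf_sketch(float *addr, float value) {
    unsigned int *addr_as_uint = (unsigned int *)addr;
    unsigned int old = *addr_as_uint, assumed;
    do {
        assumed = old;
        float current = __uint_as_float(assumed);
        if (current >= value) break;  // already at least `value`
        old = atomicCAS(addr_as_uint, assumed, __float_as_uint(value));
    } while (assumed != old);         // retry if another thread intervened
    return __uint_as_float(old);
}
```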

@nkoppel nkoppel changed the title WIP Efficient cuda kernels for reductions Efficient cuda kernels for reductions Jan 20, 2023
coreylowman (Owner) left a comment

Looks great, thanks for this contribution! Just have some questions to make sure I'm following 🚀

coreylowman (Owner) left a comment

Awesome changes, thanks for the contribution!

@coreylowman coreylowman merged commit 1fedba0 into coreylowman:main Jan 23, 2023


Development

Successfully merging this pull request may close these issues.

Optimize cuda sum_to kernel for full reductions

