
Add conversions for mixed precision matmuls. #32

Merged (1 commit) on Jul 2, 2024

Conversation

ienkovich (Collaborator) commented:

This allows us to run most of the test_dot variations on CPU. They are quite slow, though, because they run in a single thread, so I reduced the input sizes for CPU.
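The conversions the PR title refers to can be illustrated outside Triton with a short NumPy sketch. This is a hedged illustration of the general technique (upcast mixed-precision operands to a common accumulator type before the matmul), not the PR's actual code; the helper name `mixed_precision_matmul` is hypothetical.

```python
import numpy as np

def mixed_precision_matmul(a, b, acc_dtype=np.float32):
    """Hypothetical helper (not from this PR): upcast both operands
    to a common accumulator dtype before the matmul, mirroring the
    kind of conversion a backend inserts for a mixed-precision dot."""
    return np.matmul(a.astype(acc_dtype), b.astype(acc_dtype))

# Mixed inputs: fp16 x fp32, accumulated in fp32.
a = np.ones((4, 8), dtype=np.float16)
b = np.ones((8, 4), dtype=np.float32)
c = mixed_precision_matmul(a, b)
assert c.dtype == np.float32
assert np.allclose(c, 8.0)  # each output element sums 8 ones
```

Accumulating in a wider type than the inputs is the standard way to keep precision loss bounded to the initial conversion rather than compounding it across the reduction.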

@ienkovich ienkovich requested a review from minjang June 25, 2024 14:47
@ienkovich ienkovich requested a review from ptillet as a code owner June 25, 2024 14:47
@minjang minjang requested a review from digantdesai June 25, 2024 18:17
@aregm aregm linked an issue Jun 26, 2024 that may be closed by this pull request
minjang (Collaborator) left a comment:


Sorry for the late review. Looks good.

@minjang minjang merged commit 4641be5 into triton-lang:main Jul 2, 2024
2 of 5 checks passed
Devjiu pushed a commit to Devjiu/triton-cpu that referenced this pull request Aug 13, 2024
int3 pushed a commit that referenced this pull request Aug 29, 2024
minjang pushed a commit that referenced this pull request Sep 22, 2024
minjang pushed a commit that referenced this pull request Oct 22, 2024
minjang pushed a commit that referenced this pull request Oct 24, 2024
int3 pushed a commit that referenced this pull request Dec 6, 2024
ienkovich added a commit that referenced this pull request Dec 6, 2024

Successfully merging this pull request may close the issue: Supporting mixed precisions in matmul.
2 participants