
Conversation

@polym (Contributor) commented Sep 6, 2023

Add git to full-cuda.Dockerfile and main-cuda.Dockerfile; it is required by scripts/build.sh.
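The fix amounts to adding git to the package-install step of each CUDA Dockerfile. A minimal sketch of what such a change looks like (the base image tag and the other package names are illustrative, not copied from the repository):

```dockerfile
# Hypothetical excerpt from a CUDA Dockerfile; base image and extra
# packages are assumptions for illustration.
FROM nvidia/cuda:11.7.1-devel-ubuntu22.04 AS build

# git is added so scripts/build-info.sh can read commit information
# (hash, branch) during the build.
RUN apt-get update && \
    apt-get install -y build-essential git
```

Without git in the image, the build-info step fails because it shells out to the git CLI inside the container.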

@polym (Contributor, Author) commented Sep 6, 2023

@canardleteer PTAL

@canardleteer (Contributor) left a comment


LGTM. It appears that scripts/build-info.sh was added sometime in May, adding a dependency on git, which is why these CUDA Dockerfiles broke.

This, and similar breakage, will continue to happen until there are CUDA workers available for this project's CI validation automation, since the CUDA builds aren't on the automated testing path.

Given that there's significant CUDA usage in llama.cpp now, it may be worth this project's effort to try to find a donation of such workers. I don't currently have the time to volunteer for this effort (but wish I did!).

  • Managed GitHub GPU runners are on the GitHub Roadmap.
  • But since that's incomplete, we'd probably need to find someone to offer hosting and to manage it until it becomes available through GitHub.

Thanks @polym!

@ggerganov (Member) commented

I have already set up an Azure-based CI with a lot of credits available (thanks to AI Grant).
There is currently 1 GPU runner on master: ggml-org / ggml-4-x86-cuda-v100

We can add more and improve the CI with more tests.

@ggerganov ggerganov merged commit a21baeb into ggml-org:master Sep 8, 2023
@canardleteer (Contributor) commented

> We can add more and improve the CI with more tests

Good to know @ggerganov! When I get a chance, I'll see if I can put something together that can work with this setup.


3 participants