Conversation


serhii-nakon commented Jun 13, 2025

serhii-nakon force-pushed the remove_llama_kv_cache_view branch from fe246ca to 15cc7e2 on June 13, 2025 at 17:23

serhii-nakon commented Jun 13, 2025

For some reason it still crashes even with these fixes; please review it if possible. It may be caused by another fix in my local environment (related to the strftime_now template function).
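
For reference, the kind of strftime_now fix I mean looks roughly like the sketch below: some newer chat templates call a strftime_now() helper that a plain Jinja2 environment does not define, so rendering fails unless something like it is registered. The names and setup here are illustrative, not the exact change in my local environment:

```python
# Illustrative only: register a strftime_now() helper so chat templates that
# call it can render instead of raising an undefined-name error.
from datetime import datetime

from jinja2.sandbox import ImmutableSandboxedEnvironment


def strftime_now(fmt: str) -> str:
    # Format the current local time the way the chat template asks for it.
    return datetime.now().strftime(fmt)


env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
env.globals["strftime_now"] = strftime_now  # make the helper visible to templates

# Quick check with a template fragment that uses the helper.
print(env.from_string("Today is {{ strftime_now('%d %b %Y') }}.").render())
```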

serhii-nakon commented

OK, I reverted llama.cpp, and it starts working with the fixes in this PR. It looks like there are more compatibility issues between the new llama.cpp and llama-cpp-python.
