5 changes: 5 additions & 0 deletions examples/infill/README.md
@@ -36,6 +36,11 @@ The `infill` program offers a seamless way to interact with LLaMA models, allowi

### Example

Download a CodeLlama model:

Member

Clarify that we download CodeLlama because it has infill support.

We can improve this by using the latest CodeGemma models - I think they are smaller and also have infill support. It would also be a good exercise to verify that we support them.

Member Author

I've updated it, clarifying the usage of CodeLlama now 👍

Regarding CodeGemma, I tried out a smaller model, codegemma-2b-f16.gguf, which did not produce a good result. I'll try a larger model, codegemma-7b-it-f16.gguf, and see if it works (if I can run it). One thing I've noticed is that these models are gated, so I was not able to use the hf script to download them and had to download them manually.

Member Author

Actually, looking into this a little closer, it looks like CodeGemma uses different ids for its special tokens, whereas the ones currently specified in llama.cpp are:

```cpp
id special_prefix_id = 32007;
id special_middle_id = 32009;
id special_suffix_id = 32008;
id special_eot_id    = 32010;
```

For CodeGemma models perhaps this should be something like:

```cpp
id special_prefix_id = 67;
id special_middle_id = 68;
id special_suffix_id = 69;
id special_eot_id    = 70;
```

Should there perhaps be a check for the CodeGemma model name in llm_load_vocab to handle this, or what would be a good way to address this issue?
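(For illustration only: a name-based override along those lines might look roughly like the self-contained sketch below. The struct and helper names here are made up and this is not the actual llm_load_vocab code; as discussed further down, putting the ids into the GGUF metadata instead would avoid keying off the model name at all.)

```cpp
// Toy sketch of a name-based override for the infill special token ids.
// Not real llama.cpp code: the struct and helper names are illustrative only.
#include <cstdint>
#include <cstdio>
#include <string>

struct infill_token_ids {
    int32_t prefix_id;
    int32_t middle_id;
    int32_t suffix_id;
    int32_t eot_id;
};

// Pick token ids based on the model name (e.g. as found in the GGUF metadata).
static infill_token_ids pick_infill_ids(const std::string & model_name) {
    if (model_name.find("codegemma") != std::string::npos) {
        // The CodeGemma-style ids suggested above.
        return { 67, 68, 69, 70 };
    }
    // Default: the CodeLlama-style ids currently hardcoded in llama.cpp.
    return { 32007, 32009, 32008, 32010 };
}

int main() {
    const infill_token_ids ids = pick_infill_ids("codegemma-7b-f16");
    printf("prefix=%d middle=%d suffix=%d eot=%d\n",
           ids.prefix_id, ids.middle_id, ids.suffix_id, ids.eot_id);
    return 0;
}
```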

Member

Ah yes, this should be fixed - likely we need to add these special tokens to the GGUF metadata and start using them. Could you verify that using the correct token ids (by hardcoding them) produces correct results?

Member Author

> Could you verify that using the correct token ids (by hardcoding them) produces correct results?

The following is the output when using the special token ids from CodeGemma:

```console
./infill -t 10 -ngl 15 -m models/codegemma-7b-it-f16.gguf -c 4096 --temp 0 --repeat_penalty 1.0 -n 20 --in-prefix "def helloworld():\n    print(\"hell" --in-suffix "\n   print(\"goodbye world\")\n    "
...

#####  Infill mode  #####

<|fim_prefix|> def helloworld():\n    print("hell<|fim_suffix|> \n   print("goodbye world")\n    <|fim_middle|>
o world")\n   print("goodbye world")\n   print("hello world")\<|file_separator|>
```

This looks better I think, and more in line with what the CodeLlama model outputs.
I'd be happy to take a stab at adding these special tokens to the GGUF metadata, if that is alright?

Member
@ggerganov ggerganov Apr 13, 2024

Need to add these in the GGUF meta and then use inside llama.cpp.

Btw, the example that you posted seems kind of ok, but after finishing the hello world print I would have expected it to create a new function `def goodbyeworld():`. There might be some other issues still
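(For illustration only: the "use inside llama.cpp" half could look roughly like the standalone sketch below, which reads the ids from the GGUF metadata and falls back to the values hardcoded today. The tokenizer.ggml.*_token_id key names are assumptions modelled on the existing tokenizer.ggml.bos_token_id key, not something decided in this thread, and the gguf C API header name/location varies between ggml versions.)

```cpp
// Rough sketch: read per-model infill token ids from GGUF metadata,
// falling back to the CodeLlama-style defaults when a key is absent.
#include <cstdint>
#include <cstdio>

#include "gguf.h" // gguf C API from ggml; header name/location varies by version

// Read a token id from the GGUF metadata, or return a fallback if the key is missing.
static int32_t read_token_id(const gguf_context * ctx, const char * key, int32_t fallback) {
    const int64_t i = gguf_find_key(ctx, key);
    return i < 0 ? fallback : (int32_t) gguf_get_val_u32(ctx, i);
}

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    gguf_init_params params = { /*no_alloc =*/ true, /*ctx =*/ nullptr };
    gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (ctx == nullptr) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    // Hypothetical keys; fall back to the ids currently hardcoded in llama.cpp.
    const int32_t prefix_id = read_token_id(ctx, "tokenizer.ggml.prefix_token_id", 32007);
    const int32_t middle_id = read_token_id(ctx, "tokenizer.ggml.middle_token_id", 32009);
    const int32_t suffix_id = read_token_id(ctx, "tokenizer.ggml.suffix_token_id", 32008);
    const int32_t eot_id    = read_token_id(ctx, "tokenizer.ggml.eot_token_id",    32010);

    printf("prefix=%d middle=%d suffix=%d eot=%d\n", prefix_id, middle_id, suffix_id, eot_id);

    gguf_free(ctx);
    return 0;
}
```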

Member Author
@danbev danbev Apr 13, 2024

> Need to add these in the GGUF meta and then use inside llama.cpp.

Could you give me a few pointers as to where this metadata is added? Is it added to gguf.py, like in constants.py, similar to the other special tokens like BOS_ID, EOS_ID, UNK_ID, etc.?
I think I've figured this out and will dig in a little more and open a pull request.

> There might be some other issues still

I'll take a closer look at this as part of adding the metadata and see if I can figure out what is wrong.
I just realized that I downloaded the instruction-following (it) model, which is not the code completion model 😞

Using the code completion CodeGemma model I get the following output:

```console
$ ./infill -t 10 -ngl 0 -m models/codegemma-7b-f16.gguf -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 20 --in-prefix "def helloworld():\n    print(\"hell" --in-suffix "\n   print(\"goodbye world\")\n    "
...

#####  Infill mode  #####

<|fim_prefix|> def helloworld():\n    print("hell<|fim_suffix|> \n   print("goodbye world")\n    <|fim_middle|>o,world!")<|file_separator|>
```

Member Author

I've opened #6689 with a suggestion.

```console
scripts/hf.sh --repo TheBloke/CodeLlama-13B-GGUF --file codellama-13b.Q5_K_S.gguf --outdir models
```

```bash
./infill -t 10 -ngl 0 -m models/codellama-13b.Q5_K_S.gguf -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 20 --in-prefix "def helloworld():\n print(\"hell" --in-suffix "\n print(\"goodbye world\")\n "
```