     - filename: Symiotic-14B.i1-Q4_K_M.gguf
       sha256: 8f5d4ef4751877fb8982308f153a9bd2b72289eda83b18dd591c3c04ba91a407
       uri: huggingface://mradermacher/Symiotic-14B-i1-GGUF/Symiotic-14B.i1-Q4_K_M.gguf
+- !!merge <<: *qwen3
+  name: "gryphe_pantheon-proto-rp-1.8-30b-a3b"
+  icon: https://huggingface.co/Gryphe/Pantheon-Proto-RP-1.8-30B-A3B/resolve/main/Pantheon.png
+  urls:
+    - https://huggingface.co/Gryphe/Pantheon-Proto-RP-1.8-30B-A3B
+    - https://huggingface.co/bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF
+  description: |
+    Note: This model is a Qwen 30B MoE prototype and can be considered a sidegrade from my Small release some time ago. It did not receive extensive testing beyond a couple of benchmarks to determine its sanity, so feel free to let me know what you think of it!
+
+    Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
+
+    Pantheon's purpose is two-fold: these personalities enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
+
+    GGUF quants are available here.
+
+    Your user feedback is critical to me, so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
+
+    Model details
+
+    Ever since Qwen 3 was released I've been trying to get MoE finetuning to work. After countless frustrating days and much code hacking, I finally got a full finetune to complete with reasonable loss values.
+
+    I picked the base model for this since I didn't feel like trying to fight a reasoning model's training. Maybe someday I'll make a model which uses thinking tags for the character's thoughts or something.
+
+    This time the recipe focused on combining as many data sources as I possibly could, featuring synthetic data from Sonnet 3.5 + 3.7, ChatGPT 4o and Deepseek. These then went through an extensive rewriting pipeline to eliminate common AI clichés, with the hopeful intent of providing you with a fresh experience.
+  overrides:
+    parameters:
+      model: Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf
+  files:
+    - filename: Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf
+      sha256: b72fe703a992fba9595c24b96737a2b5199da89a1a3870b8bd57746dc3c123ae
+      uri: huggingface://bartowski/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-GGUF/Gryphe_Pantheon-Proto-RP-1.8-30B-A3B-Q4_K_M.gguf
 - &gemma3
   url: "github:mudler/LocalAI/gallery/gemma.yaml@master"
   name: "gemma-3-27b-it"