---
title: Chat with DeepSeek Coder 1.3B
emoji: 💬
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.25.0 # Or your target Gradio version
python_version: "3.10" # Or "3.9", "3.11", etc.
app_file: app.py
pinned: false
# If you were using secrets (like HF_TOKEN for an API):
# secrets:
#   - HF_TOKEN
# If you needed persistent storage:
# persistent_storage:
#   - path: /data
#     mount_to: /home/user/app/data # Example mount path
---
# Chat with Quantized DeepSeek Coder 1.3B

This is a Gradio application that allows you to chat with the `deepseek-ai/deepseek-coder-1.3b-instruct` model.

The model is loaded directly within the Space. If a GPU is available, 8-bit quantization is attempted; otherwise, the model is loaded on CPU in its native precision.

**Note:** Model loading can take a few minutes, especially on the first run or after a rebuild.
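
The GPU-vs-CPU loading behavior described above could be sketched roughly as follows. This is a minimal illustration, not the Space's actual `app.py`; the `load_model` helper name is an assumption, and 8-bit loading additionally requires the `bitsandbytes` package.

```python
# Hypothetical sketch of the loading strategy: attempt 8-bit quantization
# when a CUDA GPU is present, otherwise fall back to plain CPU loading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/deepseek-coder-1.3b-instruct"

def load_model():
    """Return (tokenizer, model), quantized to 8-bit if a GPU is available."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        # 8-bit quantization via bitsandbytes; device_map places layers on GPU
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_ID, load_in_8bit=True, device_map="auto"
        )
    else:
        # CPU fallback in the model's native precision
        model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    return tokenizer, model
```

On the free CPU tier this falls through to the second branch, which is why the first load can take a few minutes while the ~1.3B-parameter weights download and initialize.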