
Kum Verna

Aug 18, 2024, 10:51:41 PM
to biahumphboumis

Hi,
First and third questions: it's less about disk space and more about RAM (the two amounts are usually about the same). MistralAI states that Mixtral requires 100 GB of VRAM (GPU memory), for example. However, this can be reduced by using quantized models instead of the original (more on that in a second).
Second: to run the original model you will need 2 GPUs, as the amount of VRAM required is insane (it can be reduced with quantization).
Fourth: I don't think so; you can however fine-tune it, or use an already fine-tuned version, for a specific task you want it to do.
For your last questions: MistralAI itself has an API available on their website that you can use, and many other websites also have endpoints you can use!
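As a rough sanity check on those memory numbers, you can estimate how much VRAM a model needs from its parameter count times bytes per parameter, plus some headroom for activations and the KV cache. A minimal sketch; the ~46.7B total parameter figure for Mixtral 8x7B and the 20% overhead factor are my assumptions:

```python
def estimate_vram_gb(n_params_billion: float,
                     bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights * bytes each, plus ~20% headroom
    for activations and KV cache (assumed overhead factor)."""
    return n_params_billion * bytes_per_param * overhead

# Mixtral 8x7B, ~46.7B total parameters (assumed figure):
print(estimate_vram_gb(46.7, 2.0))   # fp16: ~112 GB, hence the 2-GPU requirement
print(estimate_vram_gb(46.7, 0.5))   # ~4-bit quantization: ~28 GB
```

That back-of-the-envelope number lines up with the ~100 GB MistralAI quotes for the full-precision model, and shows why 4-bit quantization brings it within reach of a single large consumer or workstation GPU.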




Now, about quantization: there is a well-respected user on Hugging Face named TheBloke who shares quantized versions of popular models.
There, for example (the Mixtral-8x7B-Instruct-v0.1-GGUF repo), you'll find GGUF versions that you can actually run on a powerful CPU, some of them with far less RAM (the smallest one requires 18.14 GB, compared to the ~100 of the original...), so it really depends on what you need and your budget, honestly!
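Going the other way, you can infer roughly how aggressively a GGUF file is quantized from its size on disk. A quick sketch; the ~46.7B parameter count is an assumption, and this ignores the small amount of GGUF metadata in the file:

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Approximate bits stored per weight, ignoring GGUF metadata."""
    total_bits = file_size_gb * 8e9            # file size in bits (decimal GB)
    return total_bits / (n_params_billion * 1e9)

# The smallest Mixtral GGUF mentioned above, at 18.14 GB:
print(bits_per_weight(18.14, 46.7))   # ~3.1 bits/weight, i.e. a 2-3 bit quant
```

So that 18.14 GB file is storing each weight in about 3 bits on average, versus 16 bits for the original fp16 weights, which is where the ~5x size reduction comes from.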

I was not expecting open-source LLMs to reach such a level so fast, because 9 months ago OpenAI claimed open-source models with the power of GPT-3.5 would only be available in 5 years... (that MoE model is on a totally new level). That is insane, and if we consider that fine-tuned versions will come soon... crazy.
Plus it works fast on my RTX 3090: ~40 tokens/s, with a 32k context.
I wonder if a new Llama 3 will also be an MoE model ;D


So this post is just to gently nudge any writers of those mods who may be in this community to consider updating their mods. Pretty please: there are quite a few nice ones there that I would love to continue to use.

Of course, I am not certain that all of them actually work yet (but I know quite a few of them do, as I have been using them all along). I would love ML-2-compatible versions, but until they come along, sticking with the old mod loader is the only viable option.

I personally think this game is badly bugged and the difficulty spikes are insane. Some of the edits I did in the JSON files have a far bigger impact on making the game easier than the relatively minor changes you pointed out (chuckle).
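For anyone curious about that kind of JSON edit: the idea is just to load the file, scale whatever numeric difficulty field you care about, and write it back. A hypothetical sketch; the file name and the `enemy_damage_multiplier` key are made up, so check your game's actual data files:

```python
import json

def scale_difficulty(path: str, key: str, factor: float) -> None:
    """Multiply a numeric field in a game/mod JSON file by `factor`.
    Back up the original file before running anything like this."""
    with open(path) as f:
        data = json.load(f)
    data[key] = data[key] * factor
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Hypothetical usage, halving incoming damage:
# scale_difficulty("enemies.json", "enemy_damage_multiplier", 0.5)
```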
