Abstract: This study explores and implements a lightweight optimization strategy for large language models in resource-constrained environments. Specifically, it uses model distillation ...