Torch version: 2.5.0, Torchvision version: 0.20.0
(llama-env) xxx@super:~/llama/LLaMA-Factory$ python3 -c "import torch; print(f'Torch: {torch.__version__}')"
Torch: 2.5.0a0+872d972e41.nv24.08
(llama-env) xxx@super:~/llama/LLaMA-Factory$ python3 -c "import torchvision; print(f'Torchvision: {torchvision.__version__}')"
Torchvision: 0.20.0a0+afc54f7
Running the `llamafactory-cli` command on a Jetson Orin fails with:
Traceback (most recent call last):
  File "/home/zcc/llama-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2302, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/home/zcc/llama-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2332, in _get_module
    raise e
  File "/home/zcc/llama-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2330, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/zcc/llama-env/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 22, in <module>
    from torch.distributed.fsdp import FullyShardedDataParallel
  File "/home/zcc/llama-env/lib/python3.10/site-packages/torch/distributed/fsdp/__init__.py", line 1, in <module>
    from ._flat_param import FlatParameter as FlatParameter
  File "/home/zcc/llama-env/lib/python3.10/site-packages/torch/distributed/fsdp/_flat_param.py", line 45, in <module>
    from torch.testing._internal.distributed.fake_pg import FakeProcessGroup
  File "/home/zcc/llama-env/lib/python3.10/site-packages/torch/testing/_internal/distributed/fake_pg.py", line 5, in <module>
    from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/zcc/llama-env/bin/llamafactory-cli", line 7, in <module>
    sys.exit(main())
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/cli.py", line 39, in main
    from . import launcher
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp  # use absolute import
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/train/tuner.py", line 36, in <module>
    from .sft import run_sft
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/train/sft/__init__.py", line 15, in <module>
    from .workflow import run_sft
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 28, in <module>
    from .trainer import CustomSeq2SeqTrainer
  File "/home/zcc/llama/LLaMA-Factory/src/llamafactory/train/sft/trainer.py", line 25, in <module>
    from transformers import Seq2SeqTrainer
  File "/home/zcc/llama-env/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 2305, in __getattr__
    raise ModuleNotFoundError(
ModuleNotFoundError: Could not import module 'Seq2SeqTrainer'. Are this object's requirements defined correctly?
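The missing `torch._C._distributed_c10d` extension suggests this Jetson wheel (`2.5.0a0+872d972e41.nv24.08`) was likely built without distributed (c10d) support, which `torch.distributed.fsdp` requires. A minimal sketch to confirm, assuming the same venv is active:

```python
import torch

print("Torch:", torch.__version__)
# Returns False when the wheel was compiled without distributed support,
# i.e. the torch._C._distributed_c10d extension is absent -- the same
# condition that triggers the ModuleNotFoundError above.
print("distributed available:", torch.distributed.is_available())
```

If this prints `False`, the fix is to install (or build) a torch wheel with distributed support enabled, not a change inside LLaMA-Factory.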