
chatglm2_lora_tuning run raises NotImplementedError #11

Open
xieyongshuai opened this issue Jun 28, 2023 · 7 comments

Comments

@xieyongshuai

Running chatglm2_lora fails: it reports that GLM2 does not implement this method.
model.enable_input_require_grads() raises NotImplementedError
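For context, enable_input_require_grads() in transformers registers a forward hook on model.get_input_embeddings(), so it fails when the loaded modeling_chatglm.py does not implement get_input_embeddings. A minimal fallback sketch, assuming the ChatGLM2 module layout transformer.embedding.word_embeddings (verify with print(model) before relying on it):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

def make_inputs_require_grad(module, inputs, output):
    # Same effect as the hook enable_input_require_grads() would install:
    # make the embedding output require grad, which is needed when fine-tuning
    # adapters (LoRA) while the base model weights stay frozen.
    output.requires_grad_(True)

try:
    model.enable_input_require_grads()  # works once get_input_embeddings() exists
except NotImplementedError:
    # Older modeling_chatglm.py: register the hook directly on the word embeddings.
    # The attribute path below is an assumption about the ChatGLM2 layout,
    # not something stated in this thread.
    model.transformer.embedding.word_embeddings.register_forward_hook(make_inputs_require_grad)
```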

@beyondguo
Owner

Please paste the full error message and the code snippet it points to.

@ZacharyWaseda

Line 119: model.enable_input_require_grads()

(screenshot attached)

@beyondguo
Owner

See THUDM/ChatGLM2-6B#51 (comment) for reference.

@xieyongshuai
Author

@ZacharyWaseda Did you solve it?

@ZacharyWaseda

@ZacharyWaseda Did you solve it?

Solved. Just replace those two files with the latest versions and it works.

@xieyongshuai
Author

@ZacharyWaseda
model.hf_device_map['transformer.output_layer'] = model.hf_device_map['transformer.embedding']
How should this line be changed? Any guidance would be appreciated.
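For context, that line only makes sense after loading with a device_map, and it points the output projection at the same device as the input embedding. A rough sketch under those assumptions; the exact hf_device_map keys depend on your modeling_chatglm.py version, so inspect them before editing:

```python
from accelerate import dispatch_model
from transformers import AutoModel

# Hypothetical multi-GPU load; the key names below are assumptions, check your own map.
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True, device_map="auto")
print(model.hf_device_map)  # e.g. {'transformer.embedding': 0, ..., 'transformer.output_layer': 1}

# Pin the output projection to the embedding's device (what the quoted line does),
# then re-dispatch so module placement actually follows the edited map.
if "transformer.output_layer" in model.hf_device_map:
    model.hf_device_map["transformer.output_layer"] = model.hf_device_map["transformer.embedding"]
    model = dispatch_model(model, device_map=model.hf_device_map)
```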

@hanyi-zou

I just checked the latest updates to the official repo on HF (https://huggingface.co/THUDM/chatglm2-6b/commit/189e5df1609cdbd1704e7d0204301ad4c7791f61) and saw that the get_input_embeddings issue has been fixed. Download the latest modeling_chatglm.py and config.json and it will run correctly, with no need to comment anything out. Quoted from THUDM/ChatGLM2-6B#51 (comment)
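A quick sanity check after swapping in the updated modeling_chatglm.py and config.json (or simply re-downloading the model):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
print(model.get_input_embeddings())  # provided by the updated modeling_chatglm.py
model.enable_input_require_grads()   # should no longer raise NotImplementedError
```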
